Development discussions | https://github.com/uyuni-project/uyuni | We are participating in Hacktoberfest for the second year: check the mailing list for details
uyuni:~/sumaform # systemctl status dnsmasq.service
× dnsmasq.service - DNS caching server.
Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2022-11-30 08:19:58 UTC; 31min ago
Process: 15951 ExecStartPre=/usr/sbin/dnsmasq --test (code=exited, status=0/SUCCESS)
Process: 16011 ExecStart=/usr/sbin/dnsmasq --log-async --enable-dbus --keep-in-foreground (code=exited, status=2)
Main PID: 16011 (code=exited, status=2)
Nov 30 08:19:57 uyuni.asia-south2-a.c.our-bruin-361409.internal systemd[1]: Starting DNS caching server....
Nov 30 08:19:57 uyuni.asia-south2-a.c.our-bruin-361409.internal dnsmasq[15951]: dnsmasq: syntax check OK.
Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal dnsmasq[16011]: dnsmasq: failed to create listening socket for port 53: Address already in use
Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal dnsmasq[16011]: failed to create listening socket for port 53: Address already in use
Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal dnsmasq[16011]: FAILED to start up
Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal systemd[1]: dnsmasq.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal systemd[1]: dnsmasq.service: Failed with result 'exit-code'.
Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal systemd[1]: Failed to start DNS caching server..
uyuni:~/sumaform #
uyuni:~/sumaform #
uyuni:~/sumaform # systemctl status NetworkManager
● NetworkManager.service - Network Manager
Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; disabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/NetworkManager.service.d
└─NetworkManager-ovs.conf
Active: active (running) since Wed 2022-11-30 08:19:58 UTC; 31min ago
Docs: man:NetworkManager(8)
Main PID: 16197 (NetworkManager)
Tasks: 4 (limit: 4915)
CGroup: /system.slice/NetworkManager.service
├─ 15946 /usr/sbin/dnsmasq --no-resolv --keep-in-foreground --no-hosts --bind-interfaces --pid-file=/run/NetworkManager/dnsmasq.pid --listen-address=127.0>
└─ 16197 /usr/sbin/NetworkManager --no-daemon
Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info> [1669797612.4427] device (vnet5): Activation: starting connection 'vnet5>
Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info> [1669797612.4430] device (vnet5): state change: disconnected -> prepare >
Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info> [1669797612.4433] device (vnet5): state change: prepare -> config (reaso>
Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info> [1669797612.4435] device (vnet5): state change: config -> ip-config (rea>
Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info> [1669797612.4436] device (br0): bridge port vnet5 was attached
Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info> [1669797612.4437] device (vnet5): Activation: connection 'vnet5' enslave>
Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info> [1669797612.4438] device (vnet5): state change: ip-config -> ip-check (r>
Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info> [1669797612.4515] device (vnet5): state change: ip-check -> secondaries >
Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info> [1669797612.4521] device (vnet5): state change: secondaries -> activated>
Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info> [1669797612.4534] device (vnet5): Activation: successful, device activat>
uyuni:~/sumaform #
uyuni:~/sumaform # netstat -plunt | grep -i dns
tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN 15946/dnsmasq
tcp 0 0 192.168.122.1:53 0.0.0.0:* LISTEN 1595/dnsmasq
udp 0 0 127.0.0.1:53 0.0.0.0:* 15946/dnsmasq
udp 0 0 192.168.122.1:53 0.0.0.0:* 1595/dnsmasq
udp 0 0 0.0.0.0:67 0.0.0.0:* 1595/dnsmasq
uyuni:~/sumaform #
uyuni:~/sumaform # ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
link/ether 42:01:0a:be:00:04 brd ff:ff:ff:ff:ff:ff
altname enp0s4
altname ens4
inet 10.190.0.4/32 scope global dynamic eth0
valid_lft 2662sec preferred_lft 2662sec
inet6 fe80::e659:d850:f8ad:bf2f/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP group default qlen 1000
link/ether 42:01:0a:00:00:02 brd ff:ff:ff:ff:ff:ff
altname enp0s5
altname ens5
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 42:01:0a:00:00:02 brd ff:ff:ff:ff:ff:ff
inet6 fe80::4001:aff:fe00:2/64 scope link
valid_lft forever preferred_lft forever
5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:60:ce:c2 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global noprefixroute virbr0
valid_lft forever preferred_lft forever
14: vnet8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN group default qlen 1000
link/ether fe:54:00:52:4d:2b brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe52:4d2b/64 scope link
valid_lft forever preferred_lft forever
15: vnet9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN group default qlen 1000
link/ether fe:54:00:7e:ef:99 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe7e:ef99/64 scope link
valid_lft forever preferred_lft forever
uyuni:~/sumaform #
main.tf still has the line bridge = "br0", so your VMs don't use the libvirt network you have defined. You need to remove that line; the default is to use the libvirt default network. Of course you will need to rerun terraform apply, and that may destroy what you already have.
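A minimal sketch of that fix from the shell, assuming the offending line sits in your top-level main.tf (deleting it in an editor works just as well):

grep -n 'bridge' main.tf                                   # locate the line, e.g. bridge = "br0"
sed -i '/bridge[[:space:]]*=[[:space:]]*"br0"/d' main.tf   # drop it
terraform apply                                            # re-provision; may destroy and recreate existing VMs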
╷
│ Error: remote-exec provisioner error
│
│ with module.minion.module.minion.module.host.null_resource.provisioning[0],
│ on backend_modules/libvirt/host/main.tf line 251, in resource "null_resource" "provisioning":
│ 251: provisioner "remote-exec" {
│
│ error executing "/tmp/terraform_1286222172.sh": Process exited with status 1
╵
uyuni:~/sumaform
The following four messages appear as errors while running terraform apply:
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Changes:
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ID: avahi_pkg
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Function: pkg.latest
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Result: False
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Comment: An exception occurred in this state: Traceback (most recent call last):
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ID: avahi_change_domain
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Function: file.replace
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Name: /etc/avahi/avahi-daemon.conf
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Result: False
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Comment: /etc/avahi/avahi-daemon.conf: file not found
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Started: 09:36:34.797556
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Duration: 4.735 ms
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Changes:
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ID: avahi_restrict_interfaces
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Function: file.replace
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Name: /etc/avahi/avahi-daemon.conf
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Result: False
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Comment: /etc/avahi/avahi-daemon.conf: file not found
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Started: 09:36:34.802938
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Duration: 3.768 ms
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Changes:
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ID: avahi_enable_service
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Function: service.running
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Name: avahi-daemon
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Result: False
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Comment: The named service avahi-daemon is not available
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): S
╷
│ Error: remote-exec provisioner error
│
│ with module.minion.module.minion.module.host.null_resource.provisioning[0],
│ on backend_modules/libvirt/host/main.tf line 251, in resource "null_resource" "provisioning":
│ 251: provisioner "remote-exec" {
│
│ error executing "/tmp/terraform_1041118824.sh": Process exited with status 1
╵
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Changes:
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ID: avahi_pkg
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Function: pkg.latest
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Result: False
module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Comment: An exception occurred in this state: Traceback (most recent call last):
uyuni:~/sumaform # virsh list
Id Name State
------------------------
11 minion running
12 server running
uyuni:~/sumaform #
server:~ # spacewalk-repo-sync --help
Usage: spacewalk-repo-sync [options]
Options:
-h, --help show this help message and exit
-l, --list List the custom channels with the associated
repositories.
-s, --show-packages List all packages in a specified channel.
-u URL, --url=URL The url of the repository. Can be used multiple times.
-c CHANNEL_LABEL, --channel=CHANNEL_LABEL
The label of the channel to sync packages to. Can be
used multiple times.
-p PARENT_LABEL, --parent-channel=PARENT_LABEL
Synchronize the parent channel and all its child
channels.
-d, --dry-run Test run. No sync takes place.
--latest Sync latest packages only. Use carefully - you might
need to fix some dependencies on your own.
-g CONFIG, --config=CONFIG
Configuration file
Hello there.
Random thoughts here: you can use a few flags with the ip command to output just the IP addresses or the link status, for example:
ip -br -c a
ip -br -c -4 a
ip -br -c -4 l
ip -br -4 r
Set use_avahi = false in the base module (the failing states above are all avahi-related).
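A minimal sketch, assuming the usual sumaform layout where the base module is declared in main.tf:

grep -n 'use_avahi' main.tf   # check whether the option is already set
vi main.tf                    # add use_avahi = false inside the module "base" { ... } block
terraform apply               # re-run provisioning so the avahi states are skipped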
In the Software > Manage menu there is a Sync page and button to trigger the synchronization; it will run spacewalk-repo-sync under the hood. To reproduce the issue, just run spacewalk-repo-sync --help while the repo is still synchronizing.
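A hypothetical way to reproduce that from the CLI instead of the Sync page (the channel label below is just an example):

spacewalk-repo-sync --channel=my-child-channel &   # start a long-running sync in the background
spacewalk-repo-sync --help                         # run this while the sync above is still in progress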
Check /var/log/rhn/reposync* on the server VM, and run spacewalk-service status on the server to figure out if everything is properly up and running.
uyuni:~ # ping server.suse.lab
PING server.suse.lab (192.168.122.238) 56(84) bytes of data.
64 bytes from server.suse.lab (192.168.122.238): icmp_seq=1 ttl=64 time=0.356 ms
64 bytes from server.suse.lab (192.168.122.238): icmp_seq=2 ttl=64 time=0.291 ms
^C
--- server.suse.lab ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1014ms
rtt min/avg/max/mdev = 0.291/0.323/0.356/0.032 ms
uyuni:~ #
uyuni:~ # ssh root@192.168.122.238
Last login: Tue Dec 13 10:18:49 2022 from 192.168.122.1
server:~ #
server:~ # spacewalk-service status
● uyuni-update-config.service - Uyuni update config
Loaded: loaded (/usr/lib/systemd/system/uyuni-update-config.service; static)
Active: active (exited) since Tue 2022-12-13 10:18:05 CET; 16min ago
Process: 1160 ExecStart=/usr/sbin/uyuni-update-config (code=exited, status=0/SUCCESS)
Main PID: 1160 (code=exited, status=0/SUCCESS)
● uyuni-check-database.service - Uyuni check database
Loaded: loaded (/usr/lib/systemd/system/uyuni-check-database.service; static)
Active: active (exited) since Tue 2022-12-13 10:18:14 CET; 16min ago
Process: 1300 ExecStart=/usr/sbin/spacewalk-startup-helper check-database (code=exited, status=0/SUCCESS)
Main PID: 1300 (code=exited, status=0/SUCCESS)
● tomcat.service - Apache Tomcat Web Application Container
Loaded: loaded (/usr/lib/systemd/system/tomcat.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/tomcat.service.d
└─override.conf
Active: active (running) since Tue 2022-12-13 10:18:14 CET; 16min ago
Main PID: 1432 (java)
Tasks: 120 (limit: 576)
CGroup: /system.slice/tomcat.service
└─ 1432 /usr/lib64/jvm/jre/bin/java -Xdebug -Xrunjdwp:transport=dt_socket,address=server.suse.lab:8000,server=y,suspend=n -ea -Xms256m -Xmx1G -Djava.awt.h…
● spacewalk-wait-for-tomcat.service - Spacewalk wait for tomcat
Loaded: loaded (/usr/lib/systemd/system/spacewalk-wait-for-tomcat.service; static)
Active: active (exited) since Tue 2022-12-13 10:18:54 CET; 15min ago
Process: 1433 ExecStart=/usr/sbin/spacewalk-startup-helper wait-for-tomcat (code=exited, status=0/SUCCESS)
Main PID: 1433 (code=exited, status=0/SUCCESS)
● salt-master.service - The Salt Master Server
Loaded: loaded (/usr/lib/systemd/system/salt-master.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/salt-master.service.d
└─override.conf
Active: active (running) since Tue 2022-12-13 10:18:14 CET; 16min ago
Docs: man:salt-master(1)
file:///usr/share/doc/salt/html/contents.html
https://docs.saltproject.io/en/latest/contents.html
Main PID: 1430 (salt-master)
Tasks: 65
CGroup: /system.slice/salt-master.service
├─ 1430 /usr/bin/python3 /usr/bin/salt-master
├─ 1494 /usr/bin/python3 /usr/bin/salt-master
├─ 1558 /usr/bin/python3 /usr/bin/salt-master
├─ 1562 /usr/bin/python3 /usr/bin/salt-master
├─ 1564 /usr/bin/python3 /usr/bin/salt-master
├─ 1566 /usr/bin/python3 /usr/bin/salt-master
├─ 1567 /usr/bin/python3 /usr/bin/salt-master
├─ 1570 /usr/bin/python3 /usr/bin/salt-master
├─ 1571 /usr/bin/python3 /usr/bin/salt-master
├─ 1576 /usr/bin/python3 /usr/bin/salt-master
├─ 1582 /usr/bin/python3 /usr/bin/salt-master
├─ 1583 /usr/bin/python3 /usr/bin/salt-master
├─ 1584 /usr/bin/python3 /usr/bin/salt-master
├─ 1586 /usr/bin/python3 /usr/bin/salt-master
├─ 1587 /usr/bin/python3 /usr/bin/salt-master
├─ 1590 /usr/bin/python3 /usr/bin/salt-master
├─ 1591 /usr/bin/python3 /usr/bin/salt-master
└─ 5996 /usr/bin/python3 /usr/bin/salt-master
● salt-api.service - The Salt API
Loaded: loaded (/usr/lib/systemd/system/salt-api.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/salt-api.service.d
└─override.conf
Active: active (running) since Tue 2022-12-13 10:18:14 CET; 16min ago
Docs: man:salt-api(1)
file:///usr/share/doc/salt/html/contents.html
https://docs.saltproject.io/en/latest/contents.html
Main PID: 1429 (salt-api)
Tasks: 104 (limit: 4672)
CGroup: /system.slice/salt-api.service
├─ 1429 /usr/bin/python3 /usr/bin/salt-api
└─ 1585 /usr/bin/python3 /usr/bin/salt-api
● spacewalk-wait-for-salt.service - Make sure that salt is started before httpd
server:~ # systemctl status apache
○ apache2.service - The Apache Webserver
Loaded: loaded (/usr/lib/systemd/system/apache2.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/apache2.service.d
└─override.conf
Active: inactive (dead)
Dec 15 12:40:54 server systemd[1]: Dependency failed for The Apache Webserver.
Dec 15 12:40:54 server systemd[1]: apache2.service: Job apache2.service/start failed with result 'dependency'.
server:~ #
server:~ # systemctl status postgresql.service
× postgresql.service - PostgreSQL database server
Loaded: loaded (/usr/lib/systemd/system/postgresql.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2022-12-15 12:46:03 CET; 5min ago
Process: 1628 ExecStart=/usr/share/postgresql/postgresql-script start (code=exited, status=1/FAILURE)
Dec 15 12:46:03 server systemd[1]: Starting PostgreSQL database server...
Dec 15 12:46:03 server postgresql-script[1635]: 2022-12-15 12:46:03.790 CET [1635] LOG: redirecting log output to logging collector process
Dec 15 12:46:03 server postgresql-script[1635]: 2022-12-15 12:46:03.790 CET [1635] HINT: Future log output will appear in directory "log".
Dec 15 12:46:03 server postgresql-script[1633]: pg_ctl: could not start server
Dec 15 12:46:03 server postgresql-script[1633]: Examine the log output.
Dec 15 12:46:03 server systemd[1]: postgresql.service: Control process exited, code=exited, status=1/FAILURE
Dec 15 12:46:03 server systemd[1]: postgresql.service: Failed with result 'exit-code'.
Dec 15 12:46:03 server systemd[1]: Failed to start PostgreSQL database server.
server:~ #
server:/var/lib/pgsql/data/log # cat postgresql-2022-12-15_124603.log
2022-12-15 12:46:03.791 CET [1635] LOG: starting PostgreSQL 14.5 on x86_64-suse-linux-gnu, compiled by gcc (SUSE Linux) 7.5.0, 64-bit
2022-12-15 12:46:03.791 CET [1635] LOG: listening on IPv4 address "0.0.0.0", port 5432
2022-12-15 12:46:03.791 CET [1635] LOG: listening on IPv6 address "::", port 5432
2022-12-15 12:46:03.792 CET [1635] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-12-15 12:46:03.792 CET [1635] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2022-12-15 12:46:03.795 CET [1637] LOG: database system was interrupted; last known up at 2022-12-13 10:33:15 CET
2022-12-15 12:46:03.795 CET [1637] PANIC: could not read file "pg_logical/replorigin_checkpoint": read 0 of 4
2022-12-15 12:46:03.797 CET [1635] LOG: startup process (PID 1637) was terminated by signal 6: Aborted
2022-12-15 12:46:03.797 CET [1635] LOG: aborting startup due to startup process failure
2022-12-15 12:46:03.803 CET [1635] LOG: database system is shut down
server:/var/lib/pgsql/data/log #