    amentee
    @amentee
    I even tried to stop and start, but it fails even then.
    uyuni:~/sumaform # netstat -plunt | grep -i dns
    tcp        0      0 127.0.0.1:53            0.0.0.0:*               LISTEN      15946/dnsmasq
    tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN      1595/dnsmasq
    udp        0      0 127.0.0.1:53            0.0.0.0:*                           15946/dnsmasq
    udp        0      0 192.168.122.1:53        0.0.0.0:*                           1595/dnsmasq
    udp        0      0 0.0.0.0:67              0.0.0.0:*                           1595/dnsmasq
    uyuni:~/sumaform #
    any ideas how to fix this?
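    One quick way to tell the two dnsmasq listeners apart is to check which process and which libvirt network each PID belongs to. A minimal diagnostic sketch (PIDs taken from the netstat output above; the libvirt paths are the usual defaults, so treat them as assumptions):
    ```
    # Which command line owns each dnsmasq listener? (PIDs from the netstat output above)
    ps -o pid,args -p 15946,1595
    # The libvirt-managed instance normally reads a per-network config under this path
    grep -l 192.168.122 /var/lib/libvirt/dnsmasq/*.conf
    # State of the libvirt networks themselves
    virsh net-list --all
    ```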
    amentee
    @amentee
    below is the output of ip addr show
    uyuni:~/sumaform # ip addr show
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 42:01:0a:be:00:04 brd ff:ff:ff:ff:ff:ff
        altname enp0s4
        altname ens4
        inet 10.190.0.4/32 scope global dynamic eth0
           valid_lft 2662sec preferred_lft 2662sec
        inet6 fe80::e659:d850:f8ad:bf2f/64 scope link noprefixroute
           valid_lft forever preferred_lft forever
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP group default qlen 1000
        link/ether 42:01:0a:00:00:02 brd ff:ff:ff:ff:ff:ff
        altname enp0s5
        altname ens5
    4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 42:01:0a:00:00:02 brd ff:ff:ff:ff:ff:ff
        inet6 fe80::4001:aff:fe00:2/64 scope link
           valid_lft forever preferred_lft forever
    5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
        link/ether 52:54:00:60:ce:c2 brd ff:ff:ff:ff:ff:ff
        inet 192.168.122.1/24 brd 192.168.122.255 scope global noprefixroute virbr0
           valid_lft forever preferred_lft forever
    14: vnet8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN group default qlen 1000
        link/ether fe:54:00:52:4d:2b brd ff:ff:ff:ff:ff:ff
        inet6 fe80::fc54:ff:fe52:4d2b/64 scope link
           valid_lft forever preferred_lft forever
    15: vnet9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN group default qlen 1000
        link/ether fe:54:00:7e:ef:99 brd ff:ff:ff:ff:ff:ff
        inet6 fe80::fc54:ff:fe7e:ef99/64 scope link
           valid_lft forever preferred_lft forever
    uyuni:~/sumaform #
    virbr0 is showing as down
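    A NO-CARRIER/DOWN virbr0 is normal while no guest interface is attached to it. To double-check the libvirt side, a short sketch (the network name "default" is an assumption):
    ```
    # Are the libvirt networks defined, active and set to autostart?
    virsh net-list --all
    # Inspect the assumed "default" network: bridge name, DHCP range, etc.
    virsh net-info default
    virsh net-dumpxml default
    ```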
    amentee
    @amentee
    @cbosdonnat:matrix.org .. tried everything but no luck. Still the same error as above.
    Cédric Bosdonnat
    @cbosdonnat:matrix.org
    Your main.tf still has the line bridge = "br0", so your VMs don't use the libvirt network you have defined. You need to remove that line; the default is to use the libvirt default network. Of course you will need to rerun terraform apply, and that may destroy what you already have.
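    A minimal sketch of that change, run from the sumaform directory (the exact main.tf layout depends on your setup, so review the plan before applying):
    ```
    # Locate the offending line in main.tf, then delete or comment it out in your editor
    grep -n 'bridge' main.tf
    # Re-apply; terraform may destroy and recreate the existing VMs
    terraform plan
    terraform apply
    ```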
    amentee
    @amentee
    @cbosdo .. thanks for the help. It worked after removing the br0 line from main.tf. Now it's throwing avahi error messages. Do I need to install the avahi package?
    ╷
    │ Error: remote-exec provisioner error
    │
    │   with module.minion.module.minion.module.host.null_resource.provisioning[0],
    │   on backend_modules/libvirt/host/main.tf line 251, in resource "null_resource" "provisioning":
    │  251:   provisioner "remote-exec" {
    │
    │ error executing "/tmp/terraform_1286222172.sh": Process exited with status 1
    ╵
    uyuni:~/sumaform

    The four messages below appear as errors while running terraform apply:

    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Changes:
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):           ID: avahi_pkg
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):     Function: pkg.latest
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):       Result: False
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Comment: An exception occurred in this state: Traceback (most recent call last):
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):           ID: avahi_change_domain
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):     Function: file.replace
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):         Name: /etc/avahi/avahi-daemon.conf
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):       Result: False
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Comment: /etc/avahi/avahi-daemon.conf: file not found
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Started: 09:36:34.797556
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):     Duration: 4.735 ms
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Changes:
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):           ID: avahi_restrict_interfaces
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):     Function: file.replace
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):         Name: /etc/avahi/avahi-daemon.conf
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):       Result: False
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Comment: /etc/avahi/avahi-daemon.conf: file not found
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Started: 09:36:34.802938
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):     Duration: 3.768 ms
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Changes:
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------

    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ID: avahi_enable_service
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Function: service.running
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Name: avahi-daemon
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Result: False
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Comment: The named service avahi-daemon is not available
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): S
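    For the avahi question above: if the minion image does not ship avahi, one option is to install it by hand and re-run the provisioning. A sketch, assuming the SUSE-based minion from this chat and the sumaform default root password mentioned later (root/linux); the host name minion.suse.lab is an assumption:
    ```
    # Install avahi on the minion VM, then re-run provisioning from the sumaform directory
    ssh root@minion.suse.lab 'zypper --non-interactive install avahi'
    terraform apply
    ```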
    amentee
    @amentee
    After installing the avahi package on the minion, I re-ran terraform apply.
    Now the errors are reduced to one:
    ╷
    │ Error: remote-exec provisioner error
    │
    │   with module.minion.module.minion.module.host.null_resource.provisioning[0],
    │   on backend_modules/libvirt/host/main.tf line 251, in resource "null_resource" "provisioning":
    │  251:   provisioner "remote-exec" {
    │
    │ error executing "/tmp/terraform_1041118824.sh": Process exited with status 1
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Changes:
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):           ID: avahi_pkg
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):     Function: pkg.latest
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):       Result: False
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Comment: An exception occurred in this state: Traceback (most recent call last):
    uyuni:~/sumaform # virsh list
     Id   Name     State
    ------------------------
     11   minion   running
     12   server   running
    
    uyuni:~/sumaform #
    The good news is that I can log in to the server and run the command:
    server:~ # spacewalk-repo-sync --help
    Usage: spacewalk-repo-sync [options]
    
    Options:
      -h, --help            show this help message and exit
      -l, --list            List the custom channels with the associated
                            repositories.
      -s, --show-packages   List all packages in a specified channel.
      -u URL, --url=URL     The url of the repository. Can be used multiple times.
      -c CHANNEL_LABEL, --channel=CHANNEL_LABEL
                            The label of the channel to sync packages to. Can be
                            used multiple times.
      -p PARENT_LABEL, --parent-channel=PARENT_LABEL
                            Synchronize the parent channel and all its child
                            channels.
      -d, --dry-run         Test run. No sync takes place.
      --latest              Sync latest packages only. Use carefully - you might
                            need to fix some dependencies on your own.
      -g CONFIG, --config=CONFIG
                            Configuration file
    eins
    @eins
    (quoting the ip addr show output already shown above)

    Hello there.

    Random thoughts here: you can use a few flags with the ip command to output just the IP addresses or the link status, for example:

    ip -br -c a
    ip -br -c -4 a
    ip -br -c -4 l
    ip -br -4 r
    Cédric Bosdonnat
    @cbosdonnat:matrix.org
    @amentee: I usually disable avahi by adding use_avahi = false in the base module
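    A sketch of that option: add use_avahi = false to the base module block in main.tf (the exact block layout is your own), then re-apply:
    ```
    # After editing main.tf so the base module block contains:  use_avahi = false
    grep -n 'use_avahi' main.tf
    terraform apply
    ```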
    amentee
    @amentee
    @cbosdonnat:matrix.org .. I am working on this issue (uyuni-project/uyuni#6128). It's a beginner question, but if I edit some code in a new branch and commit it, which function in the test file actually tests running two instances of the spacewalk-repo-sync command? The test code is at https://github.com/uyuni-project/uyuni/blob/master/python/test/unit/spacewalk/satellite_tools/test_reposync.py
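    If you only want to run the existing reposync unit tests against your branch, a sketch (assuming a checkout of uyuni-project/uyuni and that pytest plus the test dependencies are installed):
    ```
    # Run just the reposync test module; the path is the one linked above
    python3 -m pytest -v python/test/unit/spacewalk/satellite_tools/test_reposync.py
    ```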
    Cédric Bosdonnat
    @cbosdonnat:matrix.org
    @amentee: I think it will be hard to create a unit test for this
    Cédric Bosdonnat
    @cbosdonnat:matrix.org
    the best thing to do is to fork the project and work in a branch in your fork
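    A minimal sketch of that workflow (your GitHub user and the branch name are placeholders):
    ```
    # Fork uyuni-project/uyuni on GitHub first, then:
    git clone https://github.com/<your-user>/uyuni.git
    cd uyuni
    git checkout -b fix-reposync-help     # example branch name
    # edit, commit, push, then open a pull request against uyuni-project/uyuni
    git push -u origin fix-reposync-help
    ```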
    amentee
    @amentee
    @cbosdo - okay. But how do I access the web UI? I mean, I need to set up a test case: sync a channel first and then run spacewalk-repo-sync --help. Or is there another way to reproduce the issue? That way, once I change the code and try to reproduce it again, I can see whether the amended code has fixed it or not.
    Cédric Bosdonnat
    @cbosdonnat:matrix.org
    @amentee: the web UI is on the server VM you deployed with sumaform; just point your browser to it. The credentials for the web UI are admin/admin, and for all of the sumaform-generated VMs root/linux.
    to reproduce the issue you can add a channel with a repository in the Software > Manage menu
    at the repository level there is a Sync page and button to trigger the synchronization. it will run spacewalk-repo-sync under the hood. To reproduce the issue, just run spacewalk-repo-sync --help while the repo is still synchronizing
    the reposync logs are in /var/log/rhn/reposync* on the server VM
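    The same reproduction from the server's shell, if you prefer the CLI; a sketch where the channel label is a placeholder and the options come from the --help output quoted earlier:
    ```
    # Terminal 1 on the server VM: start a long-running sync of a channel you created
    spacewalk-repo-sync -c <channel-label>
    # Terminal 2, while the sync is still running: this is the call that hits the issue
    spacewalk-repo-sync --help
    # The reposync logs mentioned above
    ls -lt /var/log/rhn/reposync*
    ```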
    amentee
    @amentee
    @cbosdonnat:matrix.org .. the server VM on which Uyuni is deployed with sumaform has an IP from the 192.168.x.x range, and it's not working. I have allowed port 80 on the firewall as well. Any ideas?
    Cédric Bosdonnat
    @cbosdonnat:matrix.org
    @amentee: you can try running spacewalk-service status on the server to figure out if all is properly up and running
    if that is the case, you should try to access the server using its IP address via SSH. If that works you can try to resolve the DNS of the server to see if it points to the proper IP address
    for debugging sessions like this I would disable the firewall on the server machine first if it's enabled by sumaform at all
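    Those checks as a sketch from the host (the guest name server.suse.lab and the firewalld commands are assumptions about this particular setup):
    ```
    # On the server VM: is everything up and running?
    ssh root@server.suse.lab spacewalk-service status
    # From the host: does the name resolve to the guest's IP, and is it reachable?
    getent hosts server.suse.lab
    ping -c 2 server.suse.lab
    # Take the firewall out of the picture while debugging, if it is enabled at all
    ssh root@server.suse.lab 'systemctl stop firewalld'
    ```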
    amentee
    @amentee
    @cbosdo .. my libvirt guest on which Uyuni is deployed is named "server" and its IP is 192.168.122.238.
    uyuni:~ # ping server.suse.lab
    PING server.suse.lab (192.168.122.238) 56(84) bytes of data.
    64 bytes from server.suse.lab (192.168.122.238): icmp_seq=1 ttl=64 time=0.356 ms
    64 bytes from server.suse.lab (192.168.122.238): icmp_seq=2 ttl=64 time=0.291 ms
    ^C
    --- server.suse.lab ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 1014ms
    rtt min/avg/max/mdev = 0.291/0.323/0.356/0.032 ms
    uyuni:~ #
    I am able to ssh as well
    uyuni:~ # ssh root@192.168.122.238
    Last login: Tue Dec 13 10:18:49 2022 from 192.168.122.1
    server:~ #
    server:~ # spacewalk-service status
    ● uyuni-update-config.service - Uyuni update config
         Loaded: loaded (/usr/lib/systemd/system/uyuni-update-config.service; static)
         Active: active (exited) since Tue 2022-12-13 10:18:05 CET; 16min ago
        Process: 1160 ExecStart=/usr/sbin/uyuni-update-config (code=exited, status=0/SUCCESS)
       Main PID: 1160 (code=exited, status=0/SUCCESS)
    
    ● uyuni-check-database.service - Uyuni check database
         Loaded: loaded (/usr/lib/systemd/system/uyuni-check-database.service; static)
         Active: active (exited) since Tue 2022-12-13 10:18:14 CET; 16min ago
        Process: 1300 ExecStart=/usr/sbin/spacewalk-startup-helper check-database (code=exited, status=0/SUCCESS)
       Main PID: 1300 (code=exited, status=0/SUCCESS)
    
    ● tomcat.service - Apache Tomcat Web Application Container
         Loaded: loaded (/usr/lib/systemd/system/tomcat.service; enabled; vendor preset: disabled)
        Drop-In: /usr/lib/systemd/system/tomcat.service.d
                 └─override.conf
         Active: active (running) since Tue 2022-12-13 10:18:14 CET; 16min ago
       Main PID: 1432 (java)
          Tasks: 120 (limit: 576)
         CGroup: /system.slice/tomcat.service
                 └─ 1432 /usr/lib64/jvm/jre/bin/java -Xdebug -Xrunjdwp:transport=dt_socket,address=server.suse.lab:8000,server=y,suspend=n -ea -Xms256m -Xmx1G -Djava.awt.h…
    
    ● spacewalk-wait-for-tomcat.service - Spacewalk wait for tomcat
         Loaded: loaded (/usr/lib/systemd/system/spacewalk-wait-for-tomcat.service; static)
         Active: active (exited) since Tue 2022-12-13 10:18:54 CET; 15min ago
        Process: 1433 ExecStart=/usr/sbin/spacewalk-startup-helper wait-for-tomcat (code=exited, status=0/SUCCESS)
       Main PID: 1433 (code=exited, status=0/SUCCESS)
    
    ● salt-master.service - The Salt Master Server
         Loaded: loaded (/usr/lib/systemd/system/salt-master.service; enabled; vendor preset: disabled)
        Drop-In: /usr/lib/systemd/system/salt-master.service.d
                 └─override.conf
         Active: active (running) since Tue 2022-12-13 10:18:14 CET; 16min ago
           Docs: man:salt-master(1)
                 file:///usr/share/doc/salt/html/contents.html
                 https://docs.saltproject.io/en/latest/contents.html
       Main PID: 1430 (salt-master)
          Tasks: 65
         CGroup: /system.slice/salt-master.service
                 ├─ 1430 /usr/bin/python3 /usr/bin/salt-master
                 ├─ 1494 /usr/bin/python3 /usr/bin/salt-master
                 ├─ 1558 /usr/bin/python3 /usr/bin/salt-master
                 ├─ 1562 /usr/bin/python3 /usr/bin/salt-master
                 ├─ 1564 /usr/bin/python3 /usr/bin/salt-master
                 ├─ 1566 /usr/bin/python3 /usr/bin/salt-master
                 ├─ 1567 /usr/bin/python3 /usr/bin/salt-master
                 ├─ 1570 /usr/bin/python3 /usr/bin/salt-master
                 ├─ 1571 /usr/bin/python3 /usr/bin/salt-master
                 ├─ 1576 /usr/bin/python3 /usr/bin/salt-master
                 ├─ 1582 /usr/bin/python3 /usr/bin/salt-master
                 ├─ 1583 /usr/bin/python3 /usr/bin/salt-master
                 ├─ 1584 /usr/bin/python3 /usr/bin/salt-master
                 ├─ 1586 /usr/bin/python3 /usr/bin/salt-master
                 ├─ 1587 /usr/bin/python3 /usr/bin/salt-master
                 ├─ 1590 /usr/bin/python3 /usr/bin/salt-master
                 ├─ 1591 /usr/bin/python3 /usr/bin/salt-master
                 └─ 5996 /usr/bin/python3 /usr/bin/salt-master
    
    ● salt-api.service - The Salt API
         Loaded: loaded (/usr/lib/systemd/system/salt-api.service; enabled; vendor preset: disabled)
        Drop-In: /usr/lib/systemd/system/salt-api.service.d
                 └─override.conf
         Active: active (running) since Tue 2022-12-13 10:18:14 CET; 16min ago
           Docs: man:salt-api(1)
                 file:///usr/share/doc/salt/html/contents.html
                 https://docs.saltproject.io/en/latest/contents.html
       Main PID: 1429 (salt-api)
          Tasks: 104 (limit: 4672)
         CGroup: /system.slice/salt-api.service
                 ├─ 1429 /usr/bin/python3 /usr/bin/salt-api
                 └─ 1585 /usr/bin/python3 /usr/bin/salt-api
    
    ● spacewalk-wait-for-salt.service - Make sure that salt is started before httpd
    There is no active firewall on the "server" host on which Uyuni is deployed,
    and I have stopped the firewalld service on the base machine as well.
    Still, when I hit the IP 192.168.122.238 in the browser, it does not work.
    @cbosdonnat:matrix.org ... any ideas what else I can try?
    Cédric Bosdonnat
    @cbosdonnat:matrix.org
    @amentee: I didn't see apache2 service in your output... is it running? can you access http://server.suse.lab/pub/ ?
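    Those two checks as a sketch from the host:
    ```
    # Is apache2 running on the server VM?
    ssh root@server.suse.lab systemctl status apache2
    # Can the public directory be fetched over HTTP?
    curl -I http://server.suse.lab/pub/
    ```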
    amentee
    @amentee
    @cbosdo .. no, my Apache service is not running.
    server:~ # systemctl status apache2
    ○ apache2.service - The Apache Webserver
         Loaded: loaded (/usr/lib/systemd/system/apache2.service; enabled; vendor preset: disabled)
        Drop-In: /usr/lib/systemd/system/apache2.service.d
                 └─override.conf
         Active: inactive (dead)
    
    Dec 15 12:40:54 server systemd[1]: Dependency failed for The Apache Webserver.
    Dec 15 12:40:54 server systemd[1]: apache2.service: Job apache2.service/start failed with result 'dependency'.
    server:~ #
    When I checked the logs, they say my PostgreSQL service is not working properly:
    server:~ # systemctl status postgresql.service
    × postgresql.service - PostgreSQL database server
         Loaded: loaded (/usr/lib/systemd/system/postgresql.service; enabled; vendor preset: disabled)
         Active: failed (Result: exit-code) since Thu 2022-12-15 12:46:03 CET; 5min ago
        Process: 1628 ExecStart=/usr/share/postgresql/postgresql-script start (code=exited, status=1/FAILURE)
    
    Dec 15 12:46:03 server systemd[1]: Starting PostgreSQL database server...
    Dec 15 12:46:03 server postgresql-script[1635]: 2022-12-15 12:46:03.790 CET   [1635]LOG:  redirecting log output to logging collector process
    Dec 15 12:46:03 server postgresql-script[1635]: 2022-12-15 12:46:03.790 CET   [1635]HINT:  Future log output will appear in directory "log".
    Dec 15 12:46:03 server postgresql-script[1633]: pg_ctl: could not start server
    Dec 15 12:46:03 server postgresql-script[1633]: Examine the log output.
    Dec 15 12:46:03 server systemd[1]: postgresql.service: Control process exited, code=exited, status=1/FAILURE
    Dec 15 12:46:03 server systemd[1]: postgresql.service: Failed with result 'exit-code'.
    Dec 15 12:46:03 server systemd[1]: Failed to start PostgreSQL database server.
    server:~ #
    When I checked the PostgreSQL logs, these are the messages:
    server:/var/lib/pgsql/data/log # cat postgresql-2022-12-15_124603.log
    2022-12-15 12:46:03.791 CET   [1635]LOG:  starting PostgreSQL 14.5 on x86_64-suse-linux-gnu, compiled by gcc (SUSE Linux) 7.5.0, 64-bit
    2022-12-15 12:46:03.791 CET   [1635]LOG:  listening on IPv4 address "0.0.0.0", port 5432
    2022-12-15 12:46:03.791 CET   [1635]LOG:  listening on IPv6 address "::", port 5432
    2022-12-15 12:46:03.792 CET   [1635]LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
    2022-12-15 12:46:03.792 CET   [1635]LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
    2022-12-15 12:46:03.795 CET   [1637]LOG:  database system was interrupted; last known up at 2022-12-13 10:33:15 CET
    2022-12-15 12:46:03.795 CET   [1637]PANIC:  could not read file "pg_logical/replorigin_checkpoint": read 0 of 4
    2022-12-15 12:46:03.797 CET   [1635]LOG:  startup process (PID 1637) was terminated by signal 6: Aborted
    2022-12-15 12:46:03.797 CET   [1635]LOG:  aborting startup due to startup process failure
    2022-12-15 12:46:03.803 CET   [1635]LOG:  database system is shut down
    server:/var/lib/pgsql/data/log #
    It seems the issue is with port 5432; even telnet to localhost is not working:
    server:/var/lib/pgsql/data/log # telnet 127.0.0.1 5432
    Trying 127.0.0.1...
    telnet: connect to address 127.0.0.1: Connection refused
    server:/var/lib/pgsql/data/log #
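    The PANIC above points at a zero-length pg_logical/replorigin_checkpoint file left behind by an unclean shutdown, which is why PostgreSQL never gets as far as listening on 5432. A diagnostic sketch (paths taken from the log output above; moving the file aside is a commonly used recovery step, not a guaranteed fix, so back up the data directory first):
    ```
    # Check the suspect file; after an unclean shutdown it is often 0 bytes
    ls -l /var/lib/pgsql/data/pg_logical/replorigin_checkpoint
    # Keep a copy of the whole data directory before touching anything
    cp -a /var/lib/pgsql/data /var/lib/pgsql/data.bak
    # Move the truncated checkpoint file aside and retry startup;
    # PostgreSQL recreates it at the next checkpoint if replication origins are unused
    mv /var/lib/pgsql/data/pg_logical/replorigin_checkpoint{,.broken}
    systemctl start postgresql
    systemctl status postgresql
    ```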