Security


Ask questions and share experiences with Juniper Connected Security. Discuss Advanced Threat Protection, SecIntel, Secure Analytics, Secure Connect, Security Director, and all things related to Juniper security technologies.
  • 1.  High CPU usage and flowd_octeon_hm crashing Juniper SRX340

    Posted 05-22-2023 10:28

    Hello!
    We have a used Juniper SRX340 (not new), and we are having problems with it.
    The CLI is very slow, CPU utilization is constantly high, and none of the ports except mgmt work.
    The log shows flowd_octeon_hm crashing.
    The trouble seems to have started after I upgraded Junos from 19.4R3-S1.3 to the recommended 21.2R3-S3.5. I have since rolled back to 19.4R3-S1.3, but the problem remains.
    The default config is loaded and only the console cable is connected to the device.
    Any ideas? I would be grateful for any advice.

    root> show version 
    Model: srx340
    Junos: 19.4R3-S1.3
    JUNOS Software Release [19.4R3-S1.3]

    root> show chassis routing-engine 
    Routing Engine status:
        Temperature                 34 degrees C / 93 degrees F
        CPU temperature             58 degrees C / 136 degrees F
        Total memory              4096 MB Max   819 MB used ( 20 percent)
          Control plane memory    2336 MB Max   818 MB used ( 35 percent)
          Data plane memory       1760 MB Max     0 MB used (  0 percent)
        5 sec CPU utilization:
          User                       6 percent
          Background                 0 percent
          Kernel                    78 percent
          Interrupt                  0 percent
          Idle                      15 percent
        Model                          RE-SRX340
        Serial ID                      CY3216AF0366
        Start time                     2023-05-22 08:32:01 UTC
        Uptime                         59 minutes, 24 seconds
        Last reboot reason             0x1:power cycle/failure 
        Load averages:                 1 minute   5 minute  15 minute
                                           9.39       9.10       8.46

    root> show system storage               
    Filesystem              Size       Used      Avail  Capacity   Mounted on
    /dev/da0s1a             579M       387M       145M       73%  /
    devfs                   1.0K       1.0K         0B      100%  /dev
    /dev/md0                 20M        12M       6.4M       65%  /junos
    /cf/packages            579M       387M       145M       73%  /junos/cf/packages
    devfs                   1.0K       1.0K         0B      100%  /junos/cf/dev
    /dev/md1                1.3G       1.3G         0B      100%  /junos
    /cf                      20M        12M       6.4M       65%  /junos/cf
    devfs                   1.0K       1.0K         0B      100%  /junos/dev/
    /cf/packages            579M       387M       145M       73%  /junos/cf/packages1
    procfs                  4.0K       4.0K         0B      100%  /proc
    /dev/bo0s3e             185M        30K       170M        0%  /config
    /dev/bo0s3f             5.0G       137M       4.4G        3%  /cf/var
    /dev/md2                1.0G        98M       851M       10%  /mfs
    /cf/var/jail            5.0G       137M       4.4G        3%  /jail/var
    /cf/var/jails/rest-api       5.0G       137M       4.4G    3%  /web-api/var
    /cf/var/log             5.0G       137M       4.4G        3%  /jail/var/log
    devfs                   1.0K       1.0K         0B      100%  /jail/dev
    /dev/md3                1.8M       4.0K       1.7M        0%  /jail/mfs

    root> show system processes extensive 
    last pid:  5471;  load averages:  9.07,  9.05,  8.45  up 0+01:00:12    09:31:43
    186 processes: 20 running, 146 sleeping, 4 stopped, 1 zombie, 15 waiting

    Mem: 533M Active, 346M Inact, 1923M Wired, 617M Cache, 112M Buf, 544M Free
    Swap: 792M Total, 792M Free
      PID USERNAME PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
       22 root     155   52     0K    16K CPU3   3  55:25 92.48% idle: cpu3
       23 root     155   52     0K    16K CPU2   2  55:25 92.48% idle: cpu2
       24 root     155   52     0K    16K CPU1   1  55:16 92.48% idle: cpu1
     5166 root     123    0  1893M  1243M RUN    0   3:25 83.45% flowd_octeon_hm
       25 root     155   52     0K    16K RUN    0  10:08  1.46% idle: cpu0
     5339 root      70    0  2892K  1328K RUN    0   0:10  1.46% gzip
     5166 root     117    0  1893M  1243M STOP   0   3:25  0.00% flowd_octeon_hm
     5166 root     117    0  1893M  1243M STOP   1   3:25  0.00% flowd_octeon_hm
     5166 root       8    0  1893M  1243M STOP   2   3:25  0.00% flowd_octeon_hm
     5166 root       8    0  1893M  1243M STOP   3   3:25  0.00% flowd_octeon_hm
     2074 root      20    0   155M 43988K RUN    0   0:30  0.00% authd
       96 root      -8    0     0K    16K mdwait 0   0:20  0.00% md1
       27 root     -36 -139     0K    16K WAIT   0   0:13  0.00% swi7: clock
     2105 root      20    0 36284K 11892K select 0   0:11  0.00% license-check
     2057 root      20    0 50656K 20484K select 0   0:10  0.00% pfed

     root> show log messages | last 100 
    May 22 09:39:29   init: forwarding (PID 5524) terminated by signal number 11. Core dumped!
    May 22 09:39:29   init: Dump Command: /bin/sh (PID 5876) started
    May 22 09:39:29   init: forwarding (PID 5877) started
    May 22 09:39:31   flowd_octeon_hm: flowd_srxle_is_mpim_present: slot 1, mPIM not present
    May 22 09:39:31   flowd_octeon_hm: flowd_srx_i2c_scan: slot 1, mPIM not detected
    May 22 09:39:31   flowd_octeon_hm: flowd_srxle_is_mpim_present: slot 2, mPIM not present
    May 22 09:39:31   flowd_octeon_hm: flowd_srx_i2c_scan: slot 2, mPIM not detected
    May 22 09:39:31   flowd_octeon_hm: flowd_srxle_is_mpim_present: slot 3, mPIM not present
    May 22 09:39:31   flowd_octeon_hm: flowd_srx_i2c_scan: slot 3, mPIM not detected
    May 22 09:39:31   flowd_octeon_hm: flowd_srxle_is_mpim_present: slot 4, mPIM not present
    May 22 09:39:31   flowd_octeon_hm: flowd_srx_i2c_scan: slot 4, mPIM not detected
    May 22 09:39:33   /kernel: cpuid = 0
    May 22 09:39:33   /kernel: BAD_PAGE_FAULT: pid 5877 (flowd_octeon_hm), uid 0: pc 0x41249828 got a write fault at 0x2030
    May 22 09:39:33   /kernel: Trapframe Register Dump:
    May 22 09:39:33   /kernel: zero: 0000000000000000  at: 00000000474d0000  v0: 0000000000000000  v1: 0000000000000000
    May 22 09:39:33   /kernel:   a0: 0000000000000000  a1: 0000000000002030  a2: 0000000001010101  a3: 0000000000002030
    May 22 09:39:33   /kernel:   t0: 0000000050808cf1  t1: 0000000000002030  t2: 0000000000000000  t3: 0000000000000000
    May 22 09:39:33   /kernel:  ta0: 000000000000001b ta1: 000000000288d6b0 ta2: 0000000000000001 ta3: 000000003fe00000
    May 22 09:39:33   /kernel:   t8: ffffffffa1a5d600  t9: 000000004401b3e0  s0: 0000000000000000  s1: 0000000000000001
    May 22 09:39:33   /kernel:   s2: 0000000000008426  s3: 0000000045f40000  s4: 0000000000000000  s5: 0000000002431b10
    May 22 09:39:33   /kernel:   s6: 0000000000000682  s7: 0000000045cf0000  k0: 0000000000000000  k1: 0000000000000000
    May 22 09:39:33   /kernel:   gp: 0000000000000000  sp: 0000000002431a68  s8: 0000000049dec740  ra: 0000000041254ed8
    May 22 09:39:33   /kernel:   sr: 0000000050808cf2 mullo: ffffffff9999999c    mulhi: 0000000000000001
    May 22 09:39:33   /kernel:   pc: 0000000041249828 cause: 000000000000000c badvaddr: 0000000000002030
    May 22 09:39:33   /kernel: Page table info for pc address 0x41249828: pte = 0x0
    May 22 09:39:33   /kernel: Dumping 4 words starting at pc address 0x41249828:
    May 22 09:39:33   /kernel: ad260000 40886000 00000000 00000000
    May 22 09:39:33   /kernel: Flowd process id: 5877 is dumping core, cleaning up RTFIFO resources
    May 22 09:40:09   dumpd: tar: flowd_octeon_hm.core.4.gz: file changed as we read it 1684748384 != 1684747994 tar: Error exit delayed from previous errors
    May 22 09:40:09   dumpd: Unable to create core tarball /var/tmp/flowd_octeon_hm.core-tarball.4.tgz
    May 22 09:40:17   gksd: Exit at main 853
    May 22 09:40:41   mgd[5928]: UI_CHILD_SIGNALED: Child received signal: PID 5929, signal Terminated: 15, command='/usr/libexec/ui/show-support'
    May 22 09:45:30   dumpd: Core and context for flowd_octeon_hm saved in /var/tmp/flowd_octeon_hm.core-tarball.4.tgz
    May 22 09:45:48   init: forwarding (PID 5877) terminated by signal number 11. Core dumped!
    May 22 09:45:48   init: Dump Command: /bin/sh (PID 6251) started
    May 22 09:45:48   init: forwarding (PID 6252) started
    May 22 09:45:50   flowd_octeon_hm: flowd_srxle_is_mpim_present: slot 1, mPIM not present
    May 22 09:45:50   flowd_octeon_hm: flowd_srx_i2c_scan: slot 1, mPIM not detected
    May 22 09:45:50   flowd_octeon_hm: flowd_srxle_is_mpim_present: slot 2, mPIM not present
    May 22 09:45:50   flowd_octeon_hm: flowd_srx_i2c_scan: slot 2, mPIM not detected
    May 22 09:45:50   flowd_octeon_hm: flowd_srxle_is_mpim_present: slot 3, mPIM not present
    May 22 09:45:50   flowd_octeon_hm: flowd_srx_i2c_scan: slot 3, mPIM not detected
    May 22 09:45:50   flowd_octeon_hm: flowd_srxle_is_mpim_present: slot 4, mPIM not present
    May 22 09:45:50   flowd_octeon_hm: flowd_srx_i2c_scan: slot 4, mPIM not detected
    May 22 09:45:54   /kernel: cpuid = 0    
    May 22 09:45:54   /kernel: BAD_PAGE_FAULT: pid 6252 (flowd_octeon_hm), uid 0: pc 0x41249828 got a write fault at 0x2030
    May 22 09:45:54   /kernel: Trapframe Register Dump:
    May 22 09:45:54   /kernel: zero: 0000000000000000  at: 00000000474d0000  v0: 0000000000000000  v1: 0000000000000000
    May 22 09:45:54   /kernel:   a0: 0000000000000000  a1: 0000000000002030  a2: 0000000001010101  a3: 0000000000002030
    May 22 09:45:54   /kernel:   t0: 0000000050808cf1  t1: 0000000000002030  t2: 0000000000000000  t3: 0000000000000000
    May 22 09:45:54   /kernel:  ta0: 000000000000001b ta1: 000000000288d6b0 ta2: 0000000000000001 ta3: 000000003fe00000
    May 22 09:45:54   /kernel:   t8: ffffffffa1a5d600  t9: 000000004401b3e0  s0: 0000000000000000  s1: 0000000000000001
    May 22 09:45:54   /kernel:   s2: 0000000000008426  s3: 0000000045f40000  s4: 0000000000000000  s5: 0000000002431b10
    May 22 09:45:54   /kernel:   s6: 0000000000000682  s7: 0000000045cf0000  k0: 0000000000000000  k1: 0000000000000000
    May 22 09:45:54   /kernel:   gp: 0000000000000000  sp: 0000000002431a68  s8: 0000000049dec740  ra: 0000000041254ed8
    May 22 09:45:54   /kernel:   sr: 0000000050808cf2 mullo: ffffffff9999999c    mulhi: 0000000000000001
    May 22 09:45:54   /kernel:   pc: 0000000041249828 cause: 000000000000000c badvaddr: 0000000000002030
    May 22 09:45:54   /kernel: Page table info for pc address 0x41249828: pte = 0x0
    May 22 09:45:54   /kernel: Dumping 4 words starting at pc address 0x41249828:
    May 22 09:45:54   /kernel: ad260000 40886000 00000000 00000000
    May 22 09:45:54   /kernel: Flowd process id: 6252 is dumping core, cleaning up RTFIFO resources
    May 22 09:46:43   gksd: Exit at main 853
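
    The log shows core tarballs being written to /var/tmp; for reference, they can be confirmed from the CLI before opening a case (standard operational commands, shown here only as a sketch):

    root> show system core-dumps
    root> file list /var/tmp/ detail | match core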



  • 2.  RE: High CPU usage and flowd_octeon_hm crashing Juniper SRX340

    Posted 05-22-2023 14:03

    Hello

    We had a similar problem: the CLI was very slow and nothing stood out in the process list.
    JTAC helped us solve it; the cause was a large number of login attempts against the device from the Internet, even though restrictions were in place.
    As we are an ISP, we applied the SSH restrictions one level up, i.e. at our core, which the client agreed to, and the problem resolved itself.

    If you can completely block access to the device from the Internet, so that management services are reachable only from the LAN, hopefully that will help. A rough example is sketched below.
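
    For illustration only, a minimal sketch of such a restriction on the SRX itself: a loopback firewall filter that allows SSH only from a management prefix. The prefix-list name MGMT-HOSTS and the 192.0.2.0/24 prefix are placeholders; adapt them to your addressing and verify on your release.

    # Placeholders: MGMT-HOSTS and 192.0.2.0/24 are examples only
    set policy-options prefix-list MGMT-HOSTS 192.0.2.0/24
    set firewall family inet filter PROTECT-RE term allow-mgmt-ssh from source-prefix-list MGMT-HOSTS
    set firewall family inet filter PROTECT-RE term allow-mgmt-ssh from protocol tcp
    set firewall family inet filter PROTECT-RE term allow-mgmt-ssh from destination-port ssh
    set firewall family inet filter PROTECT-RE term allow-mgmt-ssh then accept
    set firewall family inet filter PROTECT-RE term drop-other-ssh from protocol tcp
    set firewall family inet filter PROTECT-RE term drop-other-ssh from destination-port ssh
    set firewall family inet filter PROTECT-RE term drop-other-ssh then discard
    set firewall family inet filter PROTECT-RE term accept-rest then accept
    set interfaces lo0 unit 0 family inet filter input PROTECT-RE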



    ------------------------------
    Grzegorz Dacka
    ------------------------------



  • 3.  RE: High CPU usage and flowd_octeon_hm crashing Juniper SRX340

    Posted 05-22-2023 14:16

    Unfortunately, this is not my case. Only a console cable is connected to the device: no WAN, no LAN. As I wrote above, only the console and management ports work, so even if I wanted to connect it to the internet, I couldn't.
    But thanks for the advice anyway!




  • 4.  RE: High CPU usage and flowd_octeon_hm crashing Juniper SRX340

    Posted 05-23-2023 10:28

    Check the FPC status:
    > show chassis hardware
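
    In case it helps, two related checks (standard operational commands, listed here only as a pointer) that show whether the FPC and its PIC have come online and whether any chassis alarms are active:

    show chassis fpc pic-status
    show chassis alarms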



    ------------------------------
    SALVATORE COLIMORO
    ------------------------------



  • 5.  RE: High CPU usage and flowd_octeon_hm crashing Juniper SRX340

    Posted 05-24-2023 01:56

    The PIC seems to be missing; perhaps that is why flowd is crashing.

    root@R-BCM-01> show chassis hardware 
    Hardware inventory:
    Item             Version  Part number  Serial number     Description
    Chassis                                CY3216AF0366      SRX340
    Routing Engine   REV 0x13 650-065043   CY3216AF0366      RE-SRX340
    FPC 0                     BUILTIN      BUILTIN           FPC
    Power Supply 0  

    root@R-BCM-01> show chassis fpc 
                         Temp  CPU Utilization (%)   CPU Utilization (%)  Memory    Utilization (%)
    Slot State            (C)  Total  Interrupt      1min   5min   15min  DRAM (MB) Heap     Buffer
      0  Present         -------------------- CPU less FPC --------------------
      1  Empty           
      2  Empty           
      3  Empty           
      4  Empty       

    root@R-BCM-01> show chassis fpc pic-status  
    Slot 0   Present      FPC                                           

    root@R-BCM-01> request chassis fpc slot 0 offline 
    FPC 0 is in transition, try again

    root@R-BCM-01> request chassis fpc slot 0 online     
    FPC 0 is in transition, try again




  • 6.  RE: High CPU usage and flowd_octeon_hm crashing Juniper SRX340

    Posted 05-23-2023 11:35

    Hello,

    I can see a bad page fault in the logs, and core dumps are being generated.
    For some reason flowd is crashing.
    I would suggest opening a JTAC ticket so they can look at the core files.
    Alternatively, since there is no traffic on the device, try deleting all the configuration, setting the root password, and then restarting the device, to rule out a configuration error. A rough sketch of those steps follows.
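
    For illustration, a minimal sketch of those steps from the CLI (this loads the factory defaults; request system zeroize is a more drastic alternative that also wipes logs). Verify on your Junos release before running:

    # Sketch only - verify on your release first
    configure
    load factory-default
    set system root-authentication plain-text-password   # prompts for the new root password
    commit and-quit
    request system reboot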

    Regards, 



    ------------------------------
    Brijil R
    ------------------------------



  • 7.  RE: High CPU usage and flowd_octeon_hm crashing Juniper SRX340

    Posted 05-24-2023 02:07

    I tried deleting all the config. I also checked the flash memory from single-user mode and tried copying a snapshot from a healthy SRX340. Nothing helped; it seems to be a hardware problem.