FW CNaaS Replacement of broken HW

Perform the following procedure:

  1. Check the following parameters prior to deploying an RMA device in a Chassis Cluster environment:

    Make sure that the following parameters on the new RMA device match those on the active node of the Chassis Cluster.
     
    • On the cluster node not being replaced, disable preempt for all redundancy groups other than RG0 (RG1 and higher).
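    For example, if preempt is configured on redundancy group 1, it can be temporarily deactivated on the surviving node (a sketch; the RG number, and any additional RGs, depend on your configuration):

      # deactivate chassis cluster redundancy-group 1 preempt
      # commit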
    • Check the hardware on the active cluster node and ensure that the device being placed in the cluster has the same hardware setup, with all FPCs present in the same slots and active. The command to check this is show chassis hardware.
    • Check the Junos OS version on the active node of the cluster and upgrade or downgrade Junos on the new device so that the versions match (for more information, refer to KB16652 - SRX Getting Started - Junos Software Installation/Upgrade).
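    For example, versions can be compared by running show version on both devices; if they differ, install the matching image on the RMA device (the package filename below is only a placeholder):

      > show version
      > request system software add /var/tmp/<junos-package>.tgz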

    • Save the configuration in a file on the working node and upload the file to the new device in the /var/tmp directory.
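    One way to do this (a sketch; the filename and remote host are placeholders) is to save the active configuration to a file and copy it off-box:

      > show configuration | save /var/tmp/node-backup.conf
      > file copy /var/tmp/node-backup.conf user@<host>:/var/tmp/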

  2. Console to the isolated RMA device (make sure that no cables are connected, other than the console cable) and perform the following steps:   
    1. Enter configuration mode.

    2. Execute the delete command to remove the entire candidate configuration:

      # delete

    3. Configure the root password:

      # set system root-authentication plain-text-password
       
    4. Then commit:

      # commit
     
  3. Perform the following steps:
  • Configure chassis clustering on the isolated RMA device. Use the following command to enable the chassis cluster (you can execute the show chassis cluster status command on the working node to identify the cluster-id):

    > set chassis cluster cluster-id <id> node <No.>

    Where <No.> will be 1 or 0, depending on which node is being replaced.
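    For example, if node 0 failed and the working node reports cluster-id 1, the replacement node would be enabled with (illustrative values; the setting takes effect after the reboot in the next step):

    > set chassis cluster cluster-id 1 node 0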
  • Reboot the new node. The node will come online with clustering enabled:

    > request system reboot
  • Enter configuration mode and load the configuration from the file, which was copied in the /var/tmp directory in Step 1. Use the following command to load the configuration:

    # load override /var/tmp/<filename>
  • When the configuration is completely loaded, commit the configuration:

    # commit and-quit
     
  • Halt the new node:

    > request system halt
     
  • Now connect the fabric and control ports (make sure that none of the revenue port cables are connected) and reboot the node.
  • Check the status of the FPCs and PICs by executing the show chassis fpc pic-status command. In the output, all of the FPCs and PICs should be online.

  • When the new node comes online, it should join the cluster as the secondary. You can check the status by executing the show chassis cluster status command. In the output, the priority of RG0 should be the configured value, and the priority of the other RGs should be 0 if interface monitoring has been configured, because the monitored revenue ports are still disconnected at this point.

  • As the new node comes online, it will transition through the Hold state and then settle in the Secondary state. If the new node does not complete the move to the Secondary state, contact the Juniper Technical Assistance Center (JTAC) to investigate.

  • If the show chassis cluster status output from the previous check shows a primary and a secondary for all RGs, connect all of the revenue port cables and check the chassis cluster status again via the show chassis cluster status command. In this output, you should see the configured priority values for all of the RGs.

  • If the new node can access the Internet, update the license directly on it. Otherwise, download the license to a PC, save it in a file, and upload that file to the new node in the /var/tmp directory.
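  For example, once the license file has been uploaded to /var/tmp, it can be installed and verified (the filename is a placeholder):

    > request system license add /var/tmp/<license-file>
    > show system license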