Advanced container management at the edge for Node.js apps

May 22, 2024
Michael Dawson
Related topics: Containers, Edge computing, Kubernetes, Node.js
Related products: Red Hat Advanced Cluster Management for Kubernetes, Red Hat build of Node.js

    This is part three in our series on running Node.js applications on the edge with Red Hat Enterprise Linux (RHEL)/Fedora, which includes: 

    • Running Node.js applications on the edge with RHEL/Fedora
    • Containerizing your Node.js applications at the edge on RHEL/Fedora
    • Advanced container management at the edge for Node.js applications (this post)  

    In the first part, we introduced you to the hardware and software for our Node.js-based edge example as well as some of the details on laying the foundation for deploying the application by building and installing the operating system using Fedora IoT.

    In the second part, we dug a bit deeper into the application itself and how to build, bundle, deploy, and update the Node.js application using Podman and containers.

    In this third and final part, we'll look at running the container on the device using Kubernetes and managing it remotely with Red Hat’s Advanced Cluster Management for Kubernetes.

    A quick reminder of the example

    The hardware/software outlined in the example in part 1 monitors the underground gas tank at a gas station, showing the current temperature and the status of the tank lids. The hardware is based on a Raspberry Pi 4 with a temperature sensor and lid switches, as shown in Figure 1. The application is a Next.js-based application running on Fedora IoT, which displays the status of those sensors.

    Figure 1: The hardware.
    Figure 2: The user interface.

     In part 2 we built the application using a Containerfile and pushed it to quay.io as a container named midawson/gas-station, as shown in Figure 3. 

    Figure 3: The gas-station image in Quay.io.

    To deploy, we then pulled and ran the container on the device using Podman. Podman Quadlet was used so that we could pull and start the container on boot of the device, and so that the application could be automatically updated on a timed interval.

    While this approach is appropriate in many cases, some organizations will want to have more control over when and how the application is updated. In addition, as the number or complexity of applications deployed on an edge device increases, it may become necessary to manage a group of containers and more easily control the routes to those applications.  

    Kubernetes allows groups of containers to be managed, along with the internal and external routes to them. Building on that capability, Red Hat’s Advanced Cluster Management for Kubernetes allows deployments to be controlled centrally, simplifying management of a larger fleet as well as providing control over how and when applications are updated.

    Kubernetes at the edge: Hello MicroShift

    With edge devices growing in terms of their CPU and memory (for example, 4 cores and 8 GB of memory for the Raspberry Pi 4 in our example) and Kubernetes distributions targeting constrained environments, it is possible to run a capable Kubernetes distribution at the edge. MicroShift is one such distribution; its minimum requirements were only 2 CPU cores, 2-3 GB of RAM, and 10 GB of storage at the time this article was published.

    Note that MicroShift is not supported on Fedora IoT, and RHEL is not supported on the Raspberry Pi, so what we’ll experiment with in this article is not supported in any way. On the other hand, if you are using a device that supports Red Hat Device Edge, installing MicroShift is as easy as adding a few packages to the blueprint used to create the image for the device. The article Meet Red Hat Device Edge with MicroShift covers how to create that blueprint, so we won’t cover it here. Instead, we’ll skip forward, assuming we have MicroShift running on our target device.

    Deploying our Node.js application in Kubernetes

    We can start by deploying the gas-station application to Kubernetes locally. To do that we need a Kubernetes Deployment to run the container and a Kubernetes service to allow the port exposed by the application to be reached. A basic example would be (http://github.com/mhdawson/gas-station/blob/main/deployment.yaml):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: gas-station-deployment
      labels:
        app: gas-station
    spec:
      selector:
        matchLabels:
          app: gas-station
      replicas: 1
      template:
        metadata:
          labels:
            app: gas-station
        spec:
          containers:
          - name: gas-station
            image: quay.io/midawson/gas-station:latest
            ports:
            - containerPort: 3000
            resources:
              limits:
                memory: 128Mi
                cpu: "500m"
              requests:
                memory: 128Mi
                cpu: "250m"
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: gas-station
    spec:
      selector:
        app: gas-station
      ports:
        - protocol: TCP
          port: 3000
          targetPort: 3000
          nodePort: 30011
      type: NodePort

    This deploys the same container that we built in part 2—quay.io/midawson/gas-station:latest.
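
    For reference, applying this manifest on the MicroShift device follows the standard oc/kubectl workflow. A minimal sketch, assuming we deploy into a namespace called test (the same namespace used in the next section):

    oc create namespace test
    oc apply -f deployment.yaml -n test

    # check that the pod started and the NodePort service exists
    oc get pods,services -n test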

    Unfortunately, as was the case when we first containerized the application and ran it with Podman, the GPIO switches are not available in the container and the tank will show green even if one of the caps is removed, as shown in Figures 4 and 5.

     

    Figure 4: Tank simulator with one tank cap open.

     

    Figure 5: gas-station UI showing both tank caps are closed.

    OK, so we’ll just use the equivalent of the Podman --device option, right? Unfortunately, this does not exist as a simple option to add to our container specification. As in part 2, one of the interesting parts of running our application on the edge device is going to be enabling access to the hardware devices.

    Getting access to the device in the container: Approach 1

    Most of the available instructions/advice on the internet on how to get access to devices in a container running under Kubernetes suggest you make the container privileged by configuring the securityContext:

          containers:
          - name: gas-station
            securityContext:
              privileged: true
            image: quay.io/midawson/gas-station:latest
            ports:
            - containerPort: 3000
            resources:
              limits:
                memory: 128Mi
                cpu: "500m"
              requests:
                memory: 128Mi
                cpu: "250m"

    When a container is privileged it has broad access to the host, including all of the devices on the host. You can read more about configuring a securityContext in Configure a Security Context for a Pod or Container. This gives significant capabilities to the container. Therefore, by default, deployments that have containers configured this way will be rejected when you try to deploy them to MicroShift.

    When we first tried to deploy the application we got this error:

    Warning: would violate PodSecurity "restricted:v1.24": privileged (container "gas-station" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "gas-station" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "gas-station" must set securityContext.capabilities.drop=["ALL"])
    deployment.apps/michael-station-deployment created

    This is because of pod security admission policies which by default restrict what privileges containers may be given. You can read more about these policies in Pod Security Admission.

    Unfortunately, if you follow the suggestions in the warning you will end up with a deployment where the container cannot access the device for the GPIO. Instead, we needed to modify the admission policy for the namespace we created so that it accepts privileged containers. We did that as follows:

     oc label namespaces test pod-security.kubernetes.io/enforce=privileged --overwrite=true

    After that we still got an error because by default a newly created namespace is not associated with a credential that has the right permissions. We saw this from the error reported by the replica set created by the deployment:

    Warning  FailedCreate  14s (x15 over 96s)  replicaset-controller  Error creating: pods "michael-station-deployment-5f96c65958-" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider restricted-v2: .containers[0].privileged: Invalid value: true: Privileged containers are not allowed, provider "restricted": Forbidden: not usable by user or serviceaccount, provider "nonroot-v2": Forbidden: not usable by user or serviceaccount, provider "nonroot": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "hostnetwork-v2": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount]

    We discovered through trial and error that the default namespace does have enough privileges that the containers can start. However, our understanding is that you don’t want to use the default namespace for application deployments, as outlined in OpenShift Runtime Security Best Practices. Instead, we would have needed to figure out how to create the appropriate credentials and associate them with the namespace.
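
    We did not go down that path, but a rough sketch of what it might have looked like is below. The service account name (gas-station-sa) is purely illustrative; the idea is to grant the privileged security context constraint (SCC) to a service account in the namespace and run the deployment under that account:

    # create a dedicated service account in the test namespace
    oc create serviceaccount gas-station-sa -n test

    # allow that service account to use the privileged SCC
    oc adm policy add-scc-to-user privileged -z gas-station-sa -n test

    The pod template in the deployment would then reference the service account:

        spec:
          serviceAccountName: gas-station-sa
          containers:
          - name: gas-station
            securityContext:
              privileged: true
            image: quay.io/midawson/gas-station:latest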

    Another lesson we learned is that you should be careful about adding options beyond "privileged: true" to the securityContext. We believe that adding some of the options suggested in the pod admission warning caused us trouble in getting access to the devices.

    The upside to the trouble we ran into is that along the way we also discovered an alternative way to make the device accessible in the container. Depending on what’s important in terms of security for your device, this alternate approach (covered in the next section) might make sense for you. It’s what we used for the rest of what we are sharing in this post.

    Before we move on to the second approach, Figure 6 shows the devices present in the gas-station container when running privileged in the default namespace.

    Figure 6: Output of "ls /dev/*" in the deployed container.

    Since we can see /dev/gpiochip0, the GPIO switches can now be read and the application worked as expected.

    Getting access to the device in the container: Approach 2

    As we mentioned, we had some trouble getting all of the devices to show up in the container using a privileged securityContext, so we searched for other alternatives.

    What we discovered is that there are configuration options in the container engine (CRI-O) used by MicroShift. The CRI-O configuration file is located at:

    /etc/crio/crio.conf

    It has two sections that are of interest with respect to accessing devices:

    # List of devices on the host that a
    # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
    allowed_devices = [
    ]
     
    # List of additional devices. specified as
    # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
    # If it is empty or commented out, only the devices
    # defined in the container json file by the user/kube will be added.
    additional_devices = [
    ] 

    The first section allows a set of devices to be configured so that they can be requested based on an annotation on the container being deployed. This should give a capability similar to using --device with Podman.  

    The second section lists additional devices that should be mapped into all containers. This is the option we used by configuring it as follows:

    # List of additional devices. specified as
    additional_devices = [
      "/dev/gpiochip0:/dev/gpiochip0:rwm"
    ] 

    After adding that configuration, we restarted CRI-O with:

    systemctl restart crio

    The /dev/gpiochip0 device showed up within our gas-station container without needing to be privileged!
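
    A quick way to confirm this is to list the device from inside the running container, for example:

    oc exec deployment/gas-station-deployment -- ls -l /dev/gpiochip0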

    The advantage of this approach is that the gas-station container does not need to be given broad access to the host. The downside is that the device is available to all containers. This is similar to kernel modules (for example, the one loaded for the temperature sensor), so it may be appropriate for some deployments.

    The tradeoff is between giving one container (the one which needs to access a device) broad access to the host, versus giving all containers access to the device. We think that in some deployments (including our gas-station example) giving all containers access to the additional device is a lower risk than giving the container that accesses the device broad access to the host. Using the allowed_devices configuration along with an annotation would make this even better, so that only the containers we tag with the annotation can access the device.
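
    We did not test the annotation-based approach, but based on the comments in crio.conf, a sketch of it might look like the following. The device would be listed in allowed_devices instead of additional_devices:

    allowed_devices = [
      "/dev/gpiochip0"
    ]

    and the pod template in the deployment would request the device through the annotation named in the crio.conf comment:

      template:
        metadata:
          labels:
            app: gas-station
          annotations:
            io.kubernetes.cri-o.Devices: "/dev/gpiochip0"

    Depending on the CRI-O version, the annotation may also need to be enabled for the runtime through its allowed_annotations setting, so check the CRI-O documentation for the version shipped with your MicroShift release.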

    Using the additional_devices configuration, this is the list of devices we see in the gas-station container, as shown in Figure 7.

    Figure 7: Output of "ls /dev/*" in the deployed container with the second approach.

    This is without having to set a securityContext, without having to change the pod admission policy for the namespace, and without having to associate a special credential with the namespace.

    We now have a fully working application once again, as shown in Figures 8 and 9.

     

    Figure 8: Tank simulator with one tank cap open.

     

    Figure 9: gas-station UI showing one tank cap open.

    External connectivity

    In part 2, when running with Podman, we could run our application container on the port of our choosing by adding the port to the blueprint and running Podman with the -p option to specify the external port. Now that we are running under Kubernetes, it is more flexible, but also more complicated.

    In our example we created a nodePort service:

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: gas-station
    spec:
      selector:
        app: gas-station
      ports:
        - protocol: TCP
          port: 3000
          targetPort: 3000
          nodePort: 30011
      type: NodePort

    By default, the ports available for a NodePort service are in the range 30000-32767, so we cannot reuse port 3000 that way. Instead, we can run on another port like 30011 and then use a port forward on the machine to achieve something similar by adding a firewall rule like:

    firewall-cmd --add-rich-rule='rule family=ipv4 forward-port to-port=3000 protocol=tcp port=30011'
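
    Note that this only changes the running firewall configuration. To keep the rule across reboots, you would also add it permanently and reload, for example:

    firewall-cmd --permanent --add-rich-rule='rule family=ipv4 forward-port to-port=3000 protocol=tcp port=30011'
    firewall-cmd --reload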

    Running under Kubernetes also brings the ability to have multiple instances of the container and load balancing across them. In that case we could configure a load balancer as outlined in howto_load_balancer from the MicroShift user guide.

    The TL;DR is that we can keep it simple, with a single container listening on a port exposed by the edge device, while also having a lot more of the flexibility that comes with Kubernetes.

    Updating the application

    In the previous sections we’ve covered deploying the application locally with Kubernetes. If we continue to control deployment locally, this has not changed much from using Podman/Quadlet, unless we have a complex application with multiple running containers that need to be managed together. That alone might be enough to motivate using Kubernetes in some edge deployments.

    Application updates would be quite similar to when using Podman, with either a timed or custom-built process for updating the deployment for the application. As before, carefully planning how you manage tags is important; see Docker Tagging: Best practices for tagging and versioning docker images.
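
    For example, with versioned tags the build and push from part 2 might look like the following (the v0.2 tag is illustrative; the deployment shown earlier used the latest tag):

    podman build -t quay.io/midawson/gas-station:v0.2 .
    podman push quay.io/midawson/gas-station:v0.2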

    The second advantage that running under Kubernetes unlocks is being able to remotely control the deployments through tools like Red Hat’s Advanced Cluster Management for Kubernetes (ACM). Even for a simple application with one running container, the ability to remotely monitor and manage deployments may be just what an organization is looking for.

    Red Hat’s Advanced Cluster Management for Kubernetes allows a cluster to be enrolled. Once enrolled you can push deployments to the cluster. The general model would be as follows:

    1. Build the edge device so that the base image includes MicroShift but no deployment. 
    2. Enroll the device to connect it with ACM.
    3. Use ACM to deploy and update applications.

    In earlier sections we talked about building the base image to include MicroShift. Next, we’ll move on to steps 2 and 3.

    Enrolling with Advanced Cluster Management for Kubernetes

    Installing Advanced Cluster Management for Kubernetes (ACM) is relatively simple: install the operator provided through OperatorHub and follow the instructions, as shown in Figure 10.

    Figure 10: Installation of Advanced Cluster Management for Kubernetes.

    Once ACM is installed you can switch to the All Clusters view, as shown in Figure 11.

    Figure 11: Switching to the all clusters view.

    From there you have the option to import clusters, as shown in Figure 12.

    Figure 12: Option to import clusters.

    When importing you have a number of options on how to import the cluster, as shown in Figure 13.

    Figure 13: Options for importing an existing cluster.

    In our case we chose the Kubeconfig option, where you get the Kubeconfig for the device and paste it into the form.
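
    On the device itself, MicroShift generates kubeconfig files under /var/lib/microshift/resources/kubeadmin/ (the exact layout varies by MicroShift version), so getting the contents to paste into the form can be as simple as:

    sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig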

    Once imported, the cluster shows up in the all clusters view. We named the cluster running on the device gas-station-pi-4, as shown in Figure 14.

    Figure 14: The imported cluster running on the Raspberry Pi.

    At this point we are able to deploy workloads to the cluster. In the case of a fleet of edge devices you would want to automate this. The article Bring your own fleet with Red Hat Advanced Cluster Management for Kubernetes auto-import and automation tools has some suggestions on how to do that.

    Using ACM to deploy and update applications

    Now we get to the good part: we can deploy an application to our edge device remotely through ACM. We do that through the Applications page. There are a number of options when creating the application, as shown in Figure 15.

    Figure 15: Deploying applications.

    We chose to use a subscription. A subscription has a few options, as illustrated in Figure 15.

    Figure 15: The options for creating an application.

    We chose Git, which pulls the application from a GitHub repository, and specified that the application should be deployed to the gas-station namespace. We’ll use the same gas-station GitHub repository that we’ve been using throughout the series:

    http://github.com/mhdawson/gas-station

    We added the deployment file that we took you through earlier to that repository:

    http://github.com/mhdawson/gas-station/blob/main/deployment.yaml

    We can specify the repository when creating the Git subscription along with other options like a subdirectory, branch, or specific tag, but because our deployment.yaml is at the top level we can keep it simple, as shown in Figure 16.

    Figure 16: Creating the application using a Git subscription.

    We can also select rules for which clusters the application will be deployed to, as shown in Figure 17. You can see how this could be used to deploy to a set of clusters that were labeled as being devices within different gas stations.

    Figure 17: Selecting the placement option for the new application.

    In our case we kept it simple and just specified a rule that matched the cluster running on our edge device.

    Once we hit the Create button, the application is automatically deployed and will start running on the edge device. From the topology view we can see what was deployed, as shown in Figure 18.

    Figure 18: Topology for the newly deployed application.

    Not only can we see the deployed components, but we can also look at the logs and other details for each of the components, as shown in Figures 19 and 20.

    Figure 19: Additional details for the component selected from the topology.
    Figure 20: The logs from the pod.

    From those pages we can see that our Next.js application running on Node.js is up and running on the device.

    We can also confirm that we can access the application by navigating to the port that was exposed (Figure 21):

    Figure 21: The application running on port 30011.
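
    The same check can be done from the command line. Assuming the edge device is reachable at a hostname or IP address of your choosing (gas-station-pi-4.local is just a placeholder):

    curl http://gas-station-pi-4.local:30011/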

    If we want to expose the application on a different port (for example, 80 or 443), we could use a port forward as we did in the local deployment, or use some of the other ingress options in Kubernetes.

    To update the application, we can now:

    1. Push an updated version of the application to Quay.io and tag it appropriately.
    2. Update the deployment file in the GitHub repository for the application; for example, move the image tag from v0.1 to v0.2 (see the sketch after this list).
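
    The change in deployment.yaml is just the image tag. A sketch, keeping in mind that the version tags here are illustrative and the deployment shown earlier in this post used the latest tag:

          - name: gas-station
            image: quay.io/midawson/gas-station:v0.2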

    ACM checks the GitHub repository every minute by default and will update the deployed application when changes are detected. You can also ask for synchronization through the Sync option on the Overview page for the application as shown in Figure 22.

    Figure 22: Summary page for the deployed gas-station application with the Sync option.

    One of the key motivators for moving to Kubernetes was to get more control over when and how the application is updated. ACM provides a number of different options to control when an application will be updated after a change is detected in the GitHub repo. This can include specifying specific time windows, excluding time windows, and more. You can read about all of the options in the Scheduling a deployment section in the Managing applications section of the ACM documentation.  
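
    As a rough illustration, a time window on the subscription that ACM creates for the application might look like the following. This is a sketch only; the channel name is illustrative, and the exact field names should be checked against the ACM documentation for your version:

    apiVersion: apps.open-cluster-management.io/v1
    kind: Subscription
    metadata:
      name: gas-station-subscription
    spec:
      channel: gas-station-ns/gas-station-channel
      timewindow:
        windowtype: active
        location: America/Toronto
        daysofweek:
          - Sunday
        hours:
          - start: "01:00AM"
            end: "03:00AM"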

    At this point, we’ve achieved a high level of control over when and how the application is updated on the edge device, and we’ve reached the end of our journey with remote management of the application, as shown in Figure 23:

    Figure 23: Deploying to multiple edge devices.

    Wrapping up

    This three-part series on managing Node.js applications at the edge took you along the journey from manually building and installing a Node.js application on the edge device all the way to being able to remotely deploy and update the application. Regardless of which point along this journey makes sense for you, I hope this guide helped you learn about some of the considerations and options that are available, as well as how to navigate the common challenges along the way.

    Node.js and JavaScript at the edge: The why, what, and how

    You can watch the following video to learn more about building, deploying, and managing Node.js applications running on the edge.

    If you would like to learn more about what the Red Hat Node.js team is up to, you can check out the Node.js product page,  the Node.js topic page, and the Node.js Reference Architecture.
