
Introduction

Pextra CloudEnvironment® is a modern private cloud management and virtualization platform. It is capable of managing globally-distributed datacenters and provides a unified, multi-tenant management interface for all resources. It is designed to be highly scalable and flexible, with a focus on security and ease of use. Storage, networking, and compute resources are completely abstracted and software-defined, allowing for easy management and automation of all aspects of the deployment.

This guide provides comprehensive documentation on how to install, administer, and use Pextra CloudEnvironment®. It assumes minimal prior knowledge and is designed to be accessible to users of all skill levels, from beginners to experts.

License

This documentation is licensed under the Creative Commons Attribution-ShareAlike 4.0 International license.

For AI Agents

We support the /llms.txt standard for providing structured context to help LLMs understand Pextra CloudEnvironment®.

The following /llms.txt files are available for use:

  • /llms.txt: A shorter file that provides links and metadata about the documentation.
  • /llms-full.txt: The complete documentation in a single file.

Pre-Installation Steps

Before installing Pextra CloudEnvironment®, ensure that you have completed all the items in this checklist. This will help ensure a smooth installation process and optimal performance of your private cloud environment.

a) Check System Requirements

  1. Review the system requirements for Pextra CloudEnvironment®.
  2. Check for any unsupported configurations that may affect your installation.
  3. For production workloads, review the officially-supported servers list for optimal performance.

b) Obtain License Keys

  1. Visit portal.pextra.cloud to obtain a Pextra CloudEnvironment® license key (to get a free evaluation license, fill out the form here). One license per node is required. This license is required at installation time.
  2. Visit cockroachlabs.cloud to obtain a CockroachDB license key. One license per complete deployment (spanning all datacenters, clusters, and nodes) is required. This license is required after installation.

c) Prepare Installation Media

  1. Download the Pextra CloudEnvironment® ISO from the portal or the link provided in your license email.
  2. Verify the ISO checksum to ensure file integrity and authenticity.
  3. Prepare a bootable USB drive for installation.

d) Back Up Existing Data

  1. Back up any existing data on the servers that will be used for installation, as the installation process may overwrite existing data.

Additional Resources

  1. Familiarize yourself with the support subscriptions available for Pextra CloudEnvironment®.
  2. Join the community forums for additional support and to connect with other users.
  3. Review other documentation pages for detailed guides and troubleshooting tips.

System Requirements

This section outlines the system requirements for the Pextra CloudEnvironment® platform, including CPU, memory, storage, and network requirements.

Hardware Requirements

Every node running Pextra CloudEnvironment® must meet the following minimum hardware requirements. These requirements are designed to ensure optimal performance and reliability of the platform.

Minimum Hardware Requirements

note

While it is possible to run the platform with these specifications, it is not recommended for deployment in production environments.

Component   Requirement
CPU         4 cores, x86_64/aarch64 [1], VT-x/AMD-V, AVX2 [2]
Memory      8 GB
Storage     16 GB HDD
Network     1 Gbps

Recommended Hardware Requirements

note

The recommended hardware requirements are based on the average workload of a small to medium-sized business. For larger deployments, consider scaling up the hardware specifications accordingly.

Component   Requirement
CPU         8 cores
Memory      32 GB
Storage     128 GB SSD
Network     1 Gbps

Notes


  1. The platform is only supported on 64-bit CPUs with the x86_64 (amd64) or aarch64 (arm64) architectures. arm64 support was added in release 1.10.5+6816a0c. 32-bit CPUs will never be supported.

  2. These extensions are supported by all modern CPUs. The platform may function without virtualization extensions (VT-x/AMD-V), but AVX2 is a strict requirement. Running the platform without virtualization extensions is not supported nor recommended.

Officially-Supported Servers

The following enterprise-grade servers are officially supported by Pextra CloudEnvironment®. These servers are recommended for production environments.

Server Model          | Manufacturer | CPU                               | Memory            | Storage [1]
PowerEdge R770        | Dell EMC     | Dual Intel Xeon 6 (up to 144c)    | Up to 8 TB DDR5   | Up to 40 NVMe / 24 SAS/SATA SFF
ProLiant DL380 Gen12  | HPE          | Dual Intel Xeon 6 (8–144c)        | Up to 8 TB DDR5   | Up to 36 EDSFF E3.S NVMe
ThinkSystem SR665     | Lenovo       | Dual AMD EPYC 9004 (up to 128c)   | Up to 8 TB DDR5   | Up to 40×2.5″ / 20×3.5″ bays
PowerEdge R6525       | Dell EMC     | Dual AMD EPYC 7002/7003           | Up to 4 TB DDR4   | Flexible NVMe/SAS options
ProLiant DL325 Gen12  | HPE          | Single AMD EPYC Gen5 (up to 192c) | Up to 6 TB DDR5   | Up to 20 EDSFF NVMe SSDs
ThinkSystem SR630 V3  | Lenovo       | Dual AMD EPYC 9004/9005           | Up to 8 TB DDR5   | High-density NVMe bays
PowerEdge R6515       | Dell EMC     | Single AMD EPYC (dense node)      | Up to 2 TB DDR4   | NVMe/HDD hybrid options
ProLiant DL360 Gen12  | HPE          | Dual Intel Xeon 6                 | Up to 8 TB DDR5   | NVMe/SAS mix front bays
ThinkSystem SR650 V3  | Lenovo       | Dual Intel Xeon / AMD EPYC        | Up to 8 TB DDR5   | Up to 36 NVMe drives
PowerEdge R750        | Dell EMC     | Dual Intel Xeon Scalable Gen3     | Up to 8 TB DDR4   | Up to 28 SFF or 12 LFF bays
ProLiant DL145 Gen11  | HPE          | Dual AMD EPYC 8004                | Up to 768 GB DDR5 | Up to 6× EDSFF E3.S NVMe
ThinkSystem SR860 V3  | Lenovo       | Quad-socket Intel/AMD             | Up to 12 TB DDR5  | Up to 48×2.5″, GPU ready
PowerEdge C6525       | Dell EMC     | Dual AMD EPYC (2U/4-node)         | High capacity     | HPC-optimized dense nodes
ProLiant ML350 Gen12  | HPE          | Dual Intel Xeon 6/Scalable        | Up to 8 TB DDR5   | Tower form, mix LFF/SFF/EDSFF

Generally, any server that meets the minimum hardware requirements should work with Pextra CloudEnvironment®. However, we recommend using enterprise-grade servers for production environments to ensure optimal performance and reliability.

Notes


  1. Hardware-based RAID cards are NOT supported. Please see the Unsupported Configurations section for more information.

Unsupported Configurations

Pextra CloudEnvironment® runs on a variety of hardware configurations, but there are certain configurations that are not supported. This list is not exhaustive, but it covers the most common unsupported configurations. If you encounter any issues with your server configuration, please contact support for assistance.

Hardware-Based RAID Cards

Hardware-based RAID cards are NOT supported. The platform requires direct access to the underlying storage devices for optimal performance and reliability. Hardware RAID can introduce complexity and potential issues with data integrity, especially in virtualized environments.

Workaround: For each disk, create a RAID0 (striped) array with a single disk.

warning

This has been reported to work, but it is not officially supported. Use this workaround at your own risk.

32-Bit CPUs

Pextra CloudEnvironment® does not support 32-bit CPUs. The platform requires a 64-bit CPU. 32-bit CPUs will never be supported.

Workaround: Use a different server with a 64-bit CPU architecture.

Installation

This section provides instructions for downloading the installation ISO and preparing it for installation. It includes steps for creating a bootable USB drive or DVD, as well as running the Pextra CloudEnvironment® installer on your server.

Downloading the Installer

note

The ISO file is approximately 2 GB in size. Make sure you have enough disk space before downloading, and a stable internet connection to avoid download interruptions.

  1. Log into the Pextra Customer Portal.

  2. Click on “Download ISO”, then click on “Generate” to generate download links for the latest version of Pextra CloudEnvironment®: Pextra Customer Portal

  3. Click on the download link to download the ISO file.

After the download is complete, it is strongly recommended to verify the integrity of the downloaded ISO file using the SHA256 and GPG signatures provided on the download page.

Verify File Integrity

note

This step is optional but highly recommended. Verifying the integrity of the downloaded ISO file ensures that the file came from Pextra Inc. and has not been tampered with.

Follow the instructions below for your operating system to verify the file integrity. If file integrity verification fails at any point, do not proceed with the installation.

Before verifying GPG signatures, download our GPG public key.

Linux

Linux users can use the sha256sum and gpg commands to verify the SHA256 checksum and GPG signature of the downloaded ISO file. sha256sum is usually pre-installed on most Linux distributions, while gpg is also commonly available. If you do not have gpg installed, you can install it using your package manager (e.g., apt, pacman, yum, etc.).

SHA256 Checksum

  1. Make sure to download the SHA256 checksum file (the file that ends with .sha256) from the Pextra Customer Portal.
  2. Open a terminal and navigate to the directory where the downloaded ISO file and SHA256 checksum file are located.
  3. Calculate the SHA256 checksum of the downloaded ISO file using the following command:
    sha256sum pextra-ce.iso
    
  4. Compare the output with the SHA256 checksum provided on the download page. If they match, the file is intact.
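Instead of comparing hashes by eye, sha256sum can check the downloaded checksum file automatically. The snippet below is a self-contained sketch that uses a stand-in file; against the real download you would simply run `sha256sum -c pextra-ce.iso.sha256` in the directory containing both files.

```shell
# Self-contained demo: create a stand-in ISO and checksum file, then
# let sha256sum compare them. With the real download, only the last
# command is needed.
echo "demo contents" > pextra-ce.iso
sha256sum pextra-ce.iso > pextra-ce.iso.sha256

# Reads "<hash>  <filename>" lines and verifies each listed file.
sha256sum -c pextra-ce.iso.sha256
# prints: pextra-ce.iso: OK
```

A non-zero exit status (and a `FAILED` line) means the file does not match; do not proceed with the installation in that case.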

GPG Signature

Two signatures are provided: one for the SHA256 checksum file and one for the ISO file itself. Verifying the SHA256 checksum file is sufficient and faster.

  1. Make sure to download the GPG signature file (the file that ends with .sha256.asc) from the Pextra Customer Portal.
  2. Import the Pextra Inc. GPG public key using the following command:
    gpg --import pextra-gpg-key.asc
    
  3. Verify the SHA256 checksum file using the following command:
    gpg --verify pextra-ce.iso.sha256.asc pextra-ce.iso.sha256
    
  4. If the output indicates that the signature is valid, the file is intact. If it indicates that the signature is not valid, do not proceed with the installation.

Verifying the ISO file itself is similar:

  1. Make sure to download the GPG signature file (the file that ends with .iso.asc) from the Pextra Customer Portal.
  2. Verify the ISO file using the following command:
    gpg --verify pextra-ce.iso.asc pextra-ce.iso
    
  3. If the output indicates that the signature is valid, the file is intact. If it indicates that the signature is not valid, do not proceed with the installation.

macOS

macOS users can use the shasum and gpg commands to verify the SHA256 checksum and GPG signature of the downloaded ISO file.

SHA256 Checksum

  1. Make sure to download the SHA256 checksum file (the file that ends with .sha256) from the Pextra Customer Portal.
  2. Open a terminal and navigate to the directory where the downloaded ISO file and SHA256 checksum file are located.
  3. Calculate the SHA256 checksum of the downloaded ISO file using the following command:
    shasum -a 256 pextra-ce.iso
    
  4. Compare the output with the SHA256 checksum provided on the download page. If they match, the file is intact.

GPG Signature

  1. Make sure to download the GPG signature file (the file that ends with .sha256.asc) from the Pextra Customer Portal.
  2. Import the Pextra Inc. GPG public key using the following command:
    gpg --import pextra-gpg-key.asc
    
  3. Verify the SHA256 checksum file using the following command:
    gpg --verify pextra-ce.iso.sha256.asc pextra-ce.iso.sha256
    
  4. If the output indicates that the signature is valid, the file is intact. If it indicates that the signature is not valid, do not proceed with the installation.

Verifying the ISO file itself is similar:

  1. Make sure to download the GPG signature file (the file that ends with .iso.asc) from the Pextra Customer Portal.
  2. Verify the ISO file using the following command:
    gpg --verify pextra-ce.iso.asc pextra-ce.iso
    
  3. If the output indicates that the signature is valid, the file is intact. If it indicates that the signature is not valid, do not proceed with the installation.

Windows

Windows users can use the built-in CertUtil command-line utility to verify the SHA256 checksum. For GPG signatures, GPG4Win can be used, as Windows does not have a built-in method to verify GPG signatures. GPG4Win is free and open source software.

SHA256 Checksum

  1. Make sure to download the SHA256 checksum file (the file that ends with .sha256) from the Pextra Customer Portal.
  2. Open PowerShell and navigate to the directory where the downloaded ISO file and SHA256 checksum file are located.
  3. Calculate the SHA256 checksum of the downloaded ISO file using the following command:
    CertUtil -hashfile pextra-ce.iso SHA256
    
  4. Compare the output with the SHA256 checksum provided on the download page. If they match, the file is intact.
  5. If the output does not match, do not proceed with the installation.

GPG Signature

  1. Download the latest version of GPG4Win and install it.
  2. Make sure to download the GPG signature file (the file that ends with .sha256.asc) from the Pextra Customer Portal.
  3. Open PowerShell and navigate to the directory where the downloaded ISO file and GPG signature file are located.
  4. Import the Pextra Inc. GPG public key using the following command:
    gpg --import pextra-gpg-key.asc
    
  5. Verify the SHA256 checksum file using the following command:
    gpg --verify pextra-ce.iso.sha256.asc pextra-ce.iso.sha256
    
  6. If the output indicates that the signature is valid, the file is intact. If it indicates that the signature is not valid, do not proceed with the installation.

Verifying the ISO file itself is similar:

  1. Make sure to download the GPG signature file (the file that ends with .iso.asc) from the Pextra Customer Portal.
  2. Verify the ISO file using the following command:
    gpg --verify pextra-ce.iso.asc pextra-ce.iso
    
  3. If the output indicates that the signature is valid, the file is intact. If it indicates that the signature is not valid, do not proceed with the installation.

Preparing Installation Media

Now that you have downloaded the ISO installer, you need to create a bootable USB drive or DVD. Follow the instructions below for your operating system to create the installation media.

warning

Creating a bootable USB drive will erase all data on the selected drive. Make sure to back up any important data before proceeding.

Linux

Linux users can use the dd command to create a bootable USB drive. dd is a built-in command and does not require any additional software.

  1. Insert a USB drive with at least 8 GB of space. Make sure to back up any important data on the drive, as it will be formatted.
  2. Open a terminal and run the command lsblk to identify the device name of the USB drive (e.g., /dev/sdX, where X is the letter assigned to your USB drive).
  3. Unmount the USB drive using the command (you may need to use sudo):
     umount /dev/sdX*
    
  4. Use the dd command to create a bootable USB drive. Replace /path/to/pextra-ce.iso with the path to the downloaded ISO file and /dev/sdX with the device name of your USB drive (e.g., /dev/sdb):
    dd if=/path/to/pextra-ce.iso of=/dev/sdX bs=4M status=progress
    
  5. After the process is complete, run the following command to ensure all data is written to the USB drive:
    sync
    
  6. Safely eject the USB drive using the command (you may need to use sudo):
    eject /dev/sdX
    

Your USB drive is now ready to be used for installation.
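Steps 3 through 5 above can be condensed into a small helper. The `flash_usb` function below is an illustrative sketch of those commands (the function name is our own, not a Pextra-provided tool); it writes to whatever device you pass, so double-check the device path before running it.

```shell
# flash_usb ISO DEVICE -- unmount, write, and flush, per steps 3-5.
# Destructive: everything on DEVICE is overwritten. Ejecting the
# drive afterwards is left to the user.
flash_usb() {
  iso=$1
  dev=$2
  umount "$dev"* 2>/dev/null || true            # step 3: unmount partitions
  dd if="$iso" of="$dev" bs=4M status=progress  # step 4: write the image
  sync                                          # step 5: flush buffers
}

# Usage (replace /dev/sdX with your USB device; run as root):
# flash_usb /path/to/pextra-ce.iso /dev/sdX
```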

macOS

macOS users can also use the dd command to create a bootable USB drive. The process is similar to Linux, but with some differences in the commands used.

  1. Insert a USB drive with at least 8 GB of space. Make sure to back up any important data on the drive, as it will be formatted.
  2. Open a terminal and run the command diskutil list to identify the device name of the USB drive (e.g., /dev/diskX, where X is the number assigned to your USB drive).
  3. Unmount the USB drive using the command (you may need to use sudo):
    diskutil unmountDisk /dev/diskX
    
  4. Use the dd command to create a bootable USB drive. Replace /path/to/pextra-ce.iso with the path to the downloaded ISO file and /dev/diskX with the device name of your USB drive (e.g., /dev/disk2):
    sudo dd if=/path/to/pextra-ce.iso of=/dev/diskX bs=4m status=progress
    
  5. After the process is complete, run the following command to ensure all data is written to the USB drive:
    sync
    
  6. Safely eject the USB drive using the command (you may need to use sudo):
    diskutil eject /dev/diskX
    

Your USB drive is now ready to be used for installation.

Windows

Windows users can use Rufus in DD mode to create a bootable USB drive, as there is no built-in mechanism to create bootable USB drives from ISO files. Rufus is free and open source software.

  1. Download the latest version of Rufus and run it.

  2. Insert a USB drive with at least 8GB of space. Make sure to back up any important data on the drive, as it will be formatted. In Rufus, select the USB drive by clicking on the “Device” dropdown menu:

    Rufus

  3. Select the downloaded ISO file by clicking on the “SELECT” button. Navigate to the location where you saved the ISO file and select it:

    Rufus select ISO

  4. With the USB and ISO ready, the window should look similar to this. Click the “START” button to begin the process:

    Rufus ready

  5. A pop-up window will appear. Select “Write in DD Image mode” and click “OK”:

    Rufus select DD

  6. Another pop-up window will appear, warning you that all data on the USB drive will be erased. Click “OK” to proceed:

    Rufus confirm

  7. Once the process is complete, the bar will be green and say “READY”. You can close Rufus:

    Rufus complete

  8. Safely eject the USB drive from your computer.

Your USB drive is now ready to be used for installation.

Booting from the Installation Media

  1. Insert the bootable USB drive or DVD into the server.
  2. Power on the server and enter the BIOS/UEFI settings (usually by pressing F2, F10, or DEL during startup).
  3. Change the boot order to prioritize the USB drive or DVD.
  4. Save the changes and exit the BIOS/UEFI settings.
  5. The server should boot from the installation media, and you will see the bootloader screen: Pextra CloudEnvironment® Installer

Press the Enter key to start the installation process. You can now proceed with the installation steps.

Installation Steps

Follow the steps below to install Pextra CloudEnvironment® on your server.

Steps

  1. Acknowledge the End User License Agreement (EULA).

  2. Configure the management network.

    • The installer will automatically detect network interface configuration from DHCP.
    • The server IP must not change after installation (unless you are an advanced user—see the related Community FAQ).
    • If your network interface does not appear, please let us know.
  3. Enter your license key.

    • If you do not have a license key, refer to the Pre-Installation section for more information on obtaining a license.
  4. Configure the default organization and timezone.

    • This is the root organization (the owner of the deployment) that has access to all resources.
    • Additional organizations can be created later.
    • It is highly recommended to set the timezone to Etc/UTC; however, you can choose your local timezone if needed.
  5. Configure the administrator user.

    • This user is the root user of the deployment and has access to all resources.
    • Choose a strong password and make sure to remember it.
    • After the installation, it is recommended to create an additional user with limited permissions for day-to-day operations.
    • For command-line access, the Linux user root’s password is set to the same password as the administrator user.
  6. Configure the boot disk.

    • The installer will automatically detect available disks. Choose the disk where you want to install the operating system.
    • The installer will format the selected disk, so make sure to back up any important data before proceeding.
  7. Finalize the installation.

    • A summary of your configuration will be displayed. Review the settings and click “Install” to begin the installation process.
  8. Wait for the installation to complete.

    • The installation process may take some time, depending on your network speed and hardware configuration. Typically, it takes about 20-30 minutes.
    • If you see any errors during the installation, please let us know.
  9. Reboot the server.

    • If you did not select “Auto-reboot” during the installation, you will need to click “Reboot” to restart the server.
    • Remove the installation media (USB drive or DVD) before rebooting, otherwise the server may boot from the installation media again.

Your server is now ready to use! To access the web interface, please refer to the Accessing the Web Interface section.

You can now proceed to perform post-installation steps to configure your deployment.

Post-Installation Steps

After the installation is complete, some additional steps must be performed to ensure that your Pextra CloudEnvironment® deployment is fully functional and optimized for your needs.

a) Upgrade to the latest version:

Refer to the System Upgrade section for instructions on how to upgrade to the latest version.

b) Set CockroachDB license key:

Refer to the Set CockroachDB License Key section for instructions on how to set the CockroachDB license key. This is not required if your node will join an existing, licensed cluster.

c) Join the node to an existing cluster (if applicable):

Refer to the Cluster Management section for instructions on how to join a node to a cluster if you are deploying a cluster or joining a node to an existing cluster.

d) Configure user accounts:

Refer to the Identity Access Management (IAM) section for instructions on how to create and manage user accounts and permissions.

e) Configure networking:

Refer to the Network Management section for instructions on how to configure network settings.

f) Configure storage pools:

Refer to the Storage Management section for instructions on how to create and manage storage pools.

g) Configure AI providers:

Refer to the AI Providers section for instructions on how to add and configure AI providers.

h) Monitor system performance:

Refer to the Monitoring & Metrics section for instructions on how to monitor system performance.

Web Interface

This section describes how to access and navigate the web interface of Pextra CloudEnvironment®. The web interface is the primary tool for managing your deployment, allowing you to perform various tasks, monitor system metrics, and configure settings.

note

The web interface requires a modern web browser with JavaScript enabled. It is recommended to use Mozilla Firefox or Google Chrome for the best experience.

Accessing the Web Interface

You can access the Pextra CloudEnvironment® web interface by entering the management IP address in your web browser. The default URL is https://<management-ip>:5007, where <management-ip> is the IP address you configured during the installation process.

note

The web interface uses HTTPS for secure communication. You will see a self-signed certificate warning in your browser. This is normal, as the certificate is generated during the installation process. You can safely ignore this warning and proceed to the web interface.

Logging In

To log in to the web interface, use the credentials you set during the installation process. The default username is pceadmin, and the password is the one you specified during installation: Login Page

Once logged in, you will be directed to the current node’s page [1].

Notes


  1. The IP address that you connect to (this is especially relevant for nodes in a cluster). The node that you are currently connected to is shown with a light green dot next to the node’s entry in the resource tree. All requests are proxied to the node that you are connected to.

Resource Tree

On the left side of the web interface, you will find a tree view that displays the hierarchy of your deployment. This view provides a complete overview of all organizations, datacenters, clusters, nodes, and instances within your deployment. You can expand and collapse the tree’s nodes to navigate through the different levels of your infrastructure: Resource Tree

AI Features

The Pextra CloudEnvironment® web interface includes two AI integrations:

  • Pextra Cortex™ (beta): A fully-capable AI agent for planning and executing complex orchestration tasks within Pextra CloudEnvironment®.

  • AI Assist: Context-sensitive AI completion and suggestions.

note

An administrator of the organization must configure at least one AI provider for AI features to function. If no AI providers are configured, AI features will not be available in the web interface.

Pextra Cortex™

Pextra Cortex™ is an advanced AI agent integrated into the Pextra CloudEnvironment® web interface. It is designed to assist users in planning and executing complex orchestration tasks, by leveraging the open-source Model Context Protocol (MCP) standard. Pextra Cortex™ can understand high-level user instructions and screenshots and translate them into actionable, executable plans [1].

note

An administrator of the organization must configure at least one AI provider for AI features to function. If no AI providers are configured, AI features will not be available in the web interface.

Usage

  1. To access Pextra Cortex™, click on the message bubble icon in the bottom-right corner of the web interface:

    Pextra Cortex Icon

  2. Provide instructions in natural language, then click the Send button. You can also attach screenshots:

    Pextra Cortex Input Pextra Cortex Screenshot

  3. Pextra Cortex™ will spend a few moments thinking. During this time, it may ask for clarifications or additional information to better understand your request:

    Pextra Cortex Thinking

  4. A response will be generated:

    Pextra Cortex Response

  5. For safety reasons, before executing any AI-generated plans, Pextra Cortex™ asks for user confirmation:

    Pextra Cortex Confirmation

Running Locally

note

It is recommended to use Pextra Cortex™ through the web interface for the best experience, as it is fully integrated and optimized for Pextra CloudEnvironment®.

Refer to the MCP server repository that powers Pextra Cortex™. This allows you to use a desktop app or command-line interface (CLI) to build AI agents that interact with Pextra CloudEnvironment®.

Notes


  1. Pextra AI features are powered by third-party AI providers, as configured in your organization settings. The quality and accuracy of responses may vary based on the provider and the specific task at hand. Always review AI-generated content.

AI Assist

Throughout the Pextra CloudEnvironment® web interface, you will find the AI Assist button, which provides context-sensitive suggestions and assistance. Describe your task in natural language, and the AI Assist feature will generate relevant suggestions to facilitate your work [1].

See the example below for a demonstration of how to use the AI Assist feature.

Before: AI Assist Button After: AI Assist Suggestions

note

An administrator of the organization must configure at least one AI provider for AI features to function. If no AI providers are configured, AI features will not be available in the web interface.

Notes


  1. Pextra AI features are powered by third-party AI providers, as configured in your organization settings. The quality and accuracy of responses may vary based on the provider and the specific task at hand. Always review AI-generated content.

Node Management

This section provides a guide to managing individual nodes within your deployment.

Nodes are the physical or virtual servers [1] that run Pextra CloudEnvironment®. They serve as the foundation of your infrastructure, providing the compute, storage, and network resources required by your deployment.

The ID prefix for nodes is node- [2].

Notes


  1. Running Pextra CloudEnvironment® in a virtual machine is in beta. Try running Pextra CloudEnvironment® inside of Pextra CloudEnvironment®!

  2. Resources in Pextra CloudEnvironment® are identified by unique IDs. Node IDs will have the prefix node-, followed by a unique identifier (e.g., node-qthm_iLrflJ_DtSS1l4Gx).

System Upgrade

System upgrades should be routinely performed in order to ensure that the latest bug fixes, security patches, and features are available.

note

A valid license key must be present when upgrading Pextra CloudEnvironment®. To set the node’s license key, refer to the Set License Key section.

warning

System upgrades will fail if they are not run as the root Linux user.

Console

  1. Access the node’s console through SSH or through the “Console” tab in the node view.

  2. First, update the node’s package index by running the following command:

    apt update
    

    This command may take some time to finish depending on the node’s connection speed.

  3. If any system upgrades are available, the following message will be shown:

    [xx] packages can be upgraded. Run 'apt list --upgradable' to see them.
    

    If this message is not shown, the node is on the latest version. No action is required.

  4. If the above message is shown, the node can be upgraded to the latest version by running the following command:

    apt upgrade
    

    This command may take a while to finish depending on the number of upgrades and the node’s connection speed.

Set License Key

License keys are long-lived and typically do not need to be changed. However, if you need to change the license key, you can do so by following these steps:

tip

License keys can be purchased from the Pextra Customer Portal. Support subscriptions are also available for purchase.

Web Interface

  1. Right-click on the node in the resource tree and select Set License Key: Right-Click

  2. A modal will appear. The current license key along with its validity will be displayed. Enter the new license key in the text box and click Confirm: Set License Key

  3. If any errors occur, they will be displayed; otherwise, the modal will close. For example: Error

To confirm that the license key has been set, you can check the licensing status of the node’s cluster.

Set CockroachDB License Key

Pextra CloudEnvironment®’s highly-scalable private cloud is built on CockroachDB’s distributed architecture. One license per complete deployment (spanning all datacenters, clusters, and nodes) is required.

warning

Pextra CloudEnvironment® will not be functional after one week (7 days) without a valid CockroachDB license key.

tip

CockroachDB license keys can be obtained from cockroachlabs.cloud.

Console

  1. Access the node’s console through SSH or through the “Console” tab in the node view.

  2. First, enter the CockroachDB console by running the following command:

    sudo cockroach sql --certs-dir=/usr/local/lib/cockroach/certs -u pextra_ce_pcedaemon
    
  3. Set the license key by running the following command in the CockroachDB console:

    SET CLUSTER SETTING enterprise.license = '<license key here>';
    
  4. To exit the CockroachDB console, press CTRL+C.

For more information, visit the CockroachDB licensing FAQs.
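To confirm the license was applied, you can read the setting back from the same CockroachDB console. This is standard CockroachDB SQL, not a Pextra-specific command:

```sql
-- Read back the license setting; an empty string means no license is set.
SHOW CLUSTER SETTING enterprise.license;
```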

Cluster Management

The Cluster Management section allows administrators to manage cluster configurations and metadata within Pextra Cloud. Each cluster represents a logical grouping of nodes, resources, and workloads.

The ID prefix for clusters is cls- [1].

tip

You can edit cluster details such as the name and description directly from the UI.
Changes take effect immediately after confirmation.


Notes


  1. Resources in Pextra CloudEnvironment® are identified by unique IDs. Cluster IDs have the prefix cls-, followed by a unique identifier (e.g., cls-YtR8FqvL29sKjb7WxD3Zn).
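As an illustration, this ID shape can be checked with a simple pattern. The 21-character identifier length and alphabet below are inferred from the example IDs in this guide, and the prefix list anticipates the other resource types covered later; treat both as assumptions, not a documented contract:

```python
import re

# Hypothetical validator: the identifier length (21) and alphabet are
# inferred from the example IDs in this guide, not from a specification.
ID_PATTERN = re.compile(r"(cls|dc|org|orgai|pool|vol|inst)-[A-Za-z0-9_]{21}")

def is_valid_id(resource_id: str) -> bool:
    """Return True if the ID matches the prefix-plus-identifier shape."""
    return ID_PATTERN.fullmatch(resource_id) is not None

print(is_valid_id("cls-YtR8FqvL29sKjb7WxD3Zn"))  # True
print(is_valid_id("cls-short"))                  # False
```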

Editing a Cluster

To update a cluster’s name or description:

  1. In the left navigation panel, right-click on the cluster you want to modify.

Edit Cluster

  2. Select Edit from the context menu.

Edit Cluster

  3. Update the Name and/or Description fields as required.
  4. Click Confirm to apply the changes.

Cluster Updated

Joining Nodes in a Cluster

Clustering allows you to group multiple nodes into a single cluster, enabling resource sharing, redundancy, and centralized management within Pextra Cloud.

tip

After installation, each node initially operates as a standalone cluster with one node. You can join additional nodes to an existing cluster using a clustering key.

Creating and Joining a Cluster

Follow these steps to join nodes into an existing cluster:

  1. Log in directly to the node’s management interface.
    • You should see one cluster and one node listed.
  2. In the left navigation panel, click on Cluster.
  3. On the right-hand panel, click on the Cluster you want to extend.
  4. Obtain the Clustering Key for that cluster.
    Cluster Key
  5. Copy the clustering key.
  6. Log in to each new node using its IP address and port 5007 (for example, https://<node-ip>:5007).
  7. In the left navigation panel of each new node, click Cluster.
  8. Click Join Cluster.
  9. Paste the clustering key obtained earlier into the provided field and confirm the operation.
  10. Return to the main cluster view.
  11. You should now see the newly added node listed as a member of the cluster.

Once the process is complete, all joined nodes operate under a unified cluster, enabling centralized management of workloads, networking, and resources.
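The node login URL from step 6 can be built mechanically; a trivial sketch (the default port 5007 comes from this guide, the IP address is illustrative):

```python
# Builds the management interface URL used when joining a new node
# (port 5007 per the steps above; the IP address is illustrative).
def management_url(node_ip: str, port: int = 5007) -> str:
    return f"https://{node_ip}:{port}"

print(management_url("192.0.2.10"))  # https://192.0.2.10:5007
```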

Check Licensing Status

The licensing status of a cluster can be checked to ensure that the license keys for all nodes in the cluster are valid.

tip

License keys can be purchased from the Pextra Customer Portal. Support subscriptions are also available for purchase.

Web Interface

  1. Select the cluster in the resource tree and view the page on the right. A card with a quick overview of the licensing status will be displayed: Cluster Page
  2. For a detailed view, click on the Cluster tab in the right pane. The licensing status of each node in the cluster will be displayed: Cluster Page

Datacenter Management

The Datacenter Management section allows administrators to organize and manage clusters within logical groupings called datacenters. A datacenter provides a higher-level abstraction for grouping clusters—typically based on geography, purpose, or administrative domain.

The ID prefix for datacenters is dc-.¹

tip

A datacenter can contain multiple clusters. You can also specify a geographical location for each datacenter to help identify its physical or logical placement.

tip

Creating a new datacenter is currently supported only via API.

Notes


  1. Resources in Pextra CloudEnvironment® are identified by unique IDs. Datacenter IDs have the prefix dc-, followed by a unique identifier (e.g., dc-VrP6KxEw19aQb2hTnY4Ld).

Editing a Datacenter

You can update a datacenter’s name, description, or location directly from the UI:

  1. In the left navigation panel, right-click on the datacenter you want to edit.
  2. Select Edit from the context menu.

Edit Datacenter

  3. Update the Name, Description, or Location fields as needed.

Datacenter Updated

  4. Click Confirm to save the changes.

Datacenter Updated

tip

To change the geographical location, click directly on the desired point on the map.
The latitude and longitude values are computed automatically.
Avoid manually entering coordinates—simply select the location on the map and then click Confirm.

Organization Management

The Organization Management section allows administrators to manage and configure organizational tenants within Pextra CloudEnvironment®. Each organization is a fully isolated tenant with its own users, resources, and settings.

The ID prefix for organizations is org-.¹

tip

The root organization is created automatically during installation. It has full administrative control over all other organizations in the environment.

tip

Currently, creating a new organization is supported only via the API.

Notes


  1. Resources in Pextra CloudEnvironment® are identified by unique IDs. Organization IDs have the prefix org-, followed by a unique identifier (e.g., org-CIgLySkaAVeQ5kSLOodD3).

Editing an Organization

You can update an organization’s name or description from the UI:

  1. In the left navigation panel, right-click on the organization you want to edit.
  2. Select Edit from the context menu.

Edit Organization

Edit Organization

  3. Update the Name and/or Description fields as needed.

Edit Organization

  4. Click Confirm to save the changes.

Edit Organization

Identity and Access Management (IAM)

The Identity and Access Management (IAM) module allows administrators to manage users and roles within an organization.

Users are managed at the organization level. Administrators can:

  • Add users individually or import multiple users in bulk using a CSV file.
  • Manage access and permissions through Role-Based Access Control (RBAC).
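As a sketch of what a bulk-import file might contain, the columns below simply mirror the Create User dialog fields described later in this section; the actual CSV schema expected by the importer is not specified here and may differ:

```python
import csv
import io

# Illustrative only: column names mirror the Create User dialog fields
# (Username, Linux User, Description, Enable, Expiry Date); the importer's
# real schema may differ.
rows = [
    {"username": "alice", "linux_user": "", "description": "Ops engineer",
     "enabled": "true", "expiry_date": ""},
    {"username": "bob", "linux_user": "bob", "description": "Auditor",
     "enabled": "true", "expiry_date": "2026-01-01"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```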

Accessing IAM

  1. In the left navigation panel, click your organization name.
  2. In the right panel, select IAM to open the user management view.

Viewing Users

To view users within your organization:

  1. In the left navigation panel, click your organization name.
  2. Select IAM from the options.

The Users page displays all available users in the organization, along with their roles and status.

Users list in IAM view

Creating a User

Administrators can create new users directly from the IAM interface.

  1. Navigate to IAM under your organization view.

  2. Click + Create User at the top of the Users page.

  3. The Create User dialog will appear:

    Create User dialog in IAM

  4. The Organization ID field will be prefilled.

  5. Fill in the required user information:

    • Username
    • Linux User (Optional)
    • Description
    • Enable Toggle (Enable or Disable User; local or external)
    • Expiry Date (Optional; assign an expiration date for the user)
  6. Click Confirm to create the user. Create User dialog in IAM

  7. The new user will appear in the users list:

    User appears in IAM list

Deleting a User

To remove a user from the IAM interface:

  1. Locate the user in the Users list.
  2. Click the delete icon on the user’s row.
  3. The Delete User overlay will appear.
  4. Type the confirmation word in the input box.
  5. Click Confirm to delete the user.
    Delete User dialog in IAM

Auditing and Compliance

The Auditing and Compliance section allows administrators to monitor user activity and security events. It includes two main features: User Lockouts and Audit Logs.

Audit Trail

Audit Trail

To view audit logs:

  1. From the left navigation panel, click your Organization Name.
  2. On the right, click Audit.
  3. The Audit Logs section displays actions performed by users, providing a full audit trail.

Audit Trail

Audit User Lockouts

To view user lockouts:

  1. From the left navigation panel, click your Organization Name.
  2. On the right, click Audit.
  3. Scroll down to the User Lockouts section to see details of any locked-out users.

User Lockouts

AI Providers

AI providers are organization-wide connections to cloud or self-hosted AI services. These providers power various AI features, enabling users to use natural language to interact with the Pextra CloudEnvironment® web interface.

At least one AI provider must be configured and enabled for AI features to function. If no AI providers are configured, AI features will not be available in the web interface. For a list of supported AI providers, see the Supported AI Providers section.

The ID prefix for AI providers is orgai-.¹

Notes


  1. Resources in Pextra CloudEnvironment® are identified by unique IDs. AI providers will have the prefix orgai-, followed by a unique identifier (e.g., orgai-qthm_iLrflJ_DtSS1l4Gx).

Supported AI Providers

The following AI providers are supported in Pextra CloudEnvironment®:

| Name | ID | Cloud-Hosted |
|------|----|--------------|
| OpenAI | openai | ✅ |
| Anthropic | anthropic | ✅ |
| Google | google | ✅ |
| xAI | xai | ✅ |
| Mistral | mistral | ✅ |
| DeepInfra | deepinfra | ✅ |
| DeepSeek | deepseek | ✅ |
| Cerebras | cerebras | ✅ |
| Groq | groq | ✅ |
| Perplexity | perplexity | ✅ |
| Cohere | cohere | ✅ |
| Ollama | ollama | ❌ (self-hosted) |
| LM Studio | lmstudio | ❌ (self-hosted) |

OpenAI-Compatible Providers

For AI providers that are OpenAI-compatible but are not explicitly listed above, use the openai provider type and configure a custom base URL. For more information, refer to the Add AI Provider section.

List AI Providers

List AI providers to ensure that Pextra CloudEnvironment® AI features are properly configured and available for use.

Web Interface

  1. Select the organization in the resource tree and view the page on the right. Click on the AI Providers tab in the right pane. The AI providers will be listed: AI Providers Page

note

For security reasons, the API keys for AI providers are not displayed in the web interface. API keys cannot be retrieved once set. Store your API keys securely.

To edit properties of an AI provider, refer to the Edit AI Provider section.

Add AI Provider

Add an AI provider to your organization to enable AI features in the Pextra CloudEnvironment® web interface. At least one AI provider must be configured and enabled for AI features to function. If no AI providers are configured, the AI features will not be available in the web interface.

note

For security reasons, the API keys for AI providers are not displayed in the web interface. API keys cannot be retrieved once set. Store your API keys securely.

Web Interface

  1. Select the organization in the resource tree and view the page on the right. Click on the AI Providers tab in the right pane. AI Providers Page

  2. Click the Add AI Provider button. Add AI Provider button

  3. Choose the AI provider type from the dropdown list. A list of supported AI providers is available in the Supported AI Providers section. AI Provider type selection

  4. Enter the API key and custom base URL (if applicable) for the selected AI provider. Set API key and custom base URL

important

When using a self-hosted AI provider (such as ollama or lmstudio), a custom base URL must be specified. For cloud-hosted providers, the base URL is pre-configured and does not need to be changed.

  5. Enter a name for the AI provider, and an optional description. Disable the provider if you do not want it to be available for use immediately. AI Provider name and description

  6. Enter the name of the model to use with this provider. This model will be used for all AI features unless overridden in specific configurations. AI Provider model

  7. Click Create to add the AI provider to your organization. The new AI provider will be listed on the AI Providers page. Create AI Provider

Adding OpenAI-Compatible Providers

For AI providers that are OpenAI-compatible but are not explicitly listed above, use the openai provider type and configure a custom base URL. This allows you to connect to any service that implements the OpenAI API specification.

When configuring an OpenAI-compatible provider:

  1. Select openai as the provider type
  2. Set the custom base URL to point to your provider’s API endpoint
  3. Use the appropriate API key for your chosen provider

This approach works with many third-party AI services and self-hosted solutions that implement OpenAI-compatible APIs.
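As a concrete illustration of what "OpenAI-compatible" means, the sketch below builds a request against the standard chat-completions path under a custom base URL. The base URL and model name are illustrative (a local Ollama server commonly listens on port 11434), and the request is constructed but not sent:

```python
import json
import urllib.request

# Illustrative values: a self-hosted OpenAI-compatible endpoint and model.
BASE_URL = "http://localhost:11434/v1"
payload = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "ping"}],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <api key>"},
)
print(req.full_url)  # http://localhost:11434/v1/chat/completions
# urllib.request.urlopen(req) would send it; omitted here (needs a live server).
```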

Edit AI Provider

note

For security reasons, the API keys for AI providers are not displayed in the web interface. API keys cannot be retrieved once set. Store your API keys securely.

Web Interface

  1. Select the organization in the resource tree and view the page on the right. Click on the AI Providers tab in the right pane. The AI providers will be listed. AI Providers Page

  2. Click the pencil icon next to the AI provider you want to edit. Edit AI Provider button

  3. Update any fields as needed. Edit AI Provider form

note

The API key field will be empty for security reasons. If you need to change the API key, you must enter the new key in this field. The previous key will not be displayed.

  4. Click Edit to save your changes. The AI provider will be updated with the new configuration. Edit AI Provider confirm dialog

Delete AI Provider

warning

If you delete the last AI provider in your organization, AI features will no longer be available in the web interface. At least one AI provider must be configured and enabled for AI features to function.

Web Interface

  1. Select the organization in the resource tree and view the page on the right. Click on the AI Providers tab in the right pane. The AI providers will be listed: AI Providers Page

  2. Click the trash can icon next to the AI provider you want to delete: Delete AI Provider button

  3. A confirmation dialog will appear. Type in “DESTROY” and click Confirm to confirm the deletion of the AI provider: Delete AI Provider confirmation dialog

Network Management

Networking in Pextra CloudEnvironment® is configured at the node (server) level. Each node has physical NICs that can be virtualized to create virtual networks for your instances.

To set up networking:

  1. Click on the node in the left navigation panel.

  2. On the right, select the Network tab.

    Node Networking Tab

From this tab, you can manage:

  • vSwitches – Create and configure virtual switches to connect virtual machines.
  • Port Groups – Define network segments and VLANs for instance traffic.
  • NICs – Assign physical NICs to virtual switches or manage virtual NICs.

tip

Plan your network layout carefully, including VLANs and port groups, before assigning NICs to virtual switches. This ensures proper isolation and connectivity between instances.

Creating a vSwitch

A vSwitch (virtual switch) connects virtual machines and network interfaces within a node. Each vSwitch acts as a bridge between virtual and physical networks.

To create a new vSwitch:

  1. Click on the node in the left navigation panel.

  2. Select the Network tab on the right.

  3. Click on vSwitches.

  4. Click Create vSwitch.

    Create vSwitch General

  5. Enter a name for the vSwitch. The name must follow the required format: svswitchX, where svswitch is the prefix and X is a positive integer.
     Examples: svswitch10, svswitch21.

  6. Complete the configuration and click Create.

  7. After creation, the new vSwitch will appear in the vSwitch list.

vSwitch Created

tip

Use a consistent naming convention (e.g., svswitch10, svswitch20, etc.) to easily identify network mappings across multiple nodes.

warning

Ensure the vSwitch name is unique cluster-wide to prevent conflicts.
The vSwitch will not be created if a name conflict is detected within the cluster.
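The naming rule above can be checked mechanically. This is a sketch: whether leading zeros (e.g. svswitch01) are accepted is not specified in this guide, so the pattern below rejects them:

```python
import re

# "svswitch" prefix followed by a positive integer, per the naming rule above.
# Rejecting leading zeros is an assumption, not a documented rule.
VSWITCH_NAME = re.compile(r"svswitch[1-9][0-9]*")

def is_valid_vswitch_name(name: str) -> bool:
    return VSWITCH_NAME.fullmatch(name) is not None

print(is_valid_vswitch_name("svswitch10"))  # True
print(is_valid_vswitch_name("vswitch1"))    # False
```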

Adding Uplinks to a vSwitch

You can add one or more uplinks to a vSwitch to connect it to physical network interfaces on a node. Adding multiple uplinks improves redundancy and fault tolerance, ensuring higher network availability.

  1. Select the node in the left navigation panel.

  2. Click the Network tab on the right.

  3. Click on vSwitches.

  4. Locate the vSwitch you want to modify and click the Edit icon.

    vSwitch Overview

  5. From the Uplinks drop-down menu, select one or more network interfaces to attach as uplinks.
    You can select multiple interfaces to enable redundancy.

    vSwitch Uplinks Selection

tip

The image preview may not immediately reflect uplink changes.

vSwitch Uplinks View

  6. Click Confirm to apply the changes.

  7. The Uplink Count column for the vSwitch updates to reflect the new configuration.

    vSwitch Uplink Count

  8. To verify or visualize the connected uplinks, click the Edit icon again.
     The connected uplinks are listed in the vSwitch details view.

    vSwitch Links

tip

Adding multiple uplinks provides network redundancy.
If one uplink fails, the remaining uplinks continue handling network traffic seamlessly.

Creating and Managing Port Groups

Port groups define how virtual machines connect to a vSwitch and access the physical network.
Each port group is associated with a specific vSwitch and can include custom network configurations.

Creating a Port Group

  1. Select the node in the left navigation panel.

  2. Click the Network tab on the right.

  3. Open the Ports tab to view existing port groups.

    Port Groups Overview

  4. Click the Create Port Group button at the top of the page.

    Create Port Group

  5. In the creation form:

    • Select the vSwitch to associate the new port group with.
    • Fill in the remaining configuration fields as needed (for example: VLAN ID, description, or security settings).
    • Click Confirm to save.

    Port Group Created

tip

When managing multiple networks, naming port groups clearly (e.g., Production, Backup, DMZ) makes maintenance and troubleshooting much easier.
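When filling in a VLAN ID, note that IEEE 802.1Q reserves the values 0 and 4095, leaving 1-4094 usable. The form's exact validation rules are not documented here, but a sanity check looks like:

```python
def is_valid_vlan_id(vlan_id: int) -> bool:
    """802.1Q usable VLAN ID range; 0 and 4095 are reserved by the standard."""
    return 1 <= vlan_id <= 4094

print(is_valid_vlan_id(100))   # True
print(is_valid_vlan_id(4095))  # False
```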

Editing a Port Group

To modify an existing port group:

  1. Locate the desired port group in the list.
  2. Click the Edit icon in the same row.

Port Group Created

  3. Adjust the configuration fields as needed.
  4. Click Confirm to apply the changes.

Viewing Available NICs on a Node

To view the network interface cards (NICs) available on a node:

  1. In the left navigation panel, click on the node you want to inspect.

  2. Select the Network tab on the node’s details page.

  3. Click on NICs.

  4. The page will display all available NICs for the selected node.

    View NICs

Storage Management

This section provides a guide to managing storage within your deployment. The Pextra CloudEnvironment® storage engine supports a variety of storage technologies, from local disks to distributed storage systems.

Storage Pools

Storage pools are software-defined storage resources. Storage pools are configured on clusters and propagated to nodes. One configuration can be used across multiple nodes, allowing for flexible storage management.

The ID prefix for storage pools is pool-.¹

Storage Pool Types

The following storage pool configurations are supported in Pextra CloudEnvironment®:

| Pool Type | Volume Backing² | Networked | Description | Notes |
|-----------|-----------------|-----------|-------------|-------|
| Directory (directory) | File | ❌ | Uses a directory on the node. | The default local storage pool is a directory pool. |
| NetFS (netfs) | File | ✅ | Mounts a network filesystem (NFS or CIFS/SMB) on the node as a storage pool. Similar to a directory pool, but allows for networked storage. | Target path must not conflict with other netfs or directory pools. |
| iSCSI (iscsi) | Block | ✅ | Uses an iSCSI target. | N/A |
| Ceph RBD (rbd) | Block | ✅ | Uses a Ceph RBD pool. | N/A |
| ZFS (zfs) | Block | ❌ | Uses a ZFS pool. | A ZFS pool with the same name as the storage pool must exist on each enabled node. |
| LVM (lvm) | Block | ❌ | Uses an LVM volume group. | An LVM volume group with the same name as the storage pool must exist on each enabled node. |

Notes


  1. Resources in Pextra CloudEnvironment® are identified by unique IDs. Storage pool IDs will have the prefix pool-, followed by a unique identifier (e.g., pool-qthm_iLrflJ_DtSS1l4Gx).

  2. A storage pool can back volumes using either file-based or block-based storage. File-based storage pools use files to store data, while block-based storage pools use raw data blocks. Block-based storage pools typically provide better performance, while file-based storage pools are easier to manage.

List Storage Pools

Storage pools can be listed to view the current storage configuration in the cluster. This includes details about the storage pools, their status, and the nodes they are associated with.

Web Interface

  1. Select the cluster in the resource tree and view the page on the right. Click on the Storage tab in the right pane. The storage pools will be listed: Storage Page

To edit associated nodes of a storage pool, refer to the Edit Storage Pool section.

Storage Pool Status

Each storage pool has a status indicator that provides information about its availability and configuration across the nodes in the cluster. The status can be one of the following:

Storage Pool Grey Dash
The storage pool has not been enabled on any nodes.

Storage Pool Green Checkmark
The storage pool is available on all enabled nodes.

Storage Pool Red X
An error has occurred while propagating the storage pool configuration to enabled nodes. Manual intervention may be required to resolve the issue.

Create Storage Pool

Web Interface

  1. Select the cluster in the resource tree and view the page on the right. Click on the Storage tab in the right pane. Storage Page

  2. Click the Create Pool button. Create Pool Button

  3. Choose the storage pool type, enter a name, and enter the required configuration metadata. Create Pool Form

  4. Click Create to create the storage pool. Initially, the new storage pool will not be enabled on any nodes. To enable the new storage pool on nodes, refer to the Edit Storage Pool section.

Edit Storage Pool

Currently, only node associations with storage pools can be modified. The name of a storage pool cannot be changed after creation.

Web Interface

  1. Select the cluster in the resource tree and view the page on the right. Click on the Storage tab in the right pane. Storage Page

  2. Click on the pencil icon in the card of the storage pool you want to edit. Edit Pool Button

  3. In the edit form, you can select the nodes on which this storage pool should be enabled. The nodes that are already associated with this storage pool will be selected by default. Edit Pool Form

  4. Click Save to apply the changes. The storage pool will be enabled on the selected nodes, and the changes will be propagated according to the storage pool propagation algorithm. This may take some time.

Destroy Storage Pool

note

Storage pools cannot be destroyed if there are volumes on enabled nodes. All volumes must be destroyed, or all enabled nodes with volumes must have their associations removed. This limitation will be addressed in the future.

Web Interface

  1. Select the cluster in the resource tree and view the page on the right. Click on the Storage tab in the right pane. Storage Page

  2. Click on the X icon in the card of the storage pool you want to destroy. Destroy Pool Button

  3. A confirmation dialog will appear. Type in “DESTROY” and click Confirm to confirm the destruction of the storage pool. Destroy Pool Confirmation

  4. The storage pool will be marked for destruction, and will be cleaned up according to the storage pool propagation algorithm. This may take some time. During this time, the storage pool’s name will be unavailable for reuse.

Storage Pool Propagation

Storage pools are propagated across the cluster at regular intervals by a system job, ensuring that all nodes have the latest configuration and state. This propagation process is crucial for maintaining consistency and availability of storage resources.

note

Creating a storage pool with the same name as a storage pool that is marked for deletion is not allowed. If you need to reuse the name, you must wait for the storage pool to be fully cleaned up.

Propagation Algorithm

The propagation algorithm is illustrated in the following diagram:

View source

Storage Volumes

Storage volumes (or just “volumes”) are virtual storage devices that can be attached to instances. Volumes are allocated from storage pools, stored on individual nodes, and are used to store instance disks, snapshots, and other data.

Volumes can be attached to instances to provide additional storage capacity. Volumes can be resized, detached, and destroyed as needed.

The ID prefix for volumes is vol-.¹

Notes


  1. Resources in Pextra CloudEnvironment® are identified by unique IDs. Storage volume IDs will have the prefix vol-, followed by a unique identifier (e.g., vol-qthm_iLrflJ_DtSS1l4Gx).

List Volumes

Web Interface

  1. Select the node in the resource tree and view the page on the right. Click on the Storage tab in the right pane. Storage Page

  2. Click on the Volumes tab. The list displays all volumes associated with the node.

  3. To filter by storage pool, use the Storage Pool dropdown at the top of the list. Filter by Storage Pool

Create Volume

Web Interface

Currently, volumes can only be created when creating a new instance, or through the Volumes API. This will change in the future.

Resize Volume

warning

Resizing a volume while it is in use may lead to data corruption or loss. Proceed with caution.

note

After resizing, you may need to resize the filesystem on the volume to utilize the new size, which must be done inside the instance.

Web Interface

Currently, volumes can only be resized through the Resources tab in the instance details page. This will change in the future.

  1. Select the instance in the resource tree and view the page on the right. Click on the Resources tab in the right pane. Instance Resources Page

  2. Click on the resize icon next to the volume you want to resize. Resize Volume Button

  3. In the resize form, enter the delta size in GiB. This value will be added to the current size of the volume. Resize Volume Form

  4. If the instance is running, the Live Resize option will be checked. This allows the volume to be resized without stopping the instance. Live Resize Option

  5. Click Resize Volume to apply the changes. The volume will be resized according to the specified delta size.

tip

To perform a cold resize, you can stop the instance first, then follow the same steps as above without selecting the Live Resize option. This will ensure that the volume is resized safely without any risk of data corruption.
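Because the form takes a delta rather than a target size, the resulting size is simply current plus delta. A small sketch of that arithmetic (the positive-delta check reflects that the form only grows volumes; shrinking is not described in this guide):

```python
GIB = 1024 ** 3  # bytes per GiB

def resized_bytes(current_gib: int, delta_gib: int) -> int:
    """New volume size in bytes after adding the delta entered in the form."""
    if delta_gib <= 0:
        raise ValueError("delta must be a positive number of GiB")
    return (current_gib + delta_gib) * GIB

print(resized_bytes(20, 10))  # 32212254720 (i.e. 30 GiB)
```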

Attaching a Volume to an Instance

You can attach a storage volume to an existing instance to expand its storage or provide additional data access.

Steps to Attach a Volume

  1. In the left navigation panel, click on the instance you want to attach the volume to.

    Storage Page

  2. On the right-hand panel, click the Resources tab.

  3. Click Attach Device.

  4. Fill in the required information:

    • Device Name: Enter a name for the device (3–32 characters, no spaces).
    • Device Type: Select Storage Volume.
    • Volume: Choose the volume you want to attach (only unattached volumes are available).
    • Device Path: Enter the path for the device (e.g., /dev/sdc).
    • Bus Type: Select a bus type (e.g., virtio).
  5. Click Confirm.

Once confirmed, the volume will be attached to the instance and available for use.

Storage Page
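The Device Name constraint above (3-32 characters, no spaces) is easy to check up front. A sketch; whether other whitespace or special characters are also rejected is not specified, so this only enforces the stated rule:

```python
import re

# 3-32 characters, no spaces. \S also rejects tabs/newlines, which is
# assumed acceptable here.
def is_valid_device_name(name: str) -> bool:
    return re.fullmatch(r"\S{3,32}", name) is not None

print(is_valid_device_name("data-disk-1"))  # True
print(is_valid_device_name("my disk"))      # False
```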

Detaching a Volume from an Instance

Steps to Detach a Volume

  1. In the left navigation panel, click on the instance that has multiple volumes attached.
  2. On the right-hand panel, click the Resources tab.
  3. In the Resources section, scroll to the list of attached volumes.
    Attached Volumes
  4. To detach a volume, click the Delete button in the row of the desired volume.
  5. A confirmation popup will appear. Enter the required confirmation string as prompted and click Confirm.
    Detach Volume Confirmation

Once confirmed, the selected volume will be detached from the instance and will become available to attach to other instances if needed.

tip

You cannot detach the root volume of an instance.

tip

Detaching a volume does not delete it; it is simply removed from the instance. To view all volumes—both attached and detached—click on the node in the left navigation panel, then go to the Storage tab and select Volumes.

Destroy Volume

note

A volume cannot be destroyed if it is attached to an instance. You must first detach the volume from the instance before destroying it.

Web Interface

  1. Select the node in the resource tree and view the page on the right. Click on the Storage tab in the right pane. Storage Page

  2. Click on the Volumes tab to view the list of volumes associated with the node.

  3. Click on the X icon next to the volume you want to destroy. Destroy Volume Button

  4. In the confirmation dialog, type “DESTROY” and click Confirm to confirm the destruction of the volume. Destroy Volume Confirmation

  5. The volume will be destroyed, and it will no longer be available in the list of volumes.

note

If the volume was attached to an instance, you may need to restart the instance to ensure it no longer references the destroyed volume.

Instance Management

The Instance Management section allows administrators to manage virtual instances in the platform. It includes deployment, control, and lifecycle actions.

The ID prefix for instances is inst-.¹

tip

To deploy a new instance (VM or container), right-click on the target node. To manage an existing instance, right-click on the instance itself.

tip

An instance must be in the Stopped state before it can be destroyed.

Notes


  1. Resources in Pextra CloudEnvironment® are identified by unique IDs. Instance IDs will have the prefix inst-, followed by a unique identifier (e.g., inst-RqP1zhgvLT3qasUOvI5DW).

Instance Deploy

To deploy a new instance (VM or container):

  1. Right-click on the target node and select Deploy.

Configure Instance Overlay

  2. The Deploy Instance overlay will appear for configuration.

    Configure Instance Overlay

  3. Fill in the required general instance information and click Next for customizations.

    Fill Instance Information

  4. Add a volume (disk space) as needed and click Next.

    Add Volume

  5. Configure networking for connectivity if required and click Next.

    Networking Configuration

  6. Review the configuration and click Finish.

    Review Configuration

  7. The new instance will appear under the selected node.

    New Instance in Node

  8. Click on the instance to view its properties:

    Instance Properties and Console

    • To access the console, select the Console tab.

      Instance Console

    • To view and adjust instance resources, click on the Resources tab. Instance Resources

    • To view instance snapshots and create backups, click on the Backups tab.
Instance Backups

Instance State Management

Managing the state of your instances is a key part of operating in Pextra CloudEnvironment®. This section covers how to start, stop, force stop, and restart an instance.

tip

All instance state actions are performed by right-clicking the target instance, selecting Power, and choosing the desired action.

Starting a Stopped Instance

To bring a stopped instance back online:

  1. Right-click the stopped instance.

  2. Select Power → Start.

    Instance State Start

  3. A confirmation dialog will appear. Click Confirm.

    Instance State Confirm Start

  4. Wait for the instance to reach the Running state.

tip

Use the Console tab to monitor boot progress and ensure all services start correctly.

Stopping an Instance

To gracefully stop a running instance:

  1. Right-click the instance you want to stop.

  2. Select Power → Stop.

    Instance State Stop

  3. A confirmation dialog will appear. Click Confirm.

    Instance State Confirm Stop

  4. Wait for the instance to reach the Stopped state.

tip

Stopping an instance shuts it down safely, allowing all processes to terminate properly. This may take a few moments depending on the workload.

Force Stopping an Instance

If an instance does not respond to a normal stop:

  1. Right-click the unresponsive instance.

  2. Select Power → Force Stop.

    Instance State Force Stop

  3. A confirmation dialog will appear. Click Confirm.

    Instance State Confirm Force Stop

  4. Wait for the instance to reach the Stopped state.

warning

Force stopping immediately terminates all processes. Unsaved data may be lost. Use only when a normal stop fails.

Restarting an Instance

To reboot an instance:

  1. Right-click the instance you wish to restart.

  2. Select Power → Restart.

    Instance State Restart

  3. A confirmation dialog will appear. Click Confirm.

    Instance State Restart

  4. Wait for the instance to reach the Running state.

  5. The instance will stop and start automatically. Use this option for applying configuration changes or recovering from transient issues.

tip

Use the Console tab to monitor the restart progress and ensure all services start correctly.

Destroying an Instance

To permanently destroy an instance, it must first be in the Stopped state:

  1. Ensure the instance is stopped.

  2. Right-click the stopped instance.

  3. Select Destroy.

    Instance State Destroy

  4. A confirmation dialog will appear. Optionally, select the checkbox to destroy all attached volumes (storage) as well. Then type the requested text in the popup and click Confirm.

    Instance State Confirm Destroy

tip

An instance must be in the Stopped state before it can be destroyed.

warning

Destroying an instance is permanent and cannot be undone. All data on the instance will be lost.

Monitoring and Metrics

Pextra CloudEnvironment® provides a comprehensive Monitoring and Metrics dashboard, allowing administrators to observe and manage the health, performance, and resource usage of clusters, nodes, and instances.

Viewing Available Monitoring Metrics

To access monitoring metrics:

  1. Select the cluster, node, or instance in the left navigation panel.
  2. On the right-hand panel, the Monitoring section displays key metrics, organized under three main categories: Cluster, Node, and Instance.

tip

To view monitoring metrics, click on the specific resource—whether a cluster, node, or instance—in the left navigation panel. This will open the monitoring dashboard for that resource on the right-hand panel.

Cluster-Level Monitoring

Cluster-level metrics provide a comprehensive overview of the entire cluster’s health, performance, and resource usage.

tip

To view cluster metrics, click on the cluster in the left navigation panel. The metrics for the selected cluster will appear on the right-hand panel.

The following metrics are available at the cluster level:

  • Cluster Health: Displays the overall health status of the cluster.
    Cluster Health
  • Number of Nodes: Shows the total number of nodes in the cluster.
    Number of Nodes
  • License Status: Displays current license status, including expiration and any issues.
    License Status
  • Deployment Overview: Provides overall cluster resource usage, including CPU, memory, storage, total vCPUs, total memory, deployment duration, total storage, and cluster mode.
    Deployment Overview

Node-Level Monitoring

Node-level metrics provide detailed insights to help identify performance or health issues on individual nodes.

tip

To view node health metrics, click on the node in the left navigation panel. The metrics for that node will appear on the right-hand panel.

Node Health Overview

The following metrics are available at the node level:

  • Node Health: Displays the overall status of the node.
    Node Health
  • CPU Usage: Shows real-time CPU utilization for the node.
    Node CPU Usage
  • Memory Usage: Monitors memory consumption per node.
    Node Memory Usage
  • Storage Usage: Displays storage allocation and usage for the node.
    Node Storage Usage

Instance-Level Monitoring

Instance-level metrics provide detailed insights into each deployed virtual machine or container.

tip

To view instance metrics, click on the instance in the left navigation panel. The available metrics for that instance will appear on the right-hand panel.

The available instance metrics include:

  • Name: The name of the instance.
  • Status: Connection status, shown as Connected (green) or Disconnected (red).
  • Memory Allocated: Amount of memory assigned to the instance.
  • Instance Type: Type of instance, such as QEMU or LXC.
  • Deployment Duration: Length of time the instance has been deployed.

Instance Overview

Frequently Asked Questions

Welcome to the Pextra CloudEnvironment® FAQ! Here you can find answers to common questions about installing, configuring, and managing your nodes and instances.

If you have a question that is not already answered here, please send it to faq@pextra.cloud. Our team will review submissions and add relevant questions to the FAQ.

Why did my node go down?

There are several potential causes for a node in your Pextra CloudEnvironment® cluster to suddenly go down. Manual intervention may be required to diagnose and resolve the issue. In most cases, the node will self-recover after the underlying problem is fixed.

note

It may not be possible to access the node remotely if it is down due to hardware failure. Physical access or out-of-band management tools (like IPMI, iLO, or DRAC) may be required.

Resource Exhaustion

Running out of critical resources like CPU, memory, or disk space can cause a node to crash or freeze. You may need to consider upgrading hardware resources if resource exhaustion is a recurring issue.

Disk Space

warning

Do not, under any circumstances, delete system files or directories unless you are absolutely sure of their purpose. This includes directories like /bin, /sbin, /usr, /etc, and /var. Deleting them will break your installation and may lead to data loss.

  1. Check disk space usage on all mounted partitions:

    df -h
    

If any partitions are at or near 100% usage, take immediate action to free up space.

  2. Find large files consuming disk space:

    du -ah / | sort -rh | head -n 20
    
  3. Clear cache and temporary files:

    apt clean
    
  4. Remove or archive old log files to free up space (the example below deletes logs not modified in the last 30 days; adjust it to your retention policy):

    find /var/log -type f -name "*.log" -mtime +30 -exec rm -f {} \;
    
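As a quick triage step, the following one-liner (a sketch using GNU coreutils `df --output`) prints only the filesystems above a 90% usage threshold:

```shell
# Print mount points above 90% usage (the threshold is illustrative; adjust as needed)
df --output=pcent,target | awk 'NR > 1 && int($1) > 90'
```

An empty result means no filesystem has crossed the threshold.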

Memory

  1. Check memory usage:
    free -h
    

If memory usage is critically high, consider adding more RAM or stopping unnecessary instances.
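To identify what is consuming memory, a quick sketch using the standard procps tools lists the largest consumers:

```shell
# List the five processes using the most memory, highest first
# (header line plus five entries)
ps aux --sort=-%mem | head -n 6
```

Stopping or reconfiguring the processes at the top of this list is usually the fastest way to relieve memory pressure.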

Hardware Failures

Hardware issues like disk failures, memory errors, or power supply problems can cause a node to fail and become unresponsive. If any component is identified as failing, schedule a replacement immediately to minimize downtime.

  1. Check for hardware errors:

    dmesg | grep -i error
    tail -n 100 /var/log/syslog
    
  2. Memory (RAM) test using Memtest86+ can identify RAM issues. You may run it from a bootable USB.

  3. Inspect disk health:

    smartctl -a /dev/sdX	# Replace /dev/sdX with your disk identifier
    
  4. Review power supply and cooling systems to ensure they are functioning correctly.

Network Issues

Network connectivity problems can result from interface failures, network partitions, or misconfigurations. Verify network status and connectivity to other nodes in the cluster.

  1. Check network interface status:
    ip a
    
  2. Test connectivity to other nodes:
    ping <other-node-ip>
    
  3. Review network configuration and logs for errors.
    grep -i network /var/log/syslog
    
  4. Restart network services if necessary:
    systemctl restart systemd-networkd
    

Software Bugs or Misconfigurations

Although unlikely, bugs in the operating system or misconfigurations can lead to instability and crashes. To update your system, refer to the System Upgrade section.

How are disks formatted during installation?

The Pextra CloudEnvironment® installer automatically formats disks during the installation process to prepare them for use by the system. The selected target disk is partitioned in the following manner:

  1. EFI System Partition (EF00):

    • Size: 1 GiB
    • Purpose: Contains the UEFI bootloader and related files necessary for system boot.
  2. Root Partition (8300):

    • Size: Remaining disk space
    • Purpose: Contains the root filesystem (/) where the operating system and Pextra CloudEnvironment® components are installed.
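For reference, this layout can be inspected (or reproduced manually) with sgdisk. This is an illustrative sketch only: /dev/sdX is a placeholder for the installer's target disk, and the type codes EF00/8300 are the GPT codes listed above.

```
# Print the partition table, including type codes (expect EF00 and 8300)
sgdisk -p /dev/sdX

# Illustrative equivalent of the installer's layout (DESTRUCTIVE; placeholder disk)
sgdisk --zap-all /dev/sdX                               # wipe the existing partition table
sgdisk -n 1:0:+1G -t 1:EF00 -c 1:"EFI System" /dev/sdX  # 1 GiB EFI System Partition
sgdisk -n 2:0:0   -t 2:8300 -c 2:"Linux root" /dev/sdX  # root partition, remaining space
```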

Why is Pextra CloudEnvironment® distributed as an ISO image?

To ensure a consistent and reliable installation experience across diverse hardware environments, Pextra CloudEnvironment® is distributed as an ISO image instead of traditional package formats.

If distributed as system packages, users would need to manually install a compatible operating system, configure storage, networking, and dependencies, which could lead to inconsistencies and potential misconfigurations. By using an ISO image, we provide a pre-configured environment that includes all necessary components, ensuring that installations are uniform and optimized for Pextra CloudEnvironment®.

How do I change the IP address of a node?

Currently, changing the IP address of a node after installation is not supported without internal reconfiguration. In the Community Edition in particular, we do not recommend attempting this unless you are an advanced user.

We plan to address this limitation in a future release and simplify the process for all users.

Reporting Issues

This section provides guidance on how to effectively report issues, bugs, and feature requests related to the software. It includes instructions on collecting logs, creating support tickets, and gathering diagnostic information to assist in troubleshooting and resolution.

Collecting Logs

Seeking Help

This section provides guidance on how to seek help through the appropriate channels, including the official helpdesk and community forum. It also includes tips for effective support tickets to ensure a smooth resolution process.

Official Helpdesk

important

An active support subscription is required to create a support ticket through our official helpdesk. If you would like to purchase a support subscription, please visit our customer portal.

note

If you do not have an active support subscription, you can still seek help through our community forum. See the Community Forum section for more details.

If you have an active support subscription, you can create a support ticket through our official helpdesk:

  1. Navigate to our helpdesk.

  2. Log in with your credentials.

  3. Click on “Create Ticket”.

  4. Complete the ticket submission form with the following information:

    • A clear, descriptive title
    • Detailed description of the issue
    • Steps to reproduce the issue
    • Screenshots or error messages (if applicable)
    • System information and logs (see Collecting Logs)
    • Any troubleshooting steps you’ve already attempted
  5. Select the appropriate priority level based on the impact to your operations.

  6. Submit the ticket.

Our support team will respond according to the Service Level Agreement (SLA) associated with your support subscription level.

Support Ticket Lifecycle

Each support ticket goes through a lifecycle, which includes the following stages:

  1. Submission: You create and submit a ticket.

  2. Acknowledgment: Support team acknowledges receipt.

  3. Investigation: Support team investigates the issue.

  4. Resolution or Escalation: Issue is either resolved or escalated to engineering.

  5. Verification: You verify the solution works.

  6. Closure: Ticket is closed once you confirm the issue is resolved.

tip

Check your email regularly for updates on your support ticket. The support team may request additional information or provide solutions that require your input.

Community Forum

tip

Pextra has a dedicated support team available to assist you with any issues you may encounter. If you have an active support subscription, we recommend using the official helpdesk for the fastest response times.

If you do not have an active support subscription or prefer community-based assistance, you can post your issue on our community forum:

  1. Visit our community forum.

  2. Create an account or log in if you already have one.

  3. Navigate to the appropriate section (e.g., “Installation Issues,” “Configuration Help,” etc.).

  4. Click on “New Topic” to create a new post.

  5. Provide a clear title and detailed description of your issue.

  6. Include relevant system information, logs, and any troubleshooting steps you’ve already taken.

  7. Submit your post.

Our community members and Pextra staff monitor the forums regularly and will respond as soon as possible. While this option does not have a formal SLA, the community is active and helpful.

Tips for Support Tickets

To ensure that your support ticket is effective and leads to a quick resolution, follow these tips:

  1. Be specific: Provide precise details about what you were doing when the issue occurred.

  2. Include context: Mention your environment details, such as hardware specifications, current version, and any recent changes.

  3. Attach logs: Always include relevant logs.

  4. Document steps to reproduce: List the exact steps someone would need to follow to encounter the same issue.

  5. Describe expected vs. actual behavior: Explain what you expected to happen and what actually happened.

  6. Add screenshots: Visual evidence can help the support team understand the issue more quickly.

Repository Mirrors for Airgapped Environments

In standard deployments, servers connect to the Pextra repository to download updates. However, airgapped environments lack direct internet access and require special consideration for package management. This tutorial provides a guide to managing an offline repository mirror in airgapped environments using aptly. This approach ensures Pextra CloudEnvironment® systems remain updatable and secure even in the most restrictive network environments.

Before You Begin

Hardware Requirements:

  • Mirror server (online system with internet access) with sufficient storage space
    • The Pextra repository is approximately 100 MiB per architecture (amd64 and arm64)
  • USB drive or removable media (for full airgap transfers only)
  • Network connectivity between mirror and offline servers (for restricted airgap only)

Software Requirements:

  • Debian-based system with administrative privileges
  • curl, tar, and standard Unix utilities
  • Administrative (sudo) access

Estimated Setup Time:

  • 30 minutes for restricted airgap
  • 1 hour for full airgap

Understanding Airgap Types

To set up Pextra CloudEnvironment® servers in an airgapped environment, it is essential to understand the two different types of airgaps:

Restricted/One-way Airgap

The offline server cannot directly access public internet but can communicate with an outside server through a controlled endpoint. This allows for automated synchronization while maintaining security boundaries.

Full Airgap

Complete network isolation with no connectivity to external servers. Package updates require manual media transfer (e.g. with USB drives, portable storage).

note

A full airgap is the most secure option, but it requires a considerable amount of manual work to keep the offline servers updated. A restricted airgap allows for more automation and is recommended if possible.

Setup Instructions

  1. Set up the mirror server (see Mirror Setup below).

  2. Follow the relevant setup instructions based on your airgap type (see Restricted Airgap Setup or Full Airgap Setup).

Mirror Setup

This guide will help you set up a local mirror of the Pextra repository using aptly. This is the first step in creating an airgapped setup for Pextra CloudEnvironment®.

Install aptly

Run the following command on your online mirror server:

apt install aptly

You may need to run apt update to ensure the package list is up to date before installing.

Import Pextra GPG Key

Download and import the Pextra repository GPG key:

# Download the GPG key for the repository (signed by the master key)
curl -fSsLo /usr/share/keyrings/pextra-ce.gpg https://repo.pextra.cloud/debian/cloudenvironment/key.gpg

# Import the GPG key into trustedkeys.gpg (to be used by aptly)
gpg --no-default-keyring --keyring trustedkeys.gpg --import /usr/share/keyrings/pextra-ce.gpg

Configure the Mirror

To mirror only one architecture (recommended), use the following command:

aptly -architectures="<architecture>" mirror create pextra-ce-bookworm https://repo.pextra.cloud/debian/cloudenvironment bookworm common meta

where <architecture> can be amd64 or arm64, depending on your server’s architecture.

To mirror all architectures, omit the -architectures option:

aptly mirror create pextra-ce-bookworm https://repo.pextra.cloud/debian/cloudenvironment bookworm common meta

Run Initial Sync

At this point, you have created a mirror configuration but it is still empty. To perform the initial synchronization of the mirror, run:

aptly mirror update pextra-ce-bookworm

This command may take some time, depending on your internet connection. It will download all packages and metadata from the Pextra repository.

To verify the synchronization (after the update command completes), you can check the status of the mirror:

aptly mirror show -with-packages pextra-ce-bookworm

Sample output:

Name: pextra-ce-bookworm
Archive Root URL: https://repo.pextra.cloud/debian/cloudenvironment/
Distribution: bookworm
Components: common, meta
Architectures: amd64, arm64
Download Sources: no
Download .udebs: no
Last update: 2025-08-12 21:28:38 UTC
Number of packages: 17

Information from release file:
Architectures: amd64 arm64
Codename: bookworm
Components: common meta
Date: Tue, 12 Aug 2025 19:15:03 UTC
Description:  Pextra Inc. Debian repository

Label: Pextra Inc.
Origin: Pextra Inc.
Suite: stable
Version: 1.0
Packages:
  <list of packages>...

Re-running the update command periodically will ensure that your mirror stays synchronized with the Pextra repository.

Prepare the Mirror for Publishing

To make the mirrored repository available for use, you need to take a snapshot. Taking a snapshot allows you to create a versioned point-in-time copy of the mirror, which can be useful for rollback or auditing purposes:

# Create a snapshot of the mirror (e.g. pextra-ce-bookworm-20250812)
aptly snapshot create pextra-ce-bookworm-$(date +%Y%m%d) from mirror pextra-ce-bookworm

Before publishing, a GPG key must be generated to sign the mirror, if you haven't done so already. Refer to the GnuPG documentation for instructions on generating a GPG key (for example, with gpg --full-generate-key).

warning

Make sure to keep your GPG key secure, as it will be used to cryptographically sign the repository metadata. If you lose access to your GPG key, you will need to create a new mirror and reconfigure your offline servers.

To retrieve the fingerprint of your GPG key, run:

gpg --list-secret-keys --keyid-format LONG

This will display your GPG keys, including their fingerprints. Copy the fingerprint (e.g. F6C824A95B510F49ED4B0D640B4F9057C7DBDC41) for use in the next step.

Publish the Mirror

To publish the mirror, you can use the following command:

aptly publish snapshot -gpg-key=<your-gpg-key-fingerprint> pextra-ce-bookworm-$(date +%Y%m%d)

Sample output:

Loading packages...
Generating metadata files and linking package files...
Finalizing metadata files...
Signing file 'Release' with gpg, please enter your passphrase when prompted:
Clearsigning file 'Release' with gpg, please enter your passphrase when prompted:

Snapshot pextra-ce-bookworm-20250812 has been successfully published.
Please setup your webserver to serve directory '/home/user/.aptly/public' with autoindexing.
Now you can add following line to apt sources:
  deb http://your-server/ bookworm main
Don't forget to add your GPG key to apt with apt-key.

You can also use `aptly serve` to publish your repositories over HTTP quickly.

Mirror Maintenance

For additional documentation on how to manage your repository mirror, including updating and publishing snapshots, refer to the Aptly documentation.

Updating the Mirror

To keep your mirror up to date, you can set up a cron job to run an update script at a regular interval (e.g. daily):

cat << 'EOF' > /usr/local/bin/update-pextra-mirror.sh
#!/bin/bash
set -e
aptly mirror update pextra-ce-bookworm
aptly snapshot create pextra-ce-bookworm-$(date +%Y%m%d) from mirror pextra-ce-bookworm
# Switch the already-published "bookworm" distribution to the new snapshot
# ("aptly publish snapshot" would fail once the distribution is published)
aptly publish switch -gpg-key=<your-gpg-key-fingerprint> bookworm pextra-ce-bookworm-$(date +%Y%m%d)
EOF
chmod +x /usr/local/bin/update-pextra-mirror.sh

# Add a cron job to run this script daily at 2 AM (preserving any existing entries)
(crontab -l 2>/dev/null; echo "0 2 * * * /usr/local/bin/update-pextra-mirror.sh") | crontab -

Next Steps

Export the public GPG key used to sign the mirror so that it can be imported on your offline servers:

gpg --armor --export <your-gpg-key-fingerprint> > /usr/share/keyrings/pextra-mirror-key.asc

Keep a copy of this key file, as it will be needed to configure your offline servers to use the mirror.

To use the mirror on your offline Pextra CloudEnvironment® servers, follow the relevant setup instructions based on your airgap type:

Restricted Airgap Setup

After setting up your Pextra repository mirror, you can configure your offline servers to use this mirror in a restricted airgap environment. This guide will walk you through the steps to set up your offline servers to access the mirrored repository.

First, transfer the exported mirror GPG key file (see Next Steps above) to your offline servers using your available transfer method.

Configure Repository on Offline Servers

On each offline Pextra CloudEnvironment® server:

# Backup original repository configuration
mv /etc/apt/sources.list.d/pextra-ce.list /etc/apt/sources.list.d/pextra-ce.list.backup

# Add your mirror server's GPG key to trusted keys
cp /path/to/pextra-mirror-key.asc /usr/share/keyrings/pextra-mirror-key.asc

# Update repository source to point to your mirror
echo "deb [signed-by=/usr/share/keyrings/pextra-mirror-key.asc] http://your-mirror-server/ bookworm common meta" | tee /etc/apt/sources.list.d/pextra-ce.list

# Update package cache
apt update

Verify Configuration

Test that the configuration is working correctly:

# Verify package availability
apt-cache policy pce-common

Sample output:

pce-common:
  Installed: <version>
  Candidate: <version>
  Version table:
 *** <version> 500
        500 http://your-mirror-server bookworm/meta amd64 Packages
        100 /var/lib/dpkg/status

A successful output indicates that your offline server can access the mirrored repository and retrieve package information. The setup is now complete, and your offline Pextra CloudEnvironment® servers are configured to use the repository mirror in a restricted airgap environment.

Full Airgap Setup

In a full airgap environment where no network connectivity exists between your mirror server and offline servers, you’ll need to transfer packages and configuration files using physical media. This guide covers the complete process of setting up Pextra CloudEnvironment® in a completely isolated environment.

The guide is coming soon, but here are the high-level steps:

  1. Archive the Pextra repository on your mirror server (with tar).
  2. Transfer the archive to removable media (USB drive, external HDD, etc.).
  3. Move the archive to your airgapped environment.
  4. Extract the archive on your airgapped server.
  5. Configure the repository on your fully airgapped Pextra CloudEnvironment servers to use the local file-based repository.
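The archive-and-extract round trip in steps 1 through 4 can be sketched as follows. This demo uses temporary directories as stand-ins for the two machines; all paths are illustrative. On real systems you would archive aptly's publish root (e.g. ~/.aptly/public) and extract it under a path such as /opt/pextra-mirror on the offline server.

```shell
# Stand-ins for the online mirror's publish root and the offline repo root
SRC=$(mktemp -d)    # e.g. ~/.aptly/public on the mirror server
DEST=$(mktemp -d)   # e.g. /opt/pextra-mirror on the offline server

# 1. Archive the published repository tree on the mirror server
mkdir -p "$SRC/dists/bookworm"
echo "demo" > "$SRC/dists/bookworm/Release"   # placeholder repository content
tar -czf "$SRC.tar.gz" -C "$SRC" .

# 2-4. Transfer the archive via removable media, then extract it offline
tar -xzf "$SRC.tar.gz" -C "$DEST"

# 5. Point apt at the extracted tree (illustrative sources.list entry):
# deb [signed-by=/usr/share/keyrings/pextra-mirror-key.asc] file:///opt/pextra-mirror bookworm common meta
ls "$DEST/dists/bookworm"
```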

Feedback & Contributions

In this section, we encourage users to provide feedback on their experience with the product. This includes suggestions for new features, improvements to existing features, and any other comments or concerns they may have.

We value your feedback and take it seriously. It helps us understand what is working well and what needs improvement.

As an emerging solution, we also appreciate any contributions to our documentation, whether it’s fixing typos, adding examples, or suggesting new topics. If you have a suggestion or contribution, please refer to the Contributing section for guidelines on how to submit your feedback or contribution.

Feature Requests

We continuously improve Pextra CloudEnvironment® based on user feedback and suggestions. If you have ideas for new features or enhancements that would improve your experience, we encourage you to share them with us.

Submitting Feature Requests

The primary channel for submitting feature requests is through our community forums:

  1. Visit our community forum.

  2. Create an account or log in if you already have one.

  3. Navigate to the “Feature Requests” section.

  4. Click on “New Topic” to create a new post.

  5. Provide a clear, descriptive title for your feature request.

  6. In the description, include:

    • A detailed explanation of the requested feature
    • The problem it solves or the value it provides
    • Your use case and why this feature would be beneficial
    • Any relevant examples, screenshots, or mockups (if applicable)
  7. Submit your feature request.

What Happens After Submission

After submitting your feature request:

  1. Community Discussion: Other users may comment on your request, adding their perspectives or use cases.

  2. Feedback Collection: Pextra team members monitor the forums and gather feature requests.

  3. Evaluation: Our team evaluates requests based on factors such as:

    • Alignment with product vision
    • Number of users who would benefit
    • Technical feasibility
    • Implementation complexity
  4. Prioritization: Approved features are prioritized in our development roadmap.

  5. Implementation: When a feature is scheduled for development, we may reach out for additional information.

General Feedback

We value your opinions about Pextra CloudEnvironment® and are committed to continuously improving our product based on user feedback. Your insights help us understand what’s working well and where we can make enhancements to better serve your needs.

Providing General Feedback

The most effective way to share your general feedback is through our community forums:

  1. Visit our community forum.

  2. Create an account or log in if you already have one.

  3. Navigate to the appropriate section for general feedback.

  4. Click on “New Topic” to create a new post.

  5. Provide a descriptive title that summarizes your feedback.

  6. In the description, include:

    • Your overall experience with the product
    • Specific aspects you find particularly useful or challenging
    • Any suggestions for improvements
    • Context about your use case and environment
  7. Submit your feedback.

Examples of Feedback

We encourage various types of feedback, including:

  • Comments on the user interface and experience

  • Suggestions for improving our guides and documentation

  • Reports about system performance in your environment

  • Feedback on how well Pextra CloudEnvironment® works with other tools

  • Overall thoughts about the product and its value

How We Use Your Feedback

When you share your feedback:

  1. Other users may respond with their own experiences or suggestions.

  2. Your feedback directly influences our product roadmap and development priorities.

  3. We use your insights to make incremental improvements to the product.

We appreciate you taking the time to share your thoughts with us. Your feedback is essential to helping us build a better product for all users.

Contributing

We welcome contributions to our documentation. Whether you want to fix a typo, add examples, or suggest new topics, your contributions are valuable to us. Below are the guidelines for contributing to our documentation.

How to Contribute

One-Click Contribution

  1. If you find a typo or want to suggest an improvement, click the notepad with a pencil icon at the top right of the page: Edit this page

    • This will take you to the GitHub page for that file.
    • If you are logged in to GitHub, you can edit the file directly in your browser. If you are not logged in, you will be prompted to log in or create an account.
  2. Make your changes in the online editor.

  3. Click “Propose changes” to create a pull request (PR) with your changes.

    • Commit your changes with a clear and descriptive commit message. We use Conventional Commits for commit messages, so please follow that format: One-click pull request
  4. Wait for feedback from the maintainers. They may request changes or approve your PR. Once approved, your changes will be merged.

  5. Celebrate your contribution! 🎉

Full Development Setup

  1. Create a GitHub account if you don’t have one.

  2. Fork the repository by clicking the “Fork” button at the top right of the page: Fork the repository

  3. Clone your forked repository to your local machine:

    git clone https://github.com/<your-username>/documentation.git
    
    
  4. Create a new branch for your changes:

    git switch -c <name>/<feature>
    
  5. Set up your development environment:

    • Install the necessary dependencies.
    • Follow the instructions in the repository’s README for setting up your local environment.
  6. Make your changes to the documentation files.

    • Use Markdown for formatting.
    • Follow the existing style and structure of the documentation.
  7. Commit your changes with a clear and descriptive commit message. We use Conventional Commits for commit messages, so please follow that format:

    git add .
    git commit -m "fix: correct typo in installation guide"
    
  8. Push your changes to your forked repository:

    git push origin <name>/<feature>
    
  9. Create a pull request (PR) to the original repository: Create a pull request

    • Navigate to the original documentation repository.
    • Click on the “Pull Requests” tab.
    • Click on “New Pull Request.”
    • Select your branch and click “Create Pull Request.”
    • Provide a clear description of your changes and why they are needed.
  10. Wait for feedback from the maintainers. They may request changes or approve your PR. Once approved, your changes will be merged.

  11. Celebrate your contribution! 🎉