# Pextra Documentation > User documentation for the Pextra CloudEnvironment® private cloud management platform. # Introduction Pextra CloudEnvironment® is a modern private cloud management and virtualization platform. It is capable of managing globally-distributed datacenters and provides a unified, multi-tenant management interface for all resources. It is designed to be highly scalable and flexible, with a focus on security and ease of use. Storage, networking, and compute resources are completely abstracted and software-defined, allowing for easy management and automation of all aspects of the deployment. This guide provides rich user documentation on how to install, administer, and use Pextra CloudEnvironment®. This guide assumes minimal prior knowledge, and is designed to be accessible to users of all skill levels, from beginners to experts. ## License This documentation is licensed under the [Creative Commons Attribution-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-sa/4.0/) license. # For AI Agents We support [the /llms.txt standard](https://llmstxt.org/) for providing structured context to help LLMs understand Pextra CloudEnvironment®. The following /llms.txt files are available for use: - [/llms.txt](/llms.txt): A shorter file that provides links and metadata about the documentation. - [/llms-full.txt](/llms-full.txt): The complete documentation in a single file. # Pre-Installation Steps Before installing Pextra CloudEnvironment®, ensure that you have completed all the items in this checklist. This will help ensure a smooth installation process and optimal performance of your private cloud environment. ## a) Check System Requirements 1. Review the [system requirements](./system-requirements/index.md) for Pextra CloudEnvironment®. 2. Check for any [unsupported configurations](./system-requirements/unsupported-configurations.md) that may affect your installation. 3. 
For production workloads, review the [officially-supported servers list](./system-requirements/supported-servers.md) for optimal performance. ## b) Obtain License Keys 1. Visit [portal.pextra.cloud](https://portal.pextra.cloud) to obtain a Pextra CloudEnvironment® license key (to get a free evaluation license, fill out the form [here](https://pextra.cloud/contact-us/#f)). One license per node is required. This license is required at installation time. 2. Visit [cockroachlabs.cloud](https://cockroachlabs.cloud) to obtain a CockroachDB license key. One license per complete deployment (spanning all datacenters, clusters, and nodes) is required. This license is required after installation. ## c) Prepare Installation Media 1. Download the Pextra CloudEnvironment® ISO from the portal or the link provided in your license email. 2. [Verify the ISO](./installation/verifying.md) checksum to ensure file integrity and authenticity. 3. [Prepare a bootable USB drive](./installation/preparing.md) for installation. ## d) Back Up Existing Data 1. Back up any existing data on the servers that will be used for installation, as the installation process may overwrite existing data. ## Additional Resources 1. Familiarize yourself with the [support subscriptions](https://portal.pextra.cloud) available for Pextra CloudEnvironment®. 2. Join the [community forums](https://forum.pextra.cloud/) for additional support and to connect with other users. 3. Review other documentation pages for detailed guides and troubleshooting tips. # System Requirements In this section, the system requirements, including CPU, memory, storage, and network requirements, are outlined for the Pextra CloudEnvironment® platform. # Hardware Requirements Every node running Pextra CloudEnvironment® must meet the following minimum hardware requirements. These requirements are designed to ensure optimal performance and reliability of the platform. 
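The requirement checks below can be rehearsed from a shell on an existing Linux system before you commit hardware. This is an illustrative sketch only, not an official Pextra tool, and the `vmx`/`svm`/`avx2` flag names apply to x86_64 hosts:

```bash
# Illustrative pre-flight check for an x86_64 Linux host (not an official tool).
# Minimums checked: 4 CPU cores, 8 GB RAM, VT-x/AMD-V (vmx/svm flags), AVX2.
cores=$(nproc)
# MemTotal is reported in kB; an 8 GB machine typically shows ~7.8 GB, so allow 7.
mem_gb=$(awk '/MemTotal/ { printf "%d", $2 / 1048576 }' /proc/meminfo)
virt=$(grep -E -c 'vmx|svm' /proc/cpuinfo || true)
avx2=$(grep -c 'avx2' /proc/cpuinfo || true)

[ "$cores" -ge 4 ] && echo "CPU cores:  OK ($cores)" || echo "CPU cores:  below minimum ($cores)"
[ "$mem_gb" -ge 7 ] && echo "Memory:     OK (${mem_gb} GB)" || echo "Memory:     below minimum (${mem_gb} GB)"
[ "$virt" -ge 1 ] && echo "VT-x/AMD-V: OK" || echo "VT-x/AMD-V: not detected"
[ "$avx2" -ge 1 ] && echo "AVX2:       OK" || echo "AVX2:       not detected"
```

On `aarch64` hosts the `vmx`/`svm` and `avx2` flags do not appear in `/proc/cpuinfo`, so only the core and memory lines of this sketch apply there.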
## Minimum Hardware Requirements > [!NOTE] > While it is possible to run the platform with these specifications, it is not recommended for deployment in production environments. | Component | Requirement | |-----------|---------------------| | CPU | 4 cores, x86_64/aarch64[^1], VT-x/AMD-V, AVX2[^2] | | Memory | 8 GB | | Storage | 16 GB HDD | | Network | 1 Gbps | ## Recommended Hardware Requirements > [!NOTE] > The recommended hardware requirements are based on the average workload of a small to medium-sized business. For larger deployments, consider scaling up the hardware specifications accordingly. | Component | Requirement | |-----------|---------------------| | CPU | 8 cores | | Memory | 32 GB | | Storage | 128 GB SSD | | Network | 1 Gbps | ## Notes [^1]: The platform is only supported on 64-bit CPUs with the `x86_64` (`amd64`) or `aarch64` (`arm64`) architectures. `arm64` support was added in release `1.10.5+6816a0c`. 32-bit CPUs will never be supported. [^2]: These extensions are supported by all modern CPUs. The platform may function without virtualization extensions (VT-x/AMD-V), but AVX2 is a strict requirement. Running the platform without virtualization extensions is not supported nor recommended. # Officially-Supported Servers The following enterprise-grade servers are officially supported by Pextra CloudEnvironment® and have been tested for compatibility and performance. These servers are recommended for production environments and are known to work well with the platform. 
| Server Model | Manufacturer | CPU | Memory | Storage[^1] | |--------------|--------------|-------------|----------------|----------------| | PowerEdge R620 | Dell EMC | Dual Xeon E5-2600 | 128GB RAM | 600GB SSD | | PowerEdge R640 | Dell EMC | Dual Xeon | 128GB RAM | 1TB NVMe | | PowerEdge R740 | Dell EMC | Dual Xeon | 256GB RAM | 2TB NVMe | | ProLiant DL360 Gen10 | HPE | Dual Xeon | 128GB RAM | 1TB NVMe | | ProLiant DL380 Gen10 | HPE | Dual Xeon | 256GB RAM | 2TB NVMe | | ThinkSystem SR630 | Lenovo | Dual Xeon | 128GB RAM | 1TB NVMe | | ThinkSystem SR650 | Lenovo | Dual Xeon | 256GB RAM | 2TB NVMe | | SYS-6019P-WTR | Supermicro | Dual Xeon | 128GB RAM | 1TB NVMe | | SYS-6029P-TNRT | Supermicro | Dual Xeon | 256GB RAM | 2TB NVMe | --- Generally, any server that meets the minimum hardware requirements should work with Pextra CloudEnvironment®. However, we recommend using enterprise-grade servers for production environments to ensure optimal performance and reliability. ## Notes [^1]: Hardware-based RAID cards are **NOT** supported. Please see the [Unsupported Configurations](./unsupported-configurations.md) section for more information. # Unsupported Configurations Pextra CloudEnvironment® runs on a variety of hardware configurations, but there are certain configurations that are not supported. This list is not exhaustive, but it covers the most common unsupported configurations. If you encounter any issues with your server configuration, please contact support for assistance. ## Hardware-Based RAID Cards Hardware-based RAID cards are **NOT** supported. The platform requires direct access to the underlying storage devices for optimal performance and reliability. Hardware RAID can introduce complexity and potential issues with data integrity, especially in virtualized environments. **Workaround:** For each disk, create a RAID0 (striped) array with a single disk. > [!WARNING] > This has been reported to work, but it is not officially supported. 
Use this workaround at your own risk. ## 32-Bit CPUs Pextra CloudEnvironment® requires a 64-bit CPU; 32-bit CPUs are not supported and never will be. **Workaround:** Use a different server with a 64-bit CPU architecture. # Installation This section provides instructions for downloading the installation ISO and preparing installation media. It includes steps for creating a bootable USB drive or DVD, as well as running the Pextra CloudEnvironment® installer on your server. # Downloading the Installer > [!NOTE] > The ISO file is approximately 2 GB in size. Make sure you have enough disk space before downloading, and a stable internet connection to avoid download interruptions. 1. Log into the [Pextra Customer Portal](https://portal.pextra.cloud). 2. Click on "Download ISO", then click on "Generate" to generate download links for the latest version of Pextra CloudEnvironment®: ![Pextra Customer Portal](./images/00-downloadiso.png) 3. Click on the download link to download the ISO file. After the download is complete, it is strongly recommended to [verify the integrity of the downloaded ISO file](./verifying.md) using the SHA256 and GPG signatures provided on the download page. # Verifying File Integrity > [!NOTE] > This step is optional but highly recommended. Verifying the integrity of the downloaded ISO file ensures that the file came from Pextra Inc. and has not been tampered with. Follow the instructions below for your operating system to verify the file integrity. If at any point file integrity verification fails, do not proceed with the installation. Before verifying GPG signatures, [download our GPG public key](https://pextra.cloud/pextra-gpg-key.asc). ## Linux Linux users can use the `sha256sum` and `gpg` commands to verify the SHA256 checksum and GPG signature of the downloaded ISO file. `sha256sum` is usually pre-installed on most Linux distributions, while `gpg` is also commonly available.
If you do not have `gpg` installed, you can install it using your package manager (e.g., `apt`, `pacman`, or `yum`). ### SHA256 Checksum 1. Make sure to download the SHA256 checksum file (the file that ends with `.sha256`) from the Pextra Customer Portal. 2. Open a terminal and navigate to the directory where the downloaded ISO file and SHA256 checksum file are located. 3. Calculate the SHA256 checksum of the downloaded ISO file using the following command: ```bash sha256sum pextra-ce.iso ``` 4. Compare the output with the SHA256 checksum provided on the download page. If they match, the file is intact. Alternatively, run `sha256sum -c pextra-ce.iso.sha256` to perform the comparison automatically. ### GPG Signature Two signatures are provided: one for the SHA256 checksum file and one for the ISO file itself. Verifying the SHA256 checksum file is sufficient and faster. 1. Make sure to download the GPG signature file (the file that ends with `.sha256.asc`) from the Pextra Customer Portal. 2. Import the Pextra Inc. GPG public key using the following command: ```bash gpg --import pextra-gpg-key.asc ``` 3. Verify the SHA256 checksum file using the following command: ```bash gpg --verify pextra-ce.iso.sha256.asc pextra-ce.iso.sha256 ``` 4. If the output indicates that the signature is valid, the file is intact. If it indicates that the signature is not valid, do not proceed with the installation. Verifying the ISO file itself is similar: 1. Make sure to download the GPG signature file (the file that ends with `.iso.asc`) from the Pextra Customer Portal. 2. Verify the ISO file using the following command: ```bash gpg --verify pextra-ce.iso.asc pextra-ce.iso ``` 3. If the output indicates that the signature is valid, the file is intact. If it indicates that the signature is not valid, do not proceed with the installation. ## macOS macOS users can use the `shasum` and `gpg` commands to verify the SHA256 checksum and GPG signature of the downloaded ISO file (`gpg` is not preinstalled on macOS; it can be installed with [Homebrew](https://brew.sh): `brew install gnupg`). ### SHA256 Checksum 1.
Make sure to download the SHA256 checksum file (the file that ends with `.sha256`) from the Pextra Customer Portal. 2. Open a terminal and navigate to the directory where the downloaded ISO file and SHA256 checksum file are located. 3. Calculate the SHA256 checksum of the downloaded ISO file using the following command: ```bash shasum -a 256 pextra-ce.iso ``` 4. Compare the output with the SHA256 checksum provided on the download page. If they match, the file is intact. ### GPG Signature 1. Make sure to download the GPG signature file (the file that ends with `.sha256.asc`) from the Pextra Customer Portal. 2. Import the Pextra Inc. GPG public key using the following command: ```bash gpg --import pextra-gpg-key.asc ``` 3. Verify the SHA256 checksum file using the following command: ```bash gpg --verify pextra-ce.iso.sha256.asc pextra-ce.iso.sha256 ``` 4. If the output indicates that the signature is valid, the file is intact. If it indicates that the signature is not valid, do not proceed with the installation. Verifying the ISO file itself is similar: 1. Make sure to download the GPG signature file (the file that ends with `.iso.asc`) from the Pextra Customer Portal. 2. Verify the ISO file using the following command: ```bash gpg --verify pextra-ce.iso.asc pextra-ce.iso ``` 3. If the output indicates that the signature is valid, the file is intact. If it indicates that the signature is not valid, do not proceed with the installation. ## Windows Windows users can use the built-in `CertUtil` command-line utility to verify the SHA256 checksum. For GPG signatures, [GPG4Win](https://gpg4win.org/) can be used, as Windows does not have a built-in method to verify GPG signatures. GPG4Win is free and open source software. ### SHA256 Checksum 1. Make sure to download the SHA256 checksum file (the file that ends with `.sha256`) from the Pextra Customer Portal. 2. Open PowerShell and navigate to the directory where the downloaded ISO file and SHA256 checksum file are located. 3.
Calculate the SHA256 checksum of the downloaded ISO file using the following command: ```powershell CertUtil -hashfile pextra-ce.iso SHA256 ``` 4. Compare the output with the SHA256 checksum provided on the download page. If they match, the file is intact. 5. If the output does not match, do not proceed with the installation. ### GPG Signature 1. Download the latest version of [GPG4Win](https://gpg4win.org/) and install it. 2. Make sure to download the GPG signature file (the file that ends with `.sha256.asc`) from the Pextra Customer Portal. 3. Open PowerShell and navigate to the directory where the downloaded ISO file and GPG signature file are located. 4. Import the Pextra Inc. GPG public key using the following command: ```powershell gpg --import pextra-gpg-key.asc ``` 5. Verify the SHA256 checksum file using the following command: ```powershell gpg --verify pextra-ce.iso.sha256.asc pextra-ce.iso.sha256 ``` 6. If the output indicates that the signature is valid, the file is intact. If it indicates that the signature is not valid, do not proceed with the installation. Verifying the ISO file itself is similar: 1. Make sure to download the GPG signature file (the file that ends with `.iso.asc`) from the Pextra Customer Portal. 2. Verify the ISO file using the following command: ```powershell gpg --verify pextra-ce.iso.asc pextra-ce.iso ``` 3. If the output indicates that the signature is valid, the file is intact. If it indicates that the signature is not valid, do not proceed with the installation. # Preparing Installation Media Now that you have downloaded the ISO installer, you need to create a bootable USB drive or DVD. Follow the instructions below for your operating system to create the installation media. > [!WARNING] > Creating a bootable USB drive will erase all data on the selected drive. Make sure to back up any important data before proceeding. ## Linux Linux users can use the `dd` command to create a bootable USB drive. 
`dd` is a built-in command and does not require any additional software. 1. Insert a USB drive with at least 8 GB of space. Make sure to back up any important data on the drive, as it will be formatted. 2. Open a terminal and run the command `lsblk` to identify the device name of the USB drive (e.g., `/dev/sdX`, where `X` is the letter assigned to your USB drive). 3. Unmount the USB drive using the command (you may need to use `sudo`): ```bash umount /dev/sdX* ``` 4. Use the `dd` command to create a bootable USB drive. Replace `/path/to/pextra-ce.iso` with the path to the downloaded ISO file and `/dev/sdX` with the device name of your USB drive (e.g., `/dev/sdb`): ```bash sudo dd if=/path/to/pextra-ce.iso of=/dev/sdX bs=4M status=progress ``` 5. After the process is complete, run the following command to ensure all data is written to the USB drive: ```bash sync ``` 6. Safely eject the USB drive using the command (you may need to use `sudo`): ```bash eject /dev/sdX ``` Your USB drive is now ready to be used for installation. ## macOS macOS users can also use the `dd` command to create a bootable USB drive. The process is similar to Linux, but with some differences in the commands used. 1. Insert a USB drive with at least 8 GB of space. Make sure to back up any important data on the drive, as it will be formatted. 2. Open a terminal and run the command `diskutil list` to identify the device name of the USB drive (e.g., `/dev/diskX`, where `X` is the number assigned to your USB drive). 3. Unmount the USB drive using the command (you may need to use `sudo`): ```bash diskutil unmountDisk /dev/diskX ``` 4. Use the `dd` command to create a bootable USB drive. Replace `/path/to/pextra-ce.iso` with the path to the downloaded ISO file and `/dev/diskX` with the device name of your USB drive (e.g., `/dev/disk2`): ```bash sudo dd if=/path/to/pextra-ce.iso of=/dev/diskX bs=4m status=progress ``` 5.
After the process is complete, run the following command to ensure all data is written to the USB drive: ```bash sync ``` 6. Safely eject the USB drive using the command (you may need to use `sudo`): ```bash diskutil eject /dev/diskX ``` Your USB drive is now ready to be used for installation. ## Windows Windows users can use [Rufus](https://rufus.ie/) in DD mode to create a bootable USB drive, as there is no built-in mechanism to create bootable USB drives from ISO files. Rufus is free and open source software. 1. Download the latest version of [Rufus](https://rufus.ie/) and run it. 2. Insert a USB drive with at least 8 GB of space. Make sure to back up any important data on the drive, as it will be formatted. In Rufus, select the USB drive by clicking on the "Device" dropdown menu: ![Rufus](./images/00-rufus.png) 3. Select the downloaded ISO file by clicking on the "SELECT" button. Navigate to the location where you saved the ISO file and select it: ![Rufus select ISO](./images/01-rufus-iso.png) 4. With the USB and ISO ready, the window should look similar to this. Click the "START" button to begin the process: ![Rufus ready](./images/02-rufus-ready.png) 5. A pop-up window will appear. Select "Write in DD Image mode" and click "OK": ![Rufus select DD](./images/03-rufus-dd.png) 6. Another pop-up window will appear, warning you that all data on the USB drive will be erased. Click "OK" to proceed: ![Rufus confirm](./images/04-rufus-confirm.png) 7. Once the process is complete, the bar will be green and say "READY". You can close Rufus: ![Rufus complete](./images/05-rufus-complete.png) 8. Safely eject the USB drive from your computer. Your USB drive is now ready to be used for installation. # Booting from the Installation Media 1. Insert the bootable USB drive or DVD into the server. 2. Power on the server and enter the BIOS/UEFI settings (usually by pressing `F2`, `F10`, or `DEL` during startup). 3. Change the boot order to prioritize the USB drive or DVD. 4.
Save the changes and exit the BIOS/UEFI settings. 5. The server should boot from the installation media, and you will see the bootloader screen: ![Pextra CloudEnvironment® Installer](./images/00-installer.png) Press the `Enter` key to start the installation process. You can now proceed with the installation steps. # Installation Steps Follow the steps below to install Pextra CloudEnvironment® on your server. ## Steps 1. Acknowledge the End User License Agreement (EULA). 2. Configure the management network. - The installer will automatically detect network interface configuration from DHCP. - The server IP **must not** change after installation. Changing it **will** cause breakage. - If your network interface does not appear, please [let us know](../../issues/reporting/index.md). 3. Enter your license key. - If you do not have a license key, refer to the [Pre-Installation](../pre-installation.md) section for more information on obtaining a license. 4. Configure the default organization and timezone. - This is the **root** organization (the owner of the deployment) that has access to all resources. - Additional organizations can be created later. - It is highly recommended to set the timezone to `Etc/UTC`; however, you can choose your local timezone if needed. 5. Configure the administrator user. - This user is the **root** user of the deployment and has access to all resources. - Choose a strong password and make sure to remember it. - After the installation, it is recommended to create an additional user with limited permissions for day-to-day operations. - For command-line access, the Linux user `root`'s password is set to the same password as the administrator user. 6. Configure the boot disk. - The installer will automatically detect available disks. Choose the disk where you want to install the operating system. - The installer will format the selected disk, so make sure to back up any important data before proceeding. 7. Finalize the installation.
- A summary of your configuration will be displayed. Review the settings and click "Install" to begin the installation process. 8. Wait for the installation to complete. - The installation process may take some time, depending on your network speed and hardware configuration. Typically, it takes about 20-30 minutes. - If you see any errors during the installation, please [let us know](../../issues/reporting/index.md). 9. Reboot the server. - If you did not select "Auto-reboot" during the installation, you will need to click "Reboot" to restart the server. - Remove the installation media (USB drive or DVD) before rebooting; otherwise, the server may boot from the installation media again. Your server is now ready to use! To access the web interface, please refer to the [Accessing the Web Interface](../../user-guide/web-interface/index.md) section. You can now proceed to perform [post-installation steps](../post-installation.md) to configure your deployment. # Post-Installation Steps After the installation is complete, some additional steps must be performed to ensure that your Pextra CloudEnvironment® deployment is fully functional and optimized for your needs. ## a) Upgrade to the latest version: Refer to the [System Upgrade](../user-guide/nodes/system-upgrade.md) section for instructions on how to upgrade to the latest version. ## b) Set CockroachDB license key: Refer to the [Set CockroachDB License Key](../user-guide/nodes/set-cockroachdb-license-key.md) section for instructions on how to set the CockroachDB license key. This is **not** required if your node will join an existing, licensed cluster. ## c) Join the node to an existing cluster (if applicable): Refer to the [Cluster Management](../user-guide/clusters.md) section for instructions on how to join a node to a new or existing cluster.
## d) Configure user accounts: Refer to the [Identity Access Management (IAM)](../user-guide/organizations/iam.md) section for instructions on how to create and manage user accounts and permissions. ## e) Configure networking: Refer to the [Network Management](../user-guide/networks.md) section for instructions on how to configure network settings. ## f) Configure storage pools: Refer to the [Storage Management](../user-guide/storage/index.md) section for instructions on how to create and manage storage pools. ## g) Configure AI providers: Refer to the [AI Providers](../user-guide/organizations/ai-providers/index.md) section for instructions on how to add and configure AI providers. ## h) Monitor system performance: Refer to the [Monitoring & Metrics](../user-guide/monitoring-metrics.md) section for instructions on how to monitor system performance. # Web Interface This section describes how to access and navigate the web interface of Pextra CloudEnvironment®. The web interface is the primary tool for managing your deployment, allowing you to perform various tasks, monitor system metrics, and configure settings. > [!NOTE] > The web interface requires a modern web browser with JavaScript enabled. It is recommended to use Mozilla Firefox or Google Chrome for the best experience. # Accessing the Web Interface You can access the Pextra CloudEnvironment® web interface by entering the management IP address in your web browser. The default URL is `https://<management-ip>:5007`, where `<management-ip>` is the IP address you configured during the installation process. > [!NOTE] > The web interface uses HTTPS for secure communication. You will see a self-signed certificate warning in your browser. This is normal, as the certificate is generated during the installation process. You can safely ignore this warning and proceed to the web interface. # Logging In To log in to the web interface, use the credentials you set during the installation process.
The default username is `pceadmin`, and the password is the one you specified during installation: ![Login Page](./images/00-login.png) Once logged in, you will be directed to the current node's page[^1]. ## Notes [^1]: That is, the node whose IP address you connected to (especially relevant for nodes in a cluster). The node that you are currently connected to is shown with a light green dot next to the node's entry in the resource tree. All requests are proxied to the node that you are connected to. # Resource Tree On the left side of the web interface, you will find a tree view that displays the hierarchy of your deployment. This view provides a complete overview of all organizations, datacenters, clusters, nodes, and instances within your deployment. You can expand and collapse the tree's nodes to navigate through the different levels of your infrastructure: ![Resource Tree](./images/01-tree.png) # AI Assist Throughout the Pextra CloudEnvironment® web interface, you will find the **AI Assist** button, which provides context-sensitive suggestions and assistance. Describe your task in natural language, and the AI Assist feature will generate relevant suggestions to facilitate your work[^1]. See the example below for a demonstration of how to use the AI Assist feature. *Before*: ![AI Assist Button](./images/02-ai-assist-nl.png) *After*: ![AI Assist Suggestions](./images/02-ai-assist-result.png) An administrator of the organization [must configure at least one AI provider](../organizations/ai-providers/add.md) for the AI Assist feature to function. If no AI providers are configured, the AI Assist feature will not be available in the web interface. ## Notes [^1]: The AI Assist feature is powered by third-party AI providers, as configured in your organization settings. The quality and accuracy of the suggestions may vary based on the provider and the specific task at hand. Always review AI-generated suggestions before applying them to your environment.
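Before moving on, you can check from your workstation that a node's web interface is reachable. This is an unofficial convenience sketch: the `-k` flag tells `curl` to accept the self-signed certificate described above, and `192.0.2.10` is a placeholder address that you should replace with your node's management IP:

```bash
# Hypothetical reachability check for the web interface on port 5007.
MGMT_IP="192.0.2.10"   # placeholder; substitute your node's management IP
curl -k -s -o /dev/null --connect-timeout 5 -w "HTTP %{http_code}\n" \
  "https://${MGMT_IP}:5007" \
  || echo "web interface not reachable at https://${MGMT_IP}:5007"
```

An `HTTP 200` (or any HTTP status) means the interface is up; a "not reachable" message points to a network, firewall, or service problem.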
# Node Management This section provides a guide to managing individual nodes within your deployment. Nodes are the physical or virtual servers[^1] that run Pextra CloudEnvironment®. They serve as the foundation of your infrastructure, providing the compute, storage, and network resources required by your deployment. The ID prefix for nodes is `node-`[^2]. ## Notes [^1]: Running Pextra CloudEnvironment® in a virtual machine is in beta. Try running Pextra CloudEnvironment® inside of Pextra CloudEnvironment®! [^2]: Resources in Pextra CloudEnvironment® are identified by unique IDs. Node IDs will have the prefix `node-`, followed by a unique identifier (e.g., `node-qthm_iLrflJ_DtSS1l4Gx`). # System Upgrade System upgrades should be routinely performed in order to ensure that the latest bug fixes, security patches, and features are available. > [!NOTE] > A valid license key must be present when upgrading Pextra CloudEnvironment®. To set the node's license key, refer to the [Set License Key](./set-license-key.md) section. > [!WARNING] > System upgrades will fail if they are not run as the `root` Linux user. ## Console 1. Access the node's console through SSH or through the "Console" tab in the node view. 2. First, update the node's package index by running the following command: ```bash apt update ``` This command may take some time to finish depending on the node's connection speed. 3. If any system upgrades are available, the following message will be shown: ``` [xx] packages can be upgraded. Run 'apt list --upgradable' to see them. ``` If this message is *not* shown, the node is on the latest version. No action is required. 4. If the above message is shown, the node can be upgraded to the latest version by running the following command: ```bash apt upgrade ``` This command may take a while to finish depending on the number of upgrades and the node's connection speed. # Set License Key License keys are long-lived and typically do not need to be changed. 
However, if you need to change the license key, you can do so by following these steps: > [!TIP] > License keys can be purchased from the [Pextra Customer Portal](https://portal.pextra.cloud). Support subscriptions are also available for purchase. ## Web Interface 1. Right-click on the node in the resource tree and select **Set License Key**: ![Right-Click](./images/00-rightclick.png) 2. A modal will appear. The current license key along with its validity will be displayed. Enter the new license key in the text box and click **Confirm**: ![Set License Key](./images/01-modal.png) 3. If any errors occur, they will be displayed; otherwise, the modal will close. For example: ![Error](./images/02-error.png) To confirm that the license key has been set, you can [check the licensing status of the node's cluster](../clusters/check-licensing-status.md). # Set CockroachDB License Key Pextra CloudEnvironment®'s highly-scalable private cloud is built on CockroachDB's distributed architecture. One license per complete deployment (spanning all datacenters, clusters, and nodes) is required. > [!WARNING] > Pextra CloudEnvironment® will not be functional after one week (7 days) without a valid CockroachDB license key. > [!TIP] > CockroachDB license keys can be obtained from [cockroachlabs.cloud](https://cockroachlabs.cloud). ## Console 1. Access the node's console through SSH or through the "Console" tab in the node view. 2. First, enter the CockroachDB console by running the following command: ```bash sudo cockroach sql --certs-dir=/usr/local/lib/cockroach/certs -u pextra_ce_pcedaemon ``` 3. Set the license key by running the following command in the CockroachDB console, replacing `<license-key>` with your CockroachDB license key: ```sql SET CLUSTER SETTING enterprise.license = '<license-key>'; ``` 4. To exit the CockroachDB console, press `CTRL+C`. For more information, visit the [CockroachDB licensing FAQs](https://www.cockroachlabs.com/docs/stable/licensing-faqs).
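To confirm that the key was applied, you can read the setting back from the same CockroachDB console. `SHOW CLUSTER SETTING` is standard CockroachDB SQL; the exact output formatting may vary between CockroachDB versions:

```sql
-- Run inside the CockroachDB console opened in step 2 above.
-- Returns the currently configured license (empty if none is set).
SHOW CLUSTER SETTING enterprise.license;
```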
# Cluster Management # Check Licensing Status The licensing status of a cluster can be checked to ensure that the license keys for all nodes in the cluster are valid. > [!TIP] > License keys can be purchased from the [Pextra Customer Portal](https://portal.pextra.cloud). Support subscriptions are also available for purchase. ## Web Interface 1. Select the cluster in the resource tree and view the page on the right. A card with a quick overview of the licensing status will be displayed: ![Cluster Page](./images/00-cluster-license-view.png) 2. For a detailed view, click on the **Cluster** tab in the right pane. The licensing status of each node in the cluster will be displayed: ![Cluster Page](./images/01-detailed-license-view.png) # Datacenter Management # Organization Management # Identity Access Management (IAM) # AI Providers AI providers are organization-wide connections to cloud or self-hosted AI services. These providers power the [AI Assist](../../web-interface/ai-assist.md) feature, enabling users to use natural language to interact with the Pextra CloudEnvironment® web interface. At least one AI provider must be configured and enabled for the AI Assist feature to function. If no AI providers are configured, the AI Assist feature will not be available in the web interface. For a list of supported AI providers, see the [Supported AI Providers](./supported.md) section. The ID prefix for AI providers is `orgai-`[^1]. ## Notes [^1]: Resources in Pextra CloudEnvironment® are identified by unique IDs. AI providers will have the prefix `orgai-`, followed by a unique identifier (e.g., `orgai-qthm_iLrflJ_DtSS1l4Gx`). 
# Supported AI Providers The following AI providers are supported in Pextra CloudEnvironment®: | Name | ID | Cloud-Hosted | |-|-|-| | [OpenAI](https://openai.com)| `openai` | ✅ | | [Anthropic](https://www.anthropic.com)| `anthropic` | ✅ | | [Google](https://ai.google)| `google` | ✅ | | [xAI](https://x.ai)| `xai` | ✅ | | [Mistral](https://mistral.ai)| `mistral` | ✅ | | [DeepInfra](https://deepinfra.com)| `deepinfra` | ✅ | | [DeepSeek](https://deepseek.com)| `deepseek` | ✅ | | [Cerebras](https://www.cerebras.ai)| `cerebras` | ✅ | | [Groq](https://groq.com)| `groq` | ✅ | | [Perplexity](https://www.perplexity.ai)| `perplexity` | ✅ | | [Cohere](https://cohere.com)| `cohere` | ✅ | | [Ollama](https://ollama.com)| `ollama` | ❌ (self-hosted) | | [LM Studio](https://lmstudio.ai)| `lmstudio` | ❌ (self-hosted) | ## OpenAI-Compatible Providers For AI providers that are OpenAI-compatible but are not explicitly listed above, use the `openai` provider type and configure a custom base URL. For more information, refer to the [Add AI Provider](./add.md#adding-openai-compatible-providers) section. # List AI Providers List AI providers to ensure that Pextra CloudEnvironment® AI features are properly configured and available for use. ## Web Interface 1. Select the organization in the resource tree and view the page on the right. Click on the **AI Providers** tab in the right pane. The AI providers will be listed: ![AI Providers Page](./images/00-ai-providers.png) > [!NOTE] > For security reasons, the API keys for AI providers are not displayed in the web interface. API keys cannot be retrieved once set. Store your API keys securely. To edit properties of an AI provider, refer to the [Edit AI Provider](./edit.md) section. # Add AI Provider Add an AI provider to your organization to enable AI features in the Pextra CloudEnvironment® web interface. At least one AI provider must be configured and enabled for the AI Assist feature to function. 
If no AI providers are configured, the AI Assist feature will not be available in the web interface. > [!NOTE] > For security reasons, the API keys for AI providers are not displayed in the web interface. API keys cannot be retrieved once set. Store your API keys securely. ## Web Interface 1. Select the organization in the resource tree and view the page on the right. Click on the **AI Providers** tab in the right pane. ![AI Providers Page](./images/00-ai-providers.png) 2. Click the **Add AI Provider** button. ![Add AI Provider button](./images/01-add-ai-provider-button.png) 3. Choose the AI provider type from the dropdown list. A list of supported AI providers is available in the [Supported AI Providers](./supported.md) section. ![AI Provider type selection](./images/02-ai-provider-type-selection.png) 4. Enter the API key and custom base URL (if applicable) for the selected AI provider. ![Set API key and custom base URL](./images/03-ai-provider-api-key.png) > [!IMPORTANT] > When using a self-hosted AI provider (such as `ollama` or `lmstudio`), a custom base URL **must** be specified. For cloud-hosted providers, the base URL is pre-configured and does not need to be changed. 5. Enter a name for the AI provider, and an optional description. Disable the provider if you do not want it to be available for use immediately. ![AI Provider name and description](./images/04-ai-provider-name-desc.png) 6. Enter the name of the model to use with this provider. This model will be used for all AI Assist features unless overridden in specific configurations. ![AI Provider model](./images/05-ai-provider-model.png) 7. Click **Create** to add the AI provider to your organization. The new AI provider will be listed on the AI Providers page. ![Create AI Provider](./images/06-ai-provider-create.png) ## Adding OpenAI-Compatible Providers For AI providers that are OpenAI-compatible but are not explicitly listed above, use the `openai` provider type and configure a custom base URL. 
This allows you to connect to any service that implements the [OpenAI API specification](https://github.com/openai/openai-openapi/tree/master). When configuring an OpenAI-compatible provider: 1. Select `openai` as the provider type 2. Set the custom base URL to point to your provider's API endpoint 3. Use the appropriate API key for your chosen provider This approach works with many third-party AI services and self-hosted solutions that implement OpenAI-compatible APIs. # Edit AI Provider > [!NOTE] > For security reasons, the API keys for AI providers are not displayed in the web interface. API keys cannot be retrieved once set. Store your API keys securely. ## Web Interface 1. Select the organization in the resource tree and view the page on the right. Click on the **AI Providers** tab in the right pane. The AI providers will be listed. ![AI Providers Page](./images/00-ai-providers.png) 2. Click the pencil icon next to the AI provider you want to edit. ![Edit AI Provider button](./images/09-edit-ai-provider-button.png) 3. Update any fields as needed. ![Edit AI Provider form](./images/10-edit-ai-provider-form.png) > [!NOTE] > The API key field will be empty for security reasons. If you need to change the API key, you must enter the new key in this field. The previous key will not be displayed. 4. Click **Edit** to save your changes. The AI provider will be updated with the new configuration. ![Edit AI Provider confirm dialog](./images/11-edit-ai-provider-confirm.png) # Delete AI Provider > [!WARNING] > If you delete the last AI provider in your organization, the AI Assist feature will no longer be available in the web interface. At least one AI provider must be configured and enabled for AI Assist to function. ## Web Interface 1. Select the organization in the resource tree and view the page on the right. Click on the **AI Providers** tab in the right pane. The AI providers will be listed: ![AI Providers Page](./images/00-ai-providers.png) 2. 
Click the trash can icon next to the AI provider you want to delete: ![Delete AI Provider button](./images/07-delete-ai-provider-button.png) 3. A confirmation dialog will appear. Type in “DESTROY” and click **Confirm** to confirm the deletion of the AI provider: ![Delete AI Provider confirmation dialog](./images/08-delete-ai-provider-confirmation.png) # Network Management # Storage Management This section provides a guide to managing storage within your deployment. The Pextra CloudEnvironment® storage engine supports a variety of storage technologies, from local disks to distributed storage systems. # Storage Pools Storage pools are software-defined storage resources. Storage pools are configured on clusters and propagated to nodes. One configuration can be used across multiple nodes, allowing for flexible storage management. The ID prefix for storage pools is `pool-`[^2]. ## Storage Pool Types The following storage pool configurations are supported in Pextra CloudEnvironment®: | Pool Type | Volume Backing[^1] | Networked | Description | Notes | |-----------|------------------|------------|-------------|-------| | Directory (`directory`) | File | ❌ | Uses a directory on the node. | The default `local` storage pool is a `directory` pool. | | NetFS (`netfs`) | File | ✅ | Mounts a network filesystem (NFS or CIFS/SMB) on the node as a storage pool. Similar to a directory pool, but allows for networked storage. | Target path must not conflict with other `netfs` or `directory` pools. | | iSCSI (`iscsi`) | Block | ✅ | Uses an iSCSI target. | N/A | | Ceph RBD (`rbd`) | Block | ✅ | Uses a Ceph RBD pool. | N/A | | ZFS (`zfs`) | Block | ❌ | Uses a ZFS pool. | A ZFS pool with the same name as the storage pool must exist on each enabled node. | | LVM (`lvm`) | Block | ❌ | Uses an LVM volume group. | An LVM volume group with the same name as the storage pool must exist on each enabled node. 
| ## Notes [^1]: A storage pool can back volumes using either file-based or block-based storage. File-based storage pools use files to store data, while block-based storage pools use raw data blocks. Block-based storage pools typically provide better performance, while file-based storage pools are easier to manage. [^2]: Resources in Pextra CloudEnvironment® are identified by unique IDs. Storage pool IDs will have the prefix `pool-`, followed by a unique identifier (e.g., `pool-qthm_iLrflJ_DtSS1l4Gx`). # List Storage Pools Storage pools can be listed to view the current storage configuration in the cluster. This includes details about the storage pools, their status, and the nodes they are associated with. ## Web Interface 1. Select the cluster in the resource tree and view the page on the right. Click on the **Storage** tab in the right pane. The storage pools will be listed: ![Storage Page](./images/00-cluster-storage-pools.png) To edit associated nodes of a storage pool, refer to the [Edit Storage Pool](./edit.md) section. ### Storage Pool Status Each storage pool has a status indicator that provides information about its availability and configuration across the nodes in the cluster. The status can be one of the following: ![Storage Pool Grey Dash](./images/01-status-grey.png)
The storage pool has not been enabled on any nodes. ![Storage Pool Green Checkmark](./images/01-status-green.png)
The storage pool is available on all enabled nodes. ![Storage Pool Red X](./images/01-status-red.png)
An error has occurred while propagating the storage pool configuration to enabled nodes. Manual intervention may be required to resolve the issue. # Create Storage Pool ## Web Interface 1. Select the cluster in the resource tree and view the page on the right. Click on the **Storage** tab in the right pane. ![Storage Page](./images/00-cluster-storage-pools.png) 2. Click the **Create Pool** button. ![Create Pool Button](./images/02-create-pool-button.png) 3. Choose the [storage pool type](./index.md#storage-pool-types), enter a name, and enter the required configuration metadata. ![Create Pool Form](./images/03-create-pool-form.png) 4. Click **Create** to create the storage pool. Initially, the new storage pool will not be enabled on any nodes. To enable the new storage pool on nodes, refer to the [Edit Storage Pool](./edit.md) section. # Edit Storage Pool Currently, only node associations with storage pools can be modified. The name of a storage pool cannot be changed after creation. ## Web Interface 1. Select the cluster in the resource tree and view the page on the right. Click on the **Storage** tab in the right pane. ![Storage Page](./images/00-cluster-storage-pools.png) 2. Click on the pencil icon in the card of the storage pool you want to edit. ![Edit Pool Button](./images/06-edit-pool-button.png) 3. In the edit form, you can select the nodes on which this storage pool should be enabled. The nodes that are already associated with this storage pool will be selected by default. ![Edit Pool Form](./images/07-edit-pool-form.png) 4. Click **Save** to apply the changes. The storage pool will be enabled on the selected nodes, and the changes will be propagated according to the [storage pool propagation algorithm](./propagation.md). This may take some time. # Destroy Storage Pool > [!NOTE] > Storage pools cannot be destroyed if there are volumes on enabled nodes. 
All volumes must be destroyed, or all enabled nodes with volumes [must have their associations removed](./edit.md). This limitation will be addressed in the future. ## Web Interface 1. Select the cluster in the resource tree and view the page on the right. Click on the **Storage** tab in the right pane. ![Storage Page](./images/00-cluster-storage-pools.png) 2. Click on the X icon in the card of the storage pool you want to destroy. ![Destroy Pool Button](./images/04-destroy-pool-button.png) 3. A confirmation dialog will appear. Type in "DESTROY" and click **Confirm** to confirm the destruction of the storage pool. ![Destroy Pool Confirmation](./images/05-destroy-pool-confirmation.png) 4. The storage pool will be marked for destruction, and will be cleaned up according to the [storage pool propagation algorithm](./propagation.md). This may take some time. During this time, the storage pool's name will be unavailable for reuse. # Storage Pool Propagation Storage pools are propagated across the cluster at regular intervals by a system job, ensuring that all nodes have the latest configuration and state. This propagation process is crucial for maintaining consistency and availability of storage resources. > [!NOTE] > Creating a storage pool with the same name as a storage pool that is marked for deletion is not allowed. If you need to reuse the name, you must wait for the storage pool to be fully cleaned up. ## Propagation Algorithm The propagation algorithm is illustrated in the following diagram: # Storage Volumes Storage volumes (or just "volumes") are virtual storage devices that can be attached to instances. Volumes are allocated from storage pools, stored on individual nodes, and are used to store instance disks, snapshots, and other data. Volumes can be attached to instances to provide additional storage capacity. Volumes can be resized, detached, and destroyed as needed. The ID prefix for volumes is `vol-`[^1]. 
## Notes [^1]: Resources in Pextra CloudEnvironment® are identified by unique IDs. Storage volume IDs will have the prefix `vol-`, followed by a unique identifier (e.g., `vol-qthm_iLrflJ_DtSS1l4Gx`). # List Volumes ## Web Interface 1. Select the node in the resource tree and view the page on the right. Click on the **Storage** tab in the right pane. ![Storage Page](./images/00-node-storage-volumes.png) 2. Click on the **Volumes** tab to view the list of volumes associated with the node. 3. The list displays all volumes associated with the node. To filter by storage pool, use the **Storage Pool** dropdown at the top of the list. ![Filter by Storage Pool](./images/01-filter-by-storage-pool.png) # Create Volume ## Web Interface Currently, volumes can only be created when creating a new instance, or through the Volumes API. This will change in the future. # Resize Volume > [!WARNING] > Resizing a volume while it is in use may lead to data corruption or loss. Proceed with caution. > [!NOTE] > After resizing, you may need to resize the filesystem on the volume to utilize the new size, which must be done inside the instance. ## Web Interface Currently, volumes can only be resized through the **Resources** tab in the instance details page. This will change in the future. 1. Select the instance in the resource tree and view the page on the right. Click on the **Resources** tab in the right pane. ![Instance Resources Page](./images/04-instance-resources.png) 2. Click on the resize icon next to the volume you want to resize. ![Resize Volume Button](./images/05-resize-volume-button.png) 3. In the resize form, enter the delta size in GiB. This value will be added to the current size of the volume. ![Resize Volume Form](./images/06-resize-volume-form.png) 4. If the instance is running, the **Live Resize** option will be checked. This allows the volume to be resized without stopping the instance. ![Live Resize Option](./images/07-live-resize-option.png) 5. 
Click **Resize Volume** to apply the changes. The volume will be resized according to the specified delta size. > [!TIP] > To perform a *cold resize*, you can stop the instance first, then follow the same steps as above without selecting the **Live Resize** option. This will ensure that the volume is resized safely without any risk of data corruption. # Attach Volume to Instance To attach a volume to an instance, refer to the [Attach Device](../../instance.md) section. # Detach Volume from Instance To detach a volume from an instance, refer to the [Detach Device](../../instance.md) section. # Destroy Volume > [!NOTE] > A volume cannot be destroyed if it is attached to an instance. You must first detach the volume from the instance before destroying it. ## Web Interface 1. Select the node in the resource tree and view the page on the right. Click on the **Storage** tab in the right pane. ![Storage Page](./images/00-node-storage-volumes.png) 2. Click on the **Volumes** tab to view the list of volumes associated with the node. 3. Click on the X icon next to the volume you want to destroy. ![Destroy Volume Button](./images/02-destroy-volume-button.png) 4. In the confirmation dialog, type "DESTROY" and click **Confirm** to confirm the destruction of the volume. ![Destroy Volume Confirmation](./images/03-destroy-volume-confirmation.png) 5. The volume will be destroyed, and it will no longer be available in the list of volumes. > [!NOTE] > If the volume was attached to an instance, you may need to restart the instance to ensure it no longer references the destroyed volume. # Instance Management # Monitoring & Metrics # Repository Mirrors for Airgapped Environments In standard deployments, servers connect to the [Pextra repository](https://repo.pextra.cloud) to download updates. However, airgapped environments require special consideration for package management, as they lack direct internet access. 
This tutorial provides a guide to managing an offline repository mirror in airgapped environments using `aptly`. This approach ensures Pextra CloudEnvironment systems remain updatable and secure even in the most restrictive network environments. ## Before You Begin **Hardware Requirements:** - Mirror server (online system with internet access) with sufficient storage space - The Pextra repository is approximately 100MiB in size per architecture (`amd64` and `arm64`) - USB drive or removable media (for full airgap transfers only) - Network connectivity between mirror and offline servers (for restricted airgap only) **Software Requirements:** - Debian-based system with administrative privileges - `curl`, `tar`, and standard Unix utilities - Administrative (`sudo`) access **Estimated Setup Time:** - 30 minutes for restricted airgap - 1 hour for full airgap ## Understanding Airgap Types To set up Pextra CloudEnvironment® servers in an airgapped environment, it is essential to understand the two different types of airgaps: ### Restricted/One-way Airgap The offline server cannot directly access public internet but can communicate with an outside server through a controlled endpoint. This allows for automated synchronization while maintaining security boundaries. ### Full Airgap Complete network isolation with no connectivity to external servers. Package updates require manual media transfer (e.g. with USB drives, portable storage). > [!NOTE] > A full airgap is the most secure option, but it requires **a considerable amount** of manual work to keep the offline servers updated. A restricted airgap allows for more automation and is recommended if possible. ## Setup Instructions 1. Set up the mirror server: - [Mirror Setup](./mirror-setup.md) 2. 
Follow the relevant setup instructions based on your airgap type: - [Restricted Airgap Setup](./restricted-airgap.md) - [Full Airgap Setup](./full-airgap.md) # Mirror Setup This guide will help you set up a local mirror of the Pextra repository using `aptly`. This is the first step in creating an airgapped setup for Pextra CloudEnvironment®. ## Install `aptly` Run the following command on your online mirror server: ```bash apt install aptly ``` You may need to run `apt update` to ensure the package list is up to date before installing. ## Import Pextra GPG Key Download and import the Pextra repository GPG key: ```bash # Download the GPG key for the repository (signed by the master key) curl -fSsLo /usr/share/keyrings/pextra-ce.gpg https://repo.pextra.cloud/debian/cloudenvironment/key.gpg # Import the GPG key into trustedkeys.gpg (to be used by aptly) gpg --no-default-keyring --keyring trustedkeys.gpg --import /usr/share/keyrings/pextra-ce.gpg ``` ## Configure the Mirror To mirror only one architecture (recommended), use the following command: ```bash aptly -architectures="<arch>" mirror create pextra-ce-bookworm https://repo.pextra.cloud/debian/cloudenvironment bookworm common meta ``` where `<arch>` can be `amd64` or `arm64`, depending on your server's architecture. To mirror all architectures, omit the `-architectures` option: ```bash aptly mirror create pextra-ce-bookworm https://repo.pextra.cloud/debian/cloudenvironment bookworm common meta ``` ## Run Initial Sync At this point, you have created a mirror configuration but it is still empty. To perform the initial synchronization of the mirror, run: ```bash aptly mirror update pextra-ce-bookworm ``` This command may take some time, depending on your internet connection. It will download all packages and metadata from the Pextra repository. 
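If you are unsure which value to pass to the `-architectures` option above, on Debian systems `dpkg --print-architecture` prints it directly; a portable sketch that maps the kernel's machine name to the repository's architecture names:

```bash
# Map `uname -m` output to the Debian architecture names used by the
# Pextra repository (amd64 and arm64 are the two published architectures).
case "$(uname -m)" in
  x86_64)        echo amd64 ;;
  aarch64|arm64) echo arm64 ;;
  *)             echo "unsupported: $(uname -m)" ;;
esac
```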
To verify the synchronization (after the `update` command completes), you can check the status of the mirror: ```bash aptly mirror show -with-packages pextra-ce-bookworm ``` Sample output: ``` Name: pextra-ce-bookworm Archive Root URL: https://repo.pextra.cloud/debian/cloudenvironment/ Distribution: bookworm Components: common, meta Architectures: amd64, arm64 Download Sources: no Download .udebs: no Last update: 2025-08-12 21:28:38 UTC Number of packages: 17 Information from release file: Architectures: amd64 arm64 Codename: bookworm Components: common meta Date: Tue, 12 Aug 2025 19:15:03 UTC Description: Pextra Inc. Debian repository Label: Pextra Inc. Origin: Pextra Inc. Suite: stable Version: 1.0 Packages: ... ``` If the package count and release information match the Pextra repository, the synchronization was successful. ## Prepare the Mirror for Publishing To make the mirrored repository available for use, you need to take a snapshot. Taking a snapshot allows you to create a versioned point-in-time copy of the mirror, which can be useful for rollback or auditing purposes: ```bash # Create a snapshot of the mirror (e.g. pextra-ce-bookworm-20250812) aptly snapshot create pextra-ce-bookworm-$(date +%Y%m%d) from mirror pextra-ce-bookworm ``` Before publishing, [a GPG key must be generated](https://docs.github.com/en/authentication/managing-commit-signature-verification/generating-a-new-gpg-key) to sign the mirror, if you haven't done so already; refer to the linked guide for instructions. > [!WARNING] > Make sure to keep your GPG key secure, as it will be used to cryptographically sign the repository metadata. If you lose access to your GPG key, you will need to create a new mirror and reconfigure your offline servers. To retrieve the fingerprint of your GPG key, run: ```bash gpg --list-secret-keys --keyid-format LONG ``` This will display your GPG keys, including their fingerprints. Copy the fingerprint (e.g. 
`F6C824A95B510F49ED4B0D640B4F9057C7DBDC41`) for use in the next step. ## Publish the Mirror To publish the mirror, run the following command, replacing `<fingerprint>` with your GPG key fingerprint: ```bash aptly publish snapshot -gpg-key=<fingerprint> pextra-ce-bookworm-$(date +%Y%m%d) ``` Sample output: ``` Loading packages... Generating metadata files and linking package files... Finalizing metadata files... Signing file 'Release' with gpg, please enter your passphrase when prompted: Clearsigning file 'Release' with gpg, please enter your passphrase when prompted: Snapshot pextra-ce-bookworm-20250812 has been successfully published. Please setup your webserver to serve directory '/home/user/.aptly/public' with autoindexing. Now you can add following line to apt sources: deb http://your-server/ bookworm main Don't forget to add your GPG key to apt with apt-key. You can also use `aptly serve` to publish your repositories over HTTP quickly. ``` ## Mirror Maintenance For additional documentation on how to manage your repository mirror, including updating and publishing snapshots, refer to the [Aptly documentation](https://www.aptly.info/doc/aptly/mirror/). ### Updating the Mirror To keep your mirror up to date, you can set up a cron job to run an update script at a regular interval (e.g. daily). Note that `aptly publish snapshot` fails once the distribution is already published, so the script uses `aptly publish switch` to point the published repository at each new snapshot: ```bash cat << 'EOF' > /usr/local/bin/update-pextra-mirror.sh #!/bin/bash set -e SNAPSHOT=pextra-ce-bookworm-$(date +%Y%m%d) aptly mirror update pextra-ce-bookworm aptly snapshot create "$SNAPSHOT" from mirror pextra-ce-bookworm aptly publish switch -gpg-key=<fingerprint> bookworm "$SNAPSHOT" EOF chmod +x /usr/local/bin/update-pextra-mirror.sh # Add a cron job to run this script daily at 2 AM # (note: `crontab -` replaces any existing crontab entries for this user) echo "0 2 * * * /usr/local/bin/update-pextra-mirror.sh" | crontab - ``` ## Next Steps Export the public GPG key used to sign the mirror so that it can be imported on your offline servers: ```bash gpg --armor --export <fingerprint> > /usr/share/keyrings/pextra-mirror-key.asc ``` Keep a copy of this key file, as it will be needed to configure your offline servers to use the mirror. 
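If you script the publishing steps, the long key ID (the last 16 hex digits of the fingerprint, which GPG also accepts wherever a fingerprint is expected) can be extracted from the `gpg --list-secret-keys --keyid-format LONG` listing. A sketch, run here against an illustrative sample line rather than live `gpg` output:

```bash
# `sample` stands in for one `sec` line of real gpg output; on your mirror
# server, pipe the gpg command's output through the same sed expression.
sample='sec   rsa4096/0B4F9057C7DBDC41 2025-08-12 [SC]'
printf '%s\n' "$sample" | sed -n 's|.*/\([0-9A-F]\{16\}\).*|\1|p'   # prints 0B4F9057C7DBDC41
```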
To use the mirror on your offline Pextra CloudEnvironment® servers, follow the relevant setup instructions based on your airgap type: - [Restricted Airgap Setup](./restricted-airgap.md) - [Full Airgap Setup](./full-airgap.md) # Restricted Airgap Setup After setting up your Pextra repository mirror, you can configure your offline servers to use this mirror in a restricted airgap environment. This guide will walk you through the steps to set up your offline servers to access the mirrored repository. First, transfer the GPG key file exported during [Mirror Setup](./mirror-setup.md) to your offline servers using your available transfer method. ## Configure Repository on Offline Servers On each offline Pextra CloudEnvironment® server: ```bash # Backup original repository configuration mv /etc/apt/sources.list.d/pextra-ce.list /etc/apt/sources.list.d/pextra-ce.list.backup # Add your mirror server's GPG key to trusted keys cp /path/to/pextra-mirror-key.asc /usr/share/keyrings/pextra-mirror-key.asc # Update repository source to point to your mirror echo "deb [signed-by=/usr/share/keyrings/pextra-mirror-key.asc] http://your-mirror-server/ bookworm common meta" | tee /etc/apt/sources.list.d/pextra-ce.list # Update package cache apt update ``` ### Verify Configuration Test that the configuration is working correctly: ```bash # Verify package availability apt-cache policy pce-common ``` Sample output: ``` pce-common: Installed: Candidate: Version table: *** 500 500 http://your-mirror-server bookworm/meta amd64 Packages 100 /var/lib/dpkg/status ``` A successful output indicates that your offline server can access the mirrored repository and retrieve package information. The setup is now complete, and your offline Pextra CloudEnvironment® servers are configured to use the repository mirror in a restricted airgap environment. # Full Airgap Setup In a full airgap environment where no network connectivity exists between your mirror server and offline servers, you'll need to transfer packages and configuration files using physical media. 
This guide covers the complete process of setting up Pextra CloudEnvironment® in a completely isolated environment. The guide is coming soon, but here are the high-level steps: 1. Archive the Pextra repository on your mirror server (with `tar`). 2. Transfer the archive to removable media (USB drive, external HDD, etc.). 3. Move the archive to your airgapped environment. 4. Extract the archive on your airgapped server. 5. Configure the repository on your fully airgapped Pextra CloudEnvironment servers to use the local file-based repository. # Troubleshooting # Known Issues # Logs & Diagnostics # Reporting Issues This section provides guidance on how to effectively report issues, bugs, and feature requests related to the software. It includes instructions on collecting logs, creating support tickets, and gathering diagnostic information to assist in troubleshooting and resolution. # Collecting Logs # Seeking Help This section provides guidance on how to seek help through the appropriate channels, including the official helpdesk and community forum. It also includes tips for effective support tickets to ensure a smooth resolution process. # Official Helpdesk > [!IMPORTANT] > An active support subscription is required to create a support ticket through our official helpdesk. If you would like to purchase a support subscription, please visit our [customer portal](https://portal.pextra.cloud). > [!NOTE] > If you do not have an active support subscription, you can still seek help through our [community forum](https://forum.pextra.cloud). See the [Community Forum](./community-forum.md) section for more details. If you have an active support subscription, you can create a support ticket through our official helpdesk: 1. Navigate to [our helpdesk](https://helpdesk.pextra.cloud). 2. Log in with your credentials. 3. Click on "Create Ticket". 4. 
Complete the ticket submission form with the following information: - A clear, descriptive title - Detailed description of the issue - Steps to reproduce the issue - Screenshots or error messages (if applicable) - System information and logs (see [Collecting Logs](../collecting-logs.md)) - Any troubleshooting steps you've already attempted 5. Select the appropriate priority level based on the impact to your operations. 6. Submit the ticket. Our support team will respond according to the Service Level Agreement (SLA) associated with your support subscription level. ## Support Ticket Lifecycle Each support ticket goes through a lifecycle, which includes the following stages: 1. **Submission**: You create and submit a ticket. 2. **Acknowledgment**: Support team acknowledges receipt. 3. **Investigation**: Support team investigates the issue. 4. **Resolution or Escalation**: Issue is either resolved or escalated to engineering. 5. **Verification**: You verify the solution works. 6. **Closure**: Ticket is closed once you confirm the issue is resolved. > [!TIP] > Check your email regularly for updates on your support ticket. The support team may request additional information or provide solutions that require your input. # Community Forum > [!TIP] > Pextra has a dedicated support team available to assist you with any issues you may encounter. If you have an active support subscription, we recommend using the official helpdesk for the fastest response times. If you do not have an active support subscription or prefer community-based assistance, you can post your issue on our community forum: 1. Visit [our community forum](https://forum.pextra.cloud). 2. Create an account or log in if you already have one. 3. Navigate to the appropriate section (e.g., "Installation Issues," "Configuration Help," etc.). 4. Click on "New Topic" to create a new post. 5. Provide a clear title and detailed description of your issue. 6. 
Include relevant system information, logs, and any troubleshooting steps you've already taken. 7. Submit your post. Our community members and Pextra staff monitor the forums regularly and will respond as soon as possible. While this option does not have a formal SLA, the community is active and helpful. # Tips for Support Tickets To ensure that your support ticket is effective and leads to a quick resolution, follow these tips: 1. **Be specific**: Provide precise details about what you were doing when the issue occurred. 2. **Include context**: Mention your environment details, such as hardware specifications, current version, and any recent changes. 3. **Attach logs**: Always include relevant logs (see [Gathering Diagnostic Information](../diagnostic-information.md)). 4. **Document steps to reproduce**: List the exact steps someone would need to follow to encounter the same issue. 5. **Describe expected vs. actual behavior**: Explain what you expected to happen and what actually happened. 6. **Add screenshots**: Visual evidence can help the support team understand the issue more quickly. # Gathering Diagnostic Information # Feedback & Contributions In this section, we encourage users to provide feedback on their experience with the product. This includes suggestions for new features, improvements to existing features, and any other comments or concerns they may have. **We value your feedback and take it seriously. It helps us understand what is working well and what needs improvement.** As an emerging solution, we also appreciate any contributions to our documentation, whether it's fixing typos, adding examples, or suggesting new topics. If you have a suggestion or contribution, please refer to the [Contributing](./contributing/index.md) section for guidelines on how to submit your feedback or contribution. # Feature Requests We continuously improve Pextra CloudEnvironment® based on user feedback and suggestions. 
If you have ideas for new features or enhancements that would improve your experience, we encourage you to share them with us.

## Submitting Feature Requests

The primary channel for submitting feature requests is through our community forums:

1. Visit [our community forum](https://forum.pextra.cloud).
2. Create an account or log in if you already have one.
3. Navigate to the "Feature Requests" section.
4. Click on "New Topic" to create a new post.
5. Provide a clear, descriptive title for your feature request.
6. In the description, include:
   - A detailed explanation of the requested feature
   - The problem it solves or the value it provides
   - Your use case and why this feature would be beneficial
   - Any relevant examples, screenshots, or mockups (if applicable)
7. Submit your feature request.

## What Happens After Submission

After submitting your feature request:

1. **Community Discussion**: Other users may comment on your request, adding their perspectives or use cases.
2. **Feedback Collection**: Pextra team members monitor the forums and gather feature requests.
3. **Evaluation**: Our team evaluates requests based on factors such as:
   - Alignment with product vision
   - Number of users who would benefit
   - Technical feasibility
   - Implementation complexity
4. **Prioritization**: Approved features are prioritized in our development roadmap.
5. **Implementation**: When a feature is scheduled for development, we may reach out for additional information.

# General Feedback

We value your opinions about Pextra CloudEnvironment® and are committed to continuously improving our product based on user feedback. Your insights help us understand what's working well and where we can make enhancements to better serve your needs.

## Providing General Feedback

The most effective way to share your general feedback is through our community forums:

1. Visit [our community forum](https://forum.pextra.cloud).
2. Create an account or log in if you already have one.
3. Navigate to the appropriate feedback section.
4. Click on "New Topic" to create a new post.
5. Provide a descriptive title that summarizes your feedback.
6. In the description, include:
   - Your overall experience with the product
   - Specific aspects you find particularly useful or challenging
   - Any suggestions for improvements
   - Context about your use case and environment
7. Submit your feedback.

## Examples of Feedback

We encourage various types of feedback, including:

- Comments on the user interface and experience
- Suggestions for improving our guides and documentation
- Reports about system performance in your environment
- Feedback on how well Pextra CloudEnvironment® works with other tools
- Overall thoughts about the product and its value

## How We Use Your Feedback

When you share your feedback:

1. Other users may respond with their own experiences or suggestions.
2. Your feedback directly influences our product roadmap and development priorities.
3. We use your insights to make incremental improvements to the product.

We appreciate you taking the time to share your thoughts with us. Your feedback is essential to helping us build a better product for all users.

# Contributing

We welcome contributions to our documentation. Whether you want to fix a typo, add examples, or suggest new topics, your contributions are valuable to us. Below are the guidelines for contributing to our documentation.

## How to Contribute

### One-Click Contribution

1. If you find a typo or want to suggest an improvement, click the notepad with a pencil icon at the top right of the page:

   ![Edit this page](./images/00-edit.png)

   - This will take you to the GitHub page for that file.
   - If you are logged in to GitHub, you can edit the file directly in your browser. If you are not logged in, you will be prompted to log in or create an account.
2. Make your changes in the online editor.
3. Click "Propose changes" to create a pull request (PR) with your changes.
   - Commit your changes with a clear and descriptive commit message. We use [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/) for commit messages, so please follow that format:

     ![One-click pull request](./images/01-edit-pr.png)
4. Wait for feedback from the maintainers. They may request changes or approve your PR. Once approved, your changes will be merged.
5. Celebrate your contribution! 🎉

### Full Development Setup

1. [Create a GitHub account](https://github.com/signup) if you don't have one.
2. Fork the repository by clicking the "Fork" button at the top right of the page:

   ![Fork the repository](./images/02-fork.png)
3. Clone your forked repository to your local machine:

   ```bash
   git clone https://github.com/<your-username>/documentation.git
   ```
4. Create a new branch for your changes:

   ```bash
   git switch -c <type>/<short-description>
   ```
5. Set up your development environment:
   - Install the necessary dependencies.
   - Follow the instructions in the repository's README for setting up your local environment.
6. Make your changes to the documentation files.
   - Use Markdown for formatting.
   - Follow the existing style and structure of the documentation.
7. Commit your changes with a clear and descriptive commit message. We use [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/) for commit messages, so please follow that format:

   ```bash
   git add .
   git commit -m "fix: correct typo in installation guide"
   ```
8. Push your changes to your forked repository:

   ```bash
   git push origin <type>/<short-description>
   ```
9. Create a pull request (PR) to the original repository:

   ![Create a pull request](./images/03-pr.png)

   - Navigate to [the original documentation repository](https://github.com/PextraCloud/documentation).
   - Click on the "Pull Requests" tab.
   - Click on "New Pull Request."
   - Select your branch and click "Create Pull Request."
   - Provide a clear description of your changes and why they are needed.
10. Wait for feedback from the maintainers. They may request changes or approve your PR. Once approved, your changes will be merged.
11. Celebrate your contribution! 🎉
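Conventional Commits messages follow the shape `type(scope): description`. As a minimal sketch of how you might sanity-check a message before committing — the grep pattern and the list of allowed types below are common defaults, not Pextra's authoritative set, so check the repository's README for the actual conventions:

```shell
# Illustrative check of a commit message against the common
# Conventional Commits shape "type(scope): description".
# The list of types is a typical default, not an official Pextra list.
msg="fix: correct typo in installation guide"

if echo "$msg" | grep -Eq '^(feat|fix|docs|style|refactor|perf|test|chore)(\([a-z0-9-]+\))?!?: .+$'; then
  echo "message looks conventional"
else
  echo "message does not match the expected format"
fi
```

A hook like this (e.g. in `.git/hooks/commit-msg`) can catch formatting mistakes locally before the maintainers see the PR, but it is optional — the important part is simply following the `type: description` format in step 7.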