Continuous Integration (CI) FAQs
Can I use Harness CI for mobile app development?
Yes. Harness CI offers many options for mobile app development.
I have a macOS build, do I have to use Homebrew as the installer?
No. Your build infrastructure can be configured to use whichever tools you like. For example, Harness Cloud build infrastructure includes pre-installed versions of Xcode and other tools, and you can install other tools or versions of tools that you prefer to use. For more information, go to the CI macOS and iOS development guide.
Build infrastructure
What is build infrastructure and why do I need it for Harness CI?
A build stage's infrastructure definition, the build infrastructure, defines "where" your stage runs. It can be a Kubernetes cluster, a VM, or even your own local machine. While individual steps can run in their own containers, your stage itself requires a build infrastructure to define a common workspace for the entire stage. For more information about build infrastructure and CI pipeline components, go to Which build infrastructure is right for me.
What kind of build infrastructure can I use? Which operating systems are supported?
For supported operating systems, architectures, and cloud providers, go to Which build infrastructure is right for me.
Can I use multiple build infrastructures in one pipeline?
Yes, each stage can have a different build infrastructure. Additionally, depending on your stage's build infrastructure, you can also run individual steps on containers rather than the host. This flexibility allows you to choose the most suitable infrastructure for each part of your CI pipeline.
Local runner build infrastructure
Can I run builds locally? Can I run builds directly on my computer?
Yes. For instructions, go to Set up a local runner build infrastructure.
How do I check the runner status for a local runner build infrastructure?
To confirm that the runner is running, send a cURL request such as `curl http://localhost:3000/healthz`.

If the runner is running, you should get a valid response, such as:
```json
{
  "version": "0.1.2",
  "docker_installed": true,
  "git_installed": true,
  "lite_engine_log": "no log file",
  "ok": true
}
```
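If you script this check, you can parse the `ok` field instead of eyeballing the JSON. Here's a minimal sketch; the endpoint and port are the defaults shown above, and the `runner_status` helper is illustrative, not part of the runner:

```shell
#!/bin/sh
# Sketch: report a one-word runner status from the healthz response.
# Assumes the runner listens on localhost:3000 (the default shown above).

runner_status() {
  # Reads healthz JSON on stdin; prints "healthy" if "ok": true, else "unhealthy".
  if grep -q '"ok":[[:space:]]*true'; then
    echo "healthy"
  else
    echo "unhealthy"
  fi
}

# Usage (requires a running runner):
#   curl -s http://localhost:3000/healthz | runner_status
```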
How do I check the delegate status for a local runner build infrastructure?
The delegate should connect to your instance after you finish the installation workflow above. If the delegate doesn't connect after a few minutes, run the following commands to check the status:

```
docker ps
docker logs --follow <docker-delegate-container-id>
```

The container ID should be the one for the container with the image name `harness/delegate:latest`.

Successful setup is indicated by a message such as `Finished downloading delegate jar version 1.0.77221-000 in 168 seconds`.
Runner can't find an available, non-overlapping IPv4 address pool
The following runner error can occur during stage setup (the Initialize step in build logs):

```
Could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network.
```

This error means the number of Docker networks has exceeded the limit. To resolve this, you need to clean up unused Docker networks. Run `docker network ls` to get a list of existing networks, and then remove unused networks with `docker network rm <network>` or `docker network prune`.
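The cleanup above can be scripted. This sketch filters out Docker's built-in networks (`bridge`, `host`, `none`), which must not be removed; the removal command is left in a comment so you can review the list first:

```shell
#!/bin/sh
# Sketch: identify user-created Docker networks that are candidates for removal.
# Docker's default networks (bridge, host, none) must be kept, so filter them out.

nondefault_networks() {
  # Reads network names on stdin, one per line; prints only non-default names.
  grep -v -E '^(bridge|host|none)$'
}

# Usage (requires Docker):
#   docker network ls --format '{{.Name}}' | nondefault_networks
#   docker network ls --format '{{.Name}}' | nondefault_networks | xargs docker network rm
```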
Docker daemon fails with invalid working directory path on Windows local runner build infrastructure
The following error can occur in Windows local runner build infrastructures:

```
Error response from daemon: the working directory 'C:\harness-DIRECTORY_ID' is invalid, it needs to be an absolute path
```

This error indicates there may be a problem with the Docker installation on the host machine.

1. Run the following command (or a similar command) to check if the same error occurs:

   ```
   docker run -w C:\blah -it -d mcr.microsoft.com/windows/servercore:ltsc2022
   ```

2. If you get the `working directory is invalid` error again, uninstall Docker and follow the instructions in the Windows documentation to Prepare Windows OS containers for Windows Server.
3. Restart the host machine.
How do I check if the Docker daemon is running in a local runner build infrastructure?
To check if the Docker daemon is running, use the `docker info` command. An error response indicates the daemon is not running. For more information, go to the Docker documentation on Troubleshooting the Docker daemon.
Runner process quits after terminating SSH connection for local runner build infrastructure
If you launch the Harness Docker Runner binary within an SSH session, the runner process can quit when you terminate the SSH session.
To avoid this with macOS runners, use this command when you start the runner binary:

```
./harness-docker-runner-darwin-amd64 server >log.txt 2>&1 &
disown
```

For Linux runners, you can use a tool such as `nohup` when you start the runner, for example:

```
nohup ./harness-docker-runner-linux-amd64 server >log.txt 2>&1 &
```
Where does the harness-docker-runner create the hostpath volume directories on macOS?
The harness-docker-runner creates the host volumes under `/tmp/harness-*` on macOS platforms.
Why do I get a "failed to create directory" error when trying to run a build on local build infra?
```
failed to create directory for host volume path: /addon: mkdir /addon: read-only file system
```
This error could occur when there's a mismatch between the OS type of the local build infrastructure and the OS type selected in the pipeline's infrastructure settings. For example, if your local runner is on a macOS platform, but the pipeline's infrastructure is set to Linux, this error can occur.
Self-managed VM build infrastructure
Can I use the same build VM for multiple CI stages?
No. The build VM terminates at the end of the stage and a new VM is used for the next stage.
Why are build VMs running when there are no active builds?
With self-managed VM build infrastructure, the `pool` value in your `pool.yml` specifies the number of "warm" VMs. These VMs are kept in a ready state so they can pick up build requests immediately.

If there are no warm VMs available, the runner can launch additional VMs up to the `limit` in your `pool.yml`.

If you don't want any VMs to sit in a ready state, set your `pool` to `0`. Note that having no ready VMs can increase build time.

For AWS VMs, you can set `hibernate` in your `pool.yml` to hibernate warm VMs when there are no active builds. For more information, go to Configure the Drone pool on the AWS VM.
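For reference, here's a sketch of an AWS `pool.yml` with these settings. The AMI ID, region, and instance size are placeholders, and the exact schema can vary by runner version, so check your runner's documentation:

```yaml
version: "1"
instances:
  - name: linux-pool
    type: amazon
    pool: 1        # one warm VM kept ready to pick up builds immediately
    limit: 4       # the runner can launch up to 4 VMs in total
    platform:
      os: linux
      arch: amd64
    spec:
      account:
        region: us-east-2          # placeholder region
      ami: ami-0123456789abcdef0   # placeholder AMI ID
      size: t3.medium              # placeholder instance size
      hibernate: true              # AWS only: hibernate warm VMs between builds
```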
Do I need to install Docker on the VM that runs the Harness Delegate and Runner?
Yes. Docker is required for self-managed VM build infrastructure.
AWS build VM creation fails with no default VPC
When you run the pipeline, if VM creation in the runner fails with the error `no default VPC`, then you need to set `subnet_id` in `pool.yml`.
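For example, in the instance `spec` of your `pool.yml`; the subnet and security group IDs below are placeholders, and the exact field names can vary by runner version:

```yaml
    spec:
      account:
        region: us-east-2                     # placeholder region
      network:
        subnet_id: subnet-0ab12cd34ef567890   # placeholder: a subnet in your VPC
        security_groups:
          - sg-0123456789abcdef0              # placeholder security group
```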
AWS VM builds stuck at the initialize step on health check
If your CI build gets stuck at the initialize step on the health check for connectivity with lite engine, either lite engine is not running on your build VMs or there is a connectivity issue between the runner and lite engine.
- Verify that lite-engine is running on your build VMs.
  - SSH/RDP into a VM from your VM pool that is in a running state.
  - Check whether the lite-engine process is running on the VM.
  - Check the cloud init output logs to debug issues related to startup of the lite-engine process. The lite-engine process starts at VM startup through a cloud init script.
- If lite-engine is running, verify that the runner can communicate with lite-engine from the delegate VM.
  - Run `nc -vz <build-vm-ip> 9079` from the runner.
  - If the status is not successful, make sure the security group settings in `runner/pool.yml` are correct, and make sure your security group setup in AWS allows the runner to communicate with the build VMs.
  - Make sure there are no firewall or anti-malware restrictions on your AMI that are interfering with the cloud init script's ability to download necessary dependencies. For details about these dependencies, go to Set up an AWS VM Build Infrastructure - Start the runner.
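If `nc` isn't installed on the runner VM, the same port check can be sketched with bash's built-in `/dev/tcp` device. The `check_lite_engine` helper below is illustrative; 9079 is the lite-engine port:

```shell
#!/bin/bash
# Sketch: check whether lite-engine is reachable on a build VM from the runner.
# Uses bash's /dev/tcp as a fallback when nc isn't available.

check_lite_engine() {
  ip="$1"
  if timeout 5 bash -c "echo > /dev/tcp/${ip}/9079" 2>/dev/null; then
    echo "lite-engine reachable on ${ip}:9079"
  else
    echo "cannot reach lite-engine on ${ip}:9079"
  fi
}

# Usage: check_lite_engine <build-vm-ip>
```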
AWS VM delegate connected but builds fail
If the delegate is connected but your AWS VM builds are failing, check the following:
- Make sure the AMIs specified in `pool.yml` are still available.
  - Amazon reprovisions their AMIs every two months.
  - For a Windows pool, search for an AMI called `Microsoft Windows Server 2022 Base with Containers` and update `ami` in `pool.yml`.
- Confirm your security group setup and security group settings in `runner/pool.yml`.
Use internal or custom AMIs with self-managed AWS VM build infrastructure
If you are using an internal or custom AMI, make sure it has Docker installed.
Additionally, make sure there are no firewall or anti-malware restrictions interfering with initialization, as described in CI builds stuck at the initialize step on health check.
Where can I find logs for self-managed AWS VM lite engine and cloud init output?
- Linux
  - Lite engine logs: `/var/log/lite-engine.log`
  - Cloud init output logs: `/var/log/cloud-init-output.log`
- Windows
  - Lite engine logs: `C:\Program Files\lite-engine\log.out`
  - Cloud init output logs: `C:\ProgramData\Amazon\EC2-Windows\Launch\Log\UserdataExecution.log`
What does it mean if delegate.task throws a "ConnectException failed to connect" error?
Before submitting a task to a delegate, Harness runs a capability check to confirm that the delegate is connected to the runner. If the delegate can't connect, then the capability check fails and that delegate is ignored for the task. This can cause `failed to connect` errors on delegate task assignment, such as:

```
INFO io.harness.delegate.task.citasks.vm.helper.HttpHelper - [Retrying failed to check pool owner; attempt: 18 [taskId=1234-DEL]
java.net.ConnectException: Failed to connect to /127.0.0.1:3000
```
To debug this issue, investigate delegate connectivity in your VM build infrastructure configuration:
- Verify connectivity for AWS VM build infra
- Verify connectivity for Microsoft Azure VM build infra
- Verify connectivity for GCP VM build infra
- Verify connectivity for Anka macOS VM build infra
Harness Cloud
What is Harness Cloud?
Harness Cloud lets you run builds on Harness-managed runners that are preconfigured with tools, packages, and settings commonly used in CI pipelines. It is one of several build infrastructure options offered by Harness. For more information, go to Which build infrastructure is right for me.
How do I use Harness Cloud build infrastructure?
Configuring your pipeline to use Harness Cloud takes just a few minutes. Make sure you meet the requirements for connectors and secrets, then follow the quick steps to use Harness Cloud.
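In pipeline YAML, selecting Harness Cloud is a matter of setting the stage's platform and runtime. A minimal sketch; the stage name, identifier, and platform values are illustrative:

```yaml
- stage:
    name: build
    identifier: build
    type: CI
    spec:
      cloneCodebase: true
      platform:
        os: Linux        # or MacOS / Windows
        arch: Amd64
      runtime:
        type: Cloud      # selects Harness Cloud build infrastructure
        spec: {}
      execution:
        steps: []        # your build steps go here
```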
Account verification error with Harness Cloud on Free plan
Harness has recently been the target of several crypto-mining attacks that abuse our Harness-managed build infrastructure (Harness Cloud), which is available to accounts on the Free tier of Harness CI. To protect our infrastructure, Harness now limits the use of Harness Cloud build infrastructure to business domains and blocks general-use domains, like Gmail, Hotmail, Yahoo, and other unverified domains.
To address these issues, you can do one of the following:
- Use the local runner build infrastructure option, or upgrade to a paid plan to use the self-managed VM or Kubernetes cluster build infrastructure options. There are no limitations on builds using your own infrastructure.
- Create a Harness account with your work email and not a generic email address, like a Gmail address.
What is the Harness Cloud build credit limit for the Free plan?
The Free plan allows 2,000 build minutes per month. For more information, go to Harness Cloud billing and build credits.
Can I use Xcode for a macOS build with Harness Cloud?
Yes. Harness Cloud macOS runners include several versions of Xcode as well as Homebrew. For details, go to the Harness Cloud image specifications. You can also install additional tools at runtime.
Can I use my own secrets manager with Harness Cloud build infrastructure?
No. To use Harness Cloud build infrastructure, you must use the built-in Harness secrets manager.
Connector errors with Harness Cloud build infrastructure
To use Harness Cloud build infrastructure, all connectors used in the stage must connect through the Harness Platform. This means that:
- GCP connectors can't inherit credentials from the delegate. They must be configured to connect through the Harness Platform.
- Azure connectors can't inherit credentials from the delegate. They must be configured to connect through the Harness Platform.
- AWS connectors can't use IRSA, AssumeRole, or delegate connectivity mode. They must connect through the Harness Platform with access key authentication.
For more information, go to Use Harness Cloud build infrastructure - Requirements for connectors and secrets.
To change the connector's connectivity mode:
- Go to the Connectors page at the account, organization, or project scope. For example, to edit account-level connectors, go to Account Settings, select Account Resources, and then select Connectors.
- Select the connector that you want to edit.
- Select Edit Details.
- Select Continue until you reach Select Connectivity Mode.
- Select Change and select Connect through Harness Platform.
- Select Save and Continue and select Finish.
Built-in Harness Docker Connector doesn't work with Harness Cloud build infrastructure
Depending on when your account was created, the built-in Harness Docker Connector (`account.harnessImage`) might be configured to connect through a Harness Delegate instead of the Harness Platform. In this case, attempting to use this connector with Harness Cloud build infrastructure generates the following error:

```
While using hosted infrastructure, all connectors should be configured to go via the Harness platform instead of via the delegate.
Please update the connectors: [harnessImage] to connect via the Harness platform instead.
This can be done by editing the connector and updating the connectivity to go via the Harness platform.
```

To resolve this error, you can either modify the Harness Docker Connector or use another Docker connector that you have already configured to connect through the Harness Platform.
To change the connector's connectivity settings:
- Go to Account Settings and select Account Resources.
- Select Connectors and select the Harness Docker Connector (ID: `harnessImage`).
- Select Edit Details.
- Select Continue until you reach Select Connectivity Mode.
- Select Change and select Connect through Harness Platform.
- Select Save and Continue and select Finish.
Can I change the CPU/memory allocation for steps running on Harness Cloud?
Unlike with other build infrastructures, you can't change the CPU/memory allocation for steps running on Harness Cloud. Step containers running on Harness Cloud build VMs automatically use as much CPU/memory as required, up to the available resource limit of the build VM.
Does gsutil work with Harness Cloud?
No, gsutil is deprecated. You should use gcloud-equivalent commands instead, such as `gcloud storage cp` instead of `gsutil cp`.
However, neither gsutil nor gcloud are recommended with Harness Cloud build infrastructure. Harness Cloud sources build VMs from a variety of cloud providers, and it is impossible to predict which specific cloud provider hosts the Harness Cloud VM that your build uses for any single execution. Therefore, avoid using tools (such as gsutil or gcloud) that require a specific cloud provider's environment.
Can't use STO steps with Harness Cloud macOS runners
Currently, STO scan steps aren't compatible with Harness Cloud macOS runners, because Apple's M1 CPU doesn't support nested virtualization. You can use STO scan steps with Harness Cloud Linux and Windows runners.
How do I configure OIDC with GCP WIF for Harness Cloud builds?
Go to Configure OIDC with GCP WIF for Harness Cloud builds.
When I run a build on Harness Cloud, which delegate is used? Do I need to install a delegate to use Harness Cloud?
Harness Cloud builds use a delegate hosted in the Harness Cloud runner. You don't need to install a delegate in your local infrastructure to use Harness Cloud.
Can I use Harness Cloud to run CD steps/stages?
No. Currently, you can't use Harness Cloud build infrastructure to run CD steps or stages; Harness Cloud is specific to Harness CI.
Can I connect to services running in a private corporate network when using Harness Cloud?
Yes. You can use Secure Connect for Harness Cloud.
Kubernetes clusters
What is the difference between a Kubernetes cluster build infrastructure and other build infrastructures?
For a comparison of build infrastructures go to Which build infrastructure is right for me.
For requirements, recommendations, and settings for using a Kubernetes cluster build infrastructure, go to:
- Set up a Kubernetes cluster build infrastructure
- Build and push artifacts and images - Kubernetes cluster build infrastructures require root access
- CI Build stage settings - Infrastructure - Kubernetes tab
Can I run Docker commands on a Kubernetes cluster build infrastructure?
If you want to run Docker commands when using a Kubernetes cluster build infrastructure, Docker-in-Docker (DinD) with privileged mode is required. For instructions, go to Run DinD in a Build stage.
If your cluster doesn't support privileged mode, you must use a different build infrastructure option, such as Harness Cloud, where you can run Docker commands directly on the host without privileged mode. For more information, go to Set up a Kubernetes cluster build infrastructure - Privileged mode is required for Docker-in-Docker.
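As a sketch, DinD typically runs as a privileged Background step that later steps in the stage talk to. The step identifier and name below are illustrative:

```yaml
- step:
    identifier: dind
    name: dind
    type: Background
    spec:
      connectorRef: account.harnessImage
      image: docker:dind
      privileged: true   # required for DinD on Kubernetes cluster build infrastructure
      shell: Sh
```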
Can I use Istio MTLS STRICT mode with Harness CI?
Yes, but you must create a headless service for Istio MTLS STRICT mode.
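A headless Service is one with `clusterIP: None`, so DNS resolves directly to pod IPs rather than a virtual IP. A minimal sketch follows; the name, selector, and port are illustrative, so check the Harness Istio guidance for the exact service your setup needs:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: delegate-headless       # illustrative name
spec:
  clusterIP: None               # headless: no virtual IP; DNS returns pod IPs
  selector:
    app: my-harness-delegate    # illustrative selector
  ports:
    - name: grpc
      port: 8080                # illustrative port
```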