CI/CD for Hardware: Automating RTL Simulations with Git and Ansible

📅 April 2, 2026 ⏱ 12 min read 🏷 FPGA DevOps · Verilator · Ansible · Git

Software engineering transformed its speed-to-market a decade ago by adopting CI/CD — Continuous Integration and Continuous Deployment. Meanwhile, hardware engineering, particularly FPGA and ASIC development, has largely remained trapped in a cycle of manual testing, isolated workstations, and painful, late-stage integration bugs.

The hardware landscape is changing rapidly. With increasingly complex SoCs and the rise of agile hardware development methodologies, automated RTL validation is no longer a luxury — it is a competitive necessity. It is time to bring DevOps to silicon.

This guide walks you through building a complete Hardware CI/CD pipeline combining Git-triggered workflows, Ansible playbooks, and the open-source Verilator simulator — with real, working code examples you can adapt immediately.


1. The Traditional RTL Bottleneck

In a typical hardware workflow, an engineer writes Verilog or VHDL, opens a GUI-based simulator like ModelSim or Vivado, clicks through menus to compile the design, and manually inspects waveforms. This process has three fundamental flaws for modern teams:

  • Fully manual and error-prone — there is no enforcement of a standard test procedure.
  • Resource-intensive — simulations tie up local engineering workstations for hours or days.
  • Regressions run too late — bugs are routinely discovered during tape-out or physical prototyping, when they are catastrophically expensive to fix.

⚠ Industry Cost Fact: A logic bug caught in RTL simulation costs roughly $100 to fix. The same bug found post-tape-out can cost $1M+ and weeks of re-spin delay. Automation is not optional at scale.

2. Shift-Left: Bringing CI/CD to Hardware

Adopting a software-centric pipeline means hardware bugs are caught the moment code is committed, not weeks later. At a high level, the pipeline works like this: an engineer pushes RTL to Git, a CI runner triggers an Ansible playbook, the playbook compiles and simulates the design headlessly, and pass/fail results are reported back to the merge request.

3. Real-World Example: A 4-bit Adder DUT

Let's ground this in a concrete, real-world example. We will use a simple 4-bit ripple-carry adder as our Design Under Test (DUT). This is the kind of combinational logic block found in every ALU — from microcontrollers to custom ASICs.

3a. The Verilog Design (DUT)

rtl/adder4.v
Verilog
// adder4.v — 4-bit ripple-carry adder
// DUT for Hardware CI/CD demonstration
// Target: Lattice iCE40 / Xilinx 7-series

module adder4 (
    input  wire [3:0] a,
    input  wire [3:0] b,
    input  wire       cin,
    output wire [3:0] sum,
    output wire       cout
);
    wire [4:0] result;

    // Extend to 5 bits to capture carry-out naturally
    assign result = {1'b0, a} + {1'b0, b} + cin;
    assign sum    = result[3:0];
    assign cout   = result[4];

endmodule
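
Before writing the testbench, it helps to pin down the expected behavior in a few lines of software. The snippet below is a sketch of a Python golden model for the same adder; the function name adder4_ref is our convention, not part of the RTL:

```python
def adder4_ref(a: int, b: int, cin: int) -> tuple[int, int]:
    """Reference model: 4-bit add with carry-in, returns (sum, cout)."""
    full = (a & 0xF) + (b & 0xF) + (cin & 1)   # 5-bit result, like the RTL
    return full & 0xF, (full >> 4) & 1

# Exhaustive cross-check against plain integer arithmetic (512 cases)
for a in range(16):
    for b in range(16):
        for cin in range(2):
            s, c = adder4_ref(a, b, cin)
            assert s + (c << 4) == a + b + cin
```

The SystemVerilog testbench below computes the same reference inline, using a 5-bit add to capture the carry.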

3b. The Testbench (SystemVerilog)

tb/tb_adder4.sv
SystemVerilog
// tb_adder4.sv — Automated regression testbench
// Headless-compatible: exits with non-zero code on failure
// Compatible with Verilator and Icarus Verilog

module tb_adder4;
    logic [3:0] a, b, sum_expected, sum_got;
    logic       cin, cout_expected, cout_got;
    int         pass_count = 0, fail_count = 0;

    // Instantiate the DUT
    adder4 dut (
        .a   (a),
        .b   (b),
        .cin (cin),
        .sum (sum_got),
        .cout(cout_got)
    );

    // Task: apply stimulus and check output
    task automatic check(
        input [3:0] _a, _b,
        input       _cin
    );
        logic [4:0] full;
        full = {1'b0, _a} + {1'b0, _b} + _cin;
        a = _a; b = _b; cin = _cin;
        #10; // propagation delay

        if (sum_got === full[3:0] && cout_got === full[4]) begin
            pass_count++;
        end else begin
            fail_count++;
            $display("FAIL: a=%0d b=%0d cin=%0b | got sum=%0d cout=%0b | expected sum=%0d cout=%0b",
                      _a, _b, _cin, sum_got, cout_got, full[3:0], full[4]);
        end
    endtask

    initial begin
        $display("=== RTL Regression Suite: adder4 ===");

        // Exhaustive test: all 512 combinations (2^4 * 2^4 * 2)
        for (int i = 0; i < 16; i++)
            for (int j = 0; j < 16; j++)
                for (int k = 0; k < 2; k++)
                    check(i[3:0], j[3:0], k[0]);

        $display("PASSED: %0d  FAILED: %0d", pass_count, fail_count);

        if (fail_count > 0)
            $fatal(1, "Regression FAILED — aborting pipeline");
        else
            $display("All tests PASSED ✓");

        $finish;
    end
endmodule

✅ Key Pattern: The testbench uses $fatal(1, ...) to exit with a non-zero return code on failure. This is critical — it tells your CI runner (GitLab, GitHub Actions, Ansible) that the simulation failed, triggering an automatic pipeline failure and blocking the merge.

4. Automating with Git Hooks

The simplest entry point for hardware CI is a local Git pre-push hook. This runs the simulation automatically every time an engineer tries to push RTL changes, catching regressions before they ever reach the shared repository.

.git/hooks/pre-push
Bash
#!/usr/bin/env bash
# Hardware pre-push hook — blocks push if RTL regression fails
# Install: cp this file to .git/hooks/pre-push && chmod +x .git/hooks/pre-push

set -euo pipefail

echo "[CI] Running RTL regression before push..."

# Compile with Icarus Verilog (free, fast)
iverilog -g2012 -o /tmp/tb_adder4.vvp \
    rtl/adder4.v \
    tb/tb_adder4.sv

# Execute simulation
if vvp /tmp/tb_adder4.vvp; then
    echo "[CI] ✅ All RTL tests passed — push allowed."
else
    echo "[CI] ❌ RTL simulation FAILED — push blocked."
    exit 1
fi
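
One caveat with the hook above: .git/hooks/ is not version-controlled, so every engineer has to install it by hand. A common workaround (a standard Git feature, not specific to this project) is to keep hooks in a tracked directory such as .githooks/ and point core.hooksPath at it. A minimal sketch:

```shell
# Demo in a throwaway repo; in a real project run this once at the repo root
repo=$(mktemp -d)
cd "$repo"
git init -q .

# Keep hooks in a tracked directory so the whole team gets them on clone
mkdir -p .githooks
printf '#!/usr/bin/env bash\necho "[CI] pre-push hook ran"\n' > .githooks/pre-push
chmod +x .githooks/pre-push

# Point Git at the tracked directory instead of .git/hooks/
git config core.hooksPath .githooks
git config core.hooksPath   # prints: .githooks
```

Commit .githooks/pre-push to the repository and document the one-time git config step in the README.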

5. Orchestrating Cloud Simulations with Ansible

For team-scale pipelines — where dozens of engineers push RTL changes daily — a local Git hook is not enough. You need a cloud-based, infrastructure-as-code approach. Ansible is the ideal orchestrator: it provisions compute, configures the EDA environment, runs simulations headlessly, and tears down the infrastructure automatically to control costs.

Here is a production-ready Ansible playbook that handles the simulation lifecycle: it installs the EDA tools, compiles and runs the regression headlessly, and archives the results. (The EC2 variables below are placeholders; in a full cloud setup you would pair this playbook with a provisioning play, e.g. from Ansible's amazon.aws collection, that spins the instance up before simulation and terminates it afterwards.)

ansible/rtl_sim_pipeline.yml
YAML (Ansible Playbook)
# rtl_sim_pipeline.yml
# Triggered by GitLab CI / GitHub Actions on every RTL commit
# Installs EDA tools, runs the Icarus Verilog regression, reports results

---
- name: RTL Simulation Pipeline
  hosts: localhost
  gather_facts: false
  vars:
    project_dir: "/home/runner/hardware-project"
    sim_output_dir: "/tmp/sim_results"
    ec2_instance_type: "c6i.2xlarge"  # 8 vCPU, 16 GB RAM (for a provisioning play)
    ec2_ami: "ami-0c55b159cbfafe1f0"  # placeholder; substitute your region's Ubuntu 22.04 AMI

  tasks:

    - name: Install Verilator and Icarus Verilog
      ansible.builtin.apt:
        name:
          - verilator
          - iverilog
          - make
          - python3-pip
        state: present
        update_cache: true
      become: true

    - name: Create simulation output directory
      ansible.builtin.file:
        path: "{{ sim_output_dir }}"
        state: directory
        mode: '0755'

    - name: Compile RTL with Icarus Verilog
      ansible.builtin.command:
        cmd: >
          iverilog -g2012
          -o {{ sim_output_dir }}/tb_adder4.vvp
          {{ project_dir }}/rtl/adder4.v
          {{ project_dir }}/tb/tb_adder4.sv
      register: compile_result
      failed_when: compile_result.rc != 0

    - name: Execute simulation regression suite
      ansible.builtin.command:
        cmd: "vvp {{ sim_output_dir }}/tb_adder4.vvp"
      register: sim_result
      failed_when: false  # Capture output even on failure

    - name: Save simulation log to file
      ansible.builtin.copy:
        content: "{{ sim_result.stdout }}\n{{ sim_result.stderr }}"
        dest: "{{ sim_output_dir }}/sim_report.txt"

    - name: Post result summary to console
      ansible.builtin.debug:
        msg: "{{ sim_result.stdout_lines | last }}"

    - name: Fail the pipeline if simulation returned errors
      ansible.builtin.fail:
        msg: "RTL simulation FAILED. Check {{ sim_output_dir }}/sim_report.txt"
      when: sim_result.rc != 0
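
Before wiring the playbook into CI, it can be exercised from a local checkout; the extra-vars mirror the variables the playbook expects (paths here are illustrative):

```bash
ansible-playbook ansible/rtl_sim_pipeline.yml \
    -e "project_dir=$PWD" \
    -e "sim_output_dir=$PWD/sim_artifacts"
```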

6. Full GitLab CI Pipeline Configuration

To tie everything together, here is a production GitLab CI pipeline that triggers on every push to any branch, runs the Ansible playbook, and annotates the merge request with pass/fail status automatically.

.gitlab-ci.yml
YAML (GitLab CI)
# .gitlab-ci.yml — Hardware CI/CD for RTL simulation
# Runs on every push; blocks merge on simulation failure

image: ubuntu:22.04

stages:
  - lint
  - simulate
  - report

variables:
  SIM_RESULTS_PATH: "sim_artifacts"
  ANSIBLE_HOST_KEY_CHECKING: "False"

# ── Stage 1: Lint RTL with Verilator ────────────────
rtl-lint:
  stage: lint
  before_script:
    - apt-get update -qq && apt-get install -y verilator
  script:
    - verilator --lint-only -Wall rtl/adder4.v
  rules:
    - changes: ["rtl/**/*.v", "rtl/**/*.sv"]

# ── Stage 2: Run full regression via Ansible ────────
rtl-simulate:
  stage: simulate
  before_script:
    - apt-get update -qq
    - apt-get install -y iverilog ansible python3-pip
  script:
    - mkdir -p $SIM_RESULTS_PATH
    - ansible-playbook ansible/rtl_sim_pipeline.yml
        -e "project_dir=$CI_PROJECT_DIR"
        -e "sim_output_dir=$CI_PROJECT_DIR/$SIM_RESULTS_PATH"
  artifacts:
    when: always
    paths:
      - "$SIM_RESULTS_PATH/sim_report.txt"
    expire_in: 30 days

# ── Stage 3: Parse and post results summary ─────────
post-results:
  stage: report
  script:
    - echo "=== Simulation Summary ==="
    - cat $SIM_RESULTS_PATH/sim_report.txt
    - grep -q "All tests PASSED" $SIM_RESULTS_PATH/sim_report.txt
      || (echo "❌ Simulation FAILED" && exit 1)
    - echo "✅ All RTL regression tests passed"
  when: always
  dependencies:
    - rtl-simulate

ℹ GitHub Actions Users: Replace the .gitlab-ci.yml structure with a .github/workflows/rtl-sim.yml workflow. The Ansible playbook call is identical — only the CI runner syntax changes. The Ansible approach makes your simulation logic portable across any CI platform.
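
For reference, a minimal sketch of that workflow might look like the following (job names and path filters are our assumptions, not an official template):

```yaml
# .github/workflows/rtl-sim.yml — sketch, assuming the same repo layout
name: RTL Simulation
on:
  push:
    paths: ["rtl/**", "tb/**", "ansible/**"]

jobs:
  simulate:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
      - name: Install EDA tools and Ansible
        run: sudo apt-get update -qq && sudo apt-get install -y iverilog verilator ansible
      - name: Run regression playbook
        run: |
          ansible-playbook ansible/rtl_sim_pipeline.yml \
            -e "project_dir=$GITHUB_WORKSPACE" \
            -e "sim_output_dir=$GITHUB_WORKSPACE/sim_artifacts"
      - name: Upload simulation report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: sim-report
          path: sim_artifacts/sim_report.txt
```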

7. Parsing Results & Auto-Reporting to Git

Simulation logs are only useful if they are surfaced where engineers actually look — the pull request. Here is a Python script that parses the Verilator/Icarus output and posts a structured comment to a GitLab Merge Request via the API:

scripts/post_sim_results.py
Python
#!/usr/bin/env python3
"""
post_sim_results.py
Parses RTL simulation log and posts a structured comment
to a GitLab Merge Request. Set env vars before running.

Required env: GITLAB_TOKEN, CI_PROJECT_ID,
              CI_MERGE_REQUEST_IID, SIM_REPORT_PATH
"""

import os, re, sys, json
import urllib.request

def parse_sim_report(path: str) -> dict:
    """Extract pass/fail counts from Icarus Verilog simulation output."""
    with open(path) as f:
        content = f.read()

    passed = re.search(r'PASSED:\s*(\d+)', content)
    failed = re.search(r'FAILED:\s*(\d+)', content)
    status = "✅ PASSED" if "All tests PASSED" in content else "❌ FAILED"

    return {
        "status":  status,
        "passed":  int(passed.group(1)) if passed else 0,
        "failed":  int(failed.group(1)) if failed else -1,
        "details": content
    }

def post_mr_comment(results: dict):
    """Post simulation results as a GitLab MR comment."""
    token      = os.environ["GITLAB_TOKEN"]
    project_id = os.environ["CI_PROJECT_ID"]
    mr_iid     = os.environ["CI_MERGE_REQUEST_IID"]

    body = (f"## 🔬 RTL Simulation Report\n\n"
            f"| Metric | Value |\n|--------|-------|\n"
            f"| **Status** | {results['status']} |\n"
            f"| Tests Passed | `{results['passed']}` |\n"
            f"| Tests Failed | `{results['failed']}` |\n\n"
            f"<details><summary>Full simulation log</summary>\n\n"
            f"```\n{results['details']}\n```\n</details>")

    url  = (f"https://gitlab.com/api/v4/projects/{project_id}"
            f"/merge_requests/{mr_iid}/notes")
    data = json.dumps({"body": body}).encode()
    req  = urllib.request.Request(
        url, data=data, method="POST",
        headers={"PRIVATE-TOKEN": token,
                 "Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        print(f"MR comment posted: HTTP {resp.status}")

if __name__ == "__main__":
    report_path = os.environ.get("SIM_REPORT_PATH", "sim_artifacts/sim_report.txt")
    results = parse_sim_report(report_path)
    print(f"Simulation status: {results['status']}")
    if os.environ.get("CI_MERGE_REQUEST_IID"):
        post_mr_comment(results)
    sys.exit(0 if "PASSED" in results["status"] else 1)
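
To sanity-check the parsing logic without a live simulation, you can run the same regexes against a canned log. This snippet re-implements the extraction rules of parse_sim_report inline so it is self-contained (parse_sim_report_text is our name for the string-based variant):

```python
import re

SAMPLE_LOG = """=== RTL Regression Suite: adder4 ===
PASSED: 512  FAILED: 0
All tests PASSED ✓
"""

def parse_sim_report_text(content: str) -> dict:
    """Same extraction rules as parse_sim_report, operating on a string."""
    passed = re.search(r'PASSED:\s*(\d+)', content)
    failed = re.search(r'FAILED:\s*(\d+)', content)
    status = "✅ PASSED" if "All tests PASSED" in content else "❌ FAILED"
    return {
        "status": status,
        "passed": int(passed.group(1)) if passed else 0,
        "failed": int(failed.group(1)) if failed else -1,
    }

result = parse_sim_report_text(SAMPLE_LOG)
print(result)  # passing run: 512 passed, 0 failed
```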

8. Business Value & ROI of Hardware DevOps

Automating RTL simulations is not merely a technical improvement — it is a measurable business investment:

  • Earlier bug detection: Regressions run on every commit, catching logic errors within minutes instead of weeks.
  • Cloud cost control: Ansible provisions compute only when needed and tears it down immediately after — eliminating idle EDA workstation costs.
  • Engineer productivity: Freed from manually managing simulation scripts, engineers focus on architecture and innovation.
  • Tape-out confidence: When every block has a continuous regression history, sign-off becomes a data-driven process rather than a manual audit.
  • Onboarding speed: New team members clone the repo and get a fully functional simulation environment in minutes, not days.

💡 Next Steps to Scale: Once this baseline pipeline is running, extend it with: (1) coverage-driven regression using Verilator's coverage reporting, (2) formal verification hooks via SymbiYosys, and (3) waveform archiving by dumping VCD files as CI artifacts for post-failure debugging.
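
For waveform archiving, usually only a small simulation-only addition to the testbench is needed. A sketch, gated behind a plusarg so CI can enable dumping on demand (the +DUMP name is our convention):

```verilog
// Inside tb_adder4 — dump a VCD only when the runner asks for it:
//   vvp tb_adder4.vvp +DUMP
initial begin
    if ($test$plusargs("DUMP")) begin
        $dumpfile("tb_adder4.vcd");
        $dumpvars(0, tb_adder4);
    end
end
```

The CI job then adds tb_adder4.vcd to its artifacts list alongside sim_report.txt.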

Conclusion: Stop Clicking, Start Automating

Treating hardware like software is the future of competitive chip and FPGA development. By combining Git's version control and trigger capabilities with Ansible's infrastructure-as-code orchestration, and grounding everything in real Verilog testbenches that exit with meaningful return codes, you can build a hardware CI/CD pipeline that dramatically accelerates time-to-market.

The code examples in this guide represent a complete, working baseline. Clone the pattern, adapt it to your EDA toolchain (Verilator, Icarus Verilog, or commercial tools like VCS and Questa), and start catching RTL bugs in minutes instead of weeks.

The best hardware teams in 2026 don't simulate manually. They commit, push, and let the pipeline do the work.
