<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://kkvelan.github.io/blog/feed.xml" rel="self" type="application/atom+xml" /><link href="https://kkvelan.github.io/blog/" rel="alternate" type="text/html" /><updated>2026-04-06T09:32:12+00:00</updated><id>https://kkvelan.github.io/blog/feed.xml</id><title type="html">kkvelan</title><subtitle>Blog</subtitle><entry><title type="html">ISO 42001 Learning Notes: What Documents Are Really Required?</title><link href="https://kkvelan.github.io/blog/2026/03/24/iso-42001-learning-notes-what-documents-required.html" rel="alternate" type="text/html" title="ISO 42001 Learning Notes: What Documents Are Really Required?" /><published>2026-03-24T12:00:00+00:00</published><updated>2026-03-24T12:00:00+00:00</updated><id>https://kkvelan.github.io/blog/2026/03/24/iso-42001-learning-notes-what-documents-required</id><content type="html" xml:base="https://kkvelan.github.io/blog/2026/03/24/iso-42001-learning-notes-what-documents-required.html"><![CDATA[<p>ISO/IEC 42001 is the international standard for an <strong>AI management system</strong> (AIMS). It sets out what an organisation should do to govern AI systems in a structured way: context, leadership, planning, support, operation, performance evaluation, and improvement. Documentation matters because an AIMS is not only “what you do,” but <strong>what you can show</strong> you do. Auditors and internal reviewers look for <strong>documented information</strong> where the standard requires it, and for <strong>evidence</strong> that processes actually run. Confusion starts when people mix up <strong>mandatory outputs of the standard</strong>, <strong>sensible artefacts that support those outputs</strong>, and <strong>consultant-pack templates</strong> that look official but are not from ISO at all.</p>

<h2 id="documents-you-will-hear-about-in-implementations">Documents You Will Hear About in Implementations</h2>

<p>In real projects, teams and consultants often talk about a set of artefacts. Names vary, but you will regularly see something like the following:</p>

<ul>
  <li><strong>AI risk register</strong> (or AI-specific risk treatment records linked to the main risk process)</li>
  <li><strong>AI model register</strong> (inventory of models and AI systems in scope)</li>
  <li><strong>Stakeholder impact assessments</strong> (how AI use affects interested parties)</li>
  <li><strong>AI policy and ethical guidelines</strong> (direction and rules from leadership)</li>
  <li><strong>Audit logs for AI decisions</strong> (traceability of automated or AI-assisted decisions, where applicable)</li>
  <li><strong>Data provenance logs</strong> (where training or operational data came from and how it is governed)</li>
  <li><strong>AI-specific incident handling procedures</strong> (what to do when something goes wrong)</li>
  <li><strong>Continual improvement records</strong> (actions from monitoring, audits, management review)</li>
</ul>

<p>This list is <strong>useful as a checklist of conversation topics</strong>. It is <strong>not</strong> a list copied from ISO as “these eight forms with these exact titles.” Treat it as what many organisations end up needing to <strong>satisfy requirements and operate safely</strong>, not as a standard-mandated table of contents.</p>

<h2 id="the-key-idea-requirements-not-templates">The Key Idea: Requirements, Not Templates</h2>

<p>ISO/IEC 42001 states <strong>requirements</strong> (what the management system must achieve). It does <strong>not</strong> ship spreadsheets, clause-by-clause forms, or mandatory column headers. The implementer (with the organisation) <strong>designs</strong> how documented information looks: one workbook, a tool, a wiki, or an integrated GRC platform. What must hold is that you can <strong>demonstrate</strong> conformity: roles, risks, controls, monitoring, and improvement are <strong>real</strong>, <strong>owned</strong>, and <strong>traceable</strong>.</p>

<p>If someone sells “the ISO 42001 template pack,” they are selling <strong>their</strong> way of meeting requirements. It may be good or average. It is <strong>not</strong> “the ISO format,” because ISO does not define that level of format for most topics.</p>

<h2 id="what-is-explicit-what-is-implied-what-is-best-practice">What Is Explicit, What Is Implied, What Is Best Practice</h2>

<p><strong>Explicit (in the standard):</strong> Clauses refer to documented information where the organisation must maintain or retain specific types of information (for example, scope, policies, objectives, evidence of competence, operational planning and control, results of monitoring and measurement, internal audit and management review records, nonconformity and corrective action). The standard uses defined terms; your training or a good clause-by-clause guide maps each to <strong>what must exist as documented information</strong>, not to a single global template.</p>

<p><strong>Implied:</strong> If you say you control AI system lifecycle or risk, an auditor will expect <strong>records</strong> that show it happened: approvals, changes, and assessments, not just a policy PDF that nobody follows.</p>

<p><strong>Industry best practice:</strong> Registers, structured impact assessments, decision logs, and clear incident playbooks reduce operational risk and make audits smoother. They are <strong>not</strong> all named in one numbered list in 42001 the way consultants list them, but they are how mature teams <strong>implement</strong> the requirements.</p>

<h2 id="if-you-are-preparing-for-lead-implementer-certification">If You Are Preparing for Lead Implementer Certification</h2>

<p><strong>Do you need to memorise templates?</strong> No. Exams focus on <strong>requirements</strong>, <strong>process</strong>, and <strong>how documented information supports the AIMS</strong>, not on reproducing a vendor’s register layout.</p>

<p><strong>What does “designing documents” mean?</strong> It means: given Clause X, you can state <strong>what information</strong> must be captured, <strong>who owns</strong> it, <strong>how often</strong> it is updated, and <strong>how</strong> it links to risk, objectives, and operation. “Design” is about <strong>content and control</strong>, not font and logo.</p>

<p><strong>How deep does the exam go?</strong> Typically to the level of <strong>knowing which areas need documented information</strong>, <strong>why</strong>, and <strong>how</strong> that ties to leadership, planning, operation, and improvement. Not to filling in every cell of a sample risk register from memory.</p>

<h2 id="different-consultants-different-folder-structures">Different Consultants, Different Folder Structures</h2>

<p><strong>Where flexibility exists:</strong> Format, tool, naming, number of documents (you may merge or split, as long as control and traceability remain clear). One organisation’s “AI model register” may be a tab in a larger asset system; another may use a dedicated database.</p>

<p><strong>Where requirements are non-negotiable in spirit (even if the shape varies):</strong> You must be able to show <strong>risk identification and treatment</strong> linked to AI systems and context; <strong>accountability and roles</strong>; <strong>operational control</strong> of AI activities in scope; <strong>monitoring and measurement</strong>; <strong>evidence</strong> of performance, audit, and management review; and <strong>continual improvement</strong> when things fail or drift. If an auditor cannot follow the thread from <strong>risk</strong> to <strong>control</strong> to <strong>evidence</strong>, the style of the document will not save you.</p>

<h2 id="example-ai-risk-register-logical-shape-not-a-mandatory-form">Example: AI Risk Register (Logical Shape, Not a Mandatory Form)</h2>

<p><strong>Purpose:</strong> To record AI-related risks (safety, bias, security, privacy, reliability, legal, reputational, etc.) in scope, align them to owners and treatments, and support review and reporting.</p>

<p><strong>Key fields (illustrative):</strong> Risk description; AI system or process reference; cause and consequences; existing controls; treatment (accept, mitigate, transfer, avoid); owner; target date; residual risk; link to incidents or changes when relevant.</p>

<p><strong>Lifecycle use:</strong> Created and updated during <strong>planning</strong> and <strong>change</strong>; referenced in <strong>operation</strong> when systems change; reviewed after <strong>incidents</strong> or <strong>monitoring</strong> signals; inputs to <strong>management review</strong> and <strong>improvement</strong>.</p>

<p>The standard does not say “column 7 must be residual risk.” It says the organisation must address risks and opportunities in a way that the management system can <strong>plan, implement, and check</strong>. A register is one practical way to do that; the <strong>logic</strong> matters more than the <strong>layout</strong>.</p>
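<p>To make the logical shape concrete, here is a minimal sketch of one register entry as a data record. The field names, the <code>Treatment</code> options, and the example values are illustrative assumptions drawn from the list above, not an ISO-mandated schema.</p>

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Treatment(Enum):
    # Illustrative treatment options; ISO/IEC 42001 does not mandate these labels.
    ACCEPT = "accept"
    MITIGATE = "mitigate"
    TRANSFER = "transfer"
    AVOID = "avoid"

@dataclass
class AIRiskEntry:
    """One row of an illustrative AI risk register (hypothetical schema)."""
    description: str                      # risk description
    system_ref: str                       # AI system or process reference
    cause: str                            # cause and consequences
    existing_controls: list[str]          # controls already in place
    treatment: Treatment                  # accept / mitigate / transfer / avoid
    owner: str                            # accountable person or role
    target_date: Optional[str] = None     # when treatment should complete
    residual_risk: Optional[str] = None   # risk remaining after treatment
    linked_incidents: list[str] = field(default_factory=list)

# Example: a hypothetical bias risk in a screening model.
entry = AIRiskEntry(
    description="Screening model may disadvantage a protected group",
    system_ref="AIS-007",
    cause="Skewed historical training data leading to unfair rejections",
    existing_controls=["fairness metrics in CI", "human review of rejections"],
    treatment=Treatment.MITIGATE,
    owner="Head of Data Science",
)
```

<p>The thread an auditor follows (risk, system, owner, treatment, evidence) survives whether this lives in Python, a spreadsheet tab, or a GRC tool; the schema is one practical shape, not the required one.</p>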

<h2 id="what-iso-is-really-asking">What ISO Is Really Asking</h2>

<p>ISO is not a documentation beauty contest. It is about showing that <strong>risks are identified</strong>, <strong>controls and responsibilities are in place</strong>, <strong>performance is monitored</strong>, and <strong>the system improves</strong> when gaps appear. Documents and records are <strong>vehicles</strong> for that proof. If they are thin, duplicated without control, or disconnected from real operation, the organisation has a problem regardless of how polished the cover page is.</p>

<h2 id="takeaways">Takeaways</h2>

<ol>
  <li>
    <p><strong>Separate the standard from the template shop.</strong> ISO/IEC 42001 specifies requirements and documented information obligations; it does not prescribe universal forms for registers and logs.</p>
  </li>
  <li>
    <p><strong>Use common artefact lists as maps, not as law.</strong> They help you brainstorm what to build; they do not replace clause-by-clause understanding.</p>
  </li>
  <li>
    <p><strong>For certification exams and serious implementation, focus on traceability:</strong> what must be documented, who owns it, and how evidence shows the AIMS is operating.</p>
  </li>
  <li>
    <p><strong>Invest effort in design, not in copying layouts.</strong> One well-linked risk and control story beats ten decorative PDFs that teams do not use.</p>
  </li>
</ol>]]></content><author><name></name></author><summary type="html"><![CDATA[ISO/IEC 42001 is the international standard for an AI management system (AIMS). It sets out what an organisation should do to govern AI systems in a structured way: context, leadership, planning, support, operation, performance evaluation, and improvement. Documentation matters because an AIMS is not only “what you do,” but what you can show you do. Auditors and internal reviewers look for documented information where the standard requires it, and for evidence that processes actually run. Confusion starts when people mix up mandatory outputs of the standard, sensible artefacts that support those outputs, and consultant-pack templates that look official but are not from ISO at all.]]></summary></entry><entry><title type="html">Hands-On Technical Mentorship Program</title><link href="https://kkvelan.github.io/blog/2026/03/19/hands-on-technical-mentorship-program.html" rel="alternate" type="text/html" title="Hands-On Technical Mentorship Program" /><published>2026-03-19T12:00:00+00:00</published><updated>2026-03-19T12:00:00+00:00</updated><id>https://kkvelan.github.io/blog/2026/03/19/hands-on-technical-mentorship-program</id><content type="html" xml:base="https://kkvelan.github.io/blog/2026/03/19/hands-on-technical-mentorship-program.html"><![CDATA[<link rel="preconnect" href="https://fonts.googleapis.com">
  <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
  <link href="https://fonts.googleapis.com/css2?family=DM+Sans:wght@400;500;600;700&family=Space+Grotesk:wght@600;700&display=swap" rel="stylesheet">
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.5.1/css/all.min.css" crossorigin="anonymous">
  <style>
    :root {
      --bg: #1a1d24;
      --card: #242830;
      --text: #e8eaed;
      --muted: #9ca3af;
      --accent: #f59e0b;
      --accent-dim: #d97706;
    }

    * {
      margin: 0;
      padding: 0;
      box-sizing: border-box;
    }

    body {
      font-family: 'DM Sans', sans-serif;
      background: var(--bg);
      color: var(--text);
      line-height: 1.5;
      font-size: 15px;
      -webkit-print-color-adjust: exact;
      print-color-adjust: exact;
    }

    .page {
      min-height: 100vh;
      padding: 2rem 2.5rem;
      position: relative;
      page-break-after: always;
    }
    .page:last-of-type {
      page-break-after: auto;
    }

    h1, .headline {
      font-family: 'Space Grotesk', sans-serif;
      font-weight: 700;
      letter-spacing: -0.02em;
    }

    .page-1 {
      display: flex;
      flex-direction: column;
      justify-content: center;
    }
    .page-icon {
      color: var(--accent);
      font-size: 1.75rem;
      margin-bottom: 0.5rem;
      display: block;
    }
    .page-1 .tagline {
      font-size: 1.5rem;
      color: var(--accent);
      letter-spacing: 0.2em;
      text-transform: uppercase;
      margin-bottom: 0.5rem;
    }
    .page-1 .main-title {
      font-size: 2.25rem;
      line-height: 1.2;
      margin-bottom: 0.35rem;
    }
    .page-1 .subtitle {
      font-size: 1.1rem;
      color: var(--muted);
      margin-bottom: 1.5rem;
    }
    .page-1 .tracks {
      font-size: 0.95rem;
      color: var(--muted);
      margin-bottom: 1.75rem;
      max-width: 36em;
    }
    .page-1 .opening {
      font-size: 1.05rem;
      font-weight: 600;
      color: var(--accent);
      margin-bottom: 2rem;
    }
    .page-1 .section {
      margin-bottom: 1.5rem;
    }
    .page-1 .track-groups p {
      margin-bottom: 0.35rem;
    }
    .page-1 .section h2 {
      font-size: 1rem;
      font-weight: 600;
      color: var(--accent);
      margin-bottom: 0.6rem;
      text-transform: uppercase;
      letter-spacing: 0.05em;
    }
    .page-1 .section p {
      margin-bottom: 0.4rem;
      color: var(--text);
    }
    .page-1 .section ul {
      list-style: none;
      margin-top: 0.5rem;
    }
    .page-1 .section li {
      padding-left: 1.2rem;
      position: relative;
      margin-bottom: 0.25rem;
    }
    .page-1 .section li::before {
      content: "•";
      position: absolute;
      left: 0;
      color: var(--accent);
    }
    .gist-line {
      margin-top: 1.25rem;
      font-style: italic;
      color: var(--muted);
      font-size: 0.95rem;
    }
    .page-8 .gist-line {
      margin-top: 1rem;
      margin-bottom: 0.5rem;
    }

    .track-page .page-title {
      font-size: 1.75rem;
      color: var(--accent);
      margin-bottom: 0.25rem;
    }
    .track-page .page-subtitle {
      font-size: 0.95rem;
      color: var(--muted);
      margin-bottom: 1.25rem;
    }
    .track-page .tagline-block {
      font-size: 1rem;
      margin-bottom: 1.25rem;
      line-height: 1.5;
    }
    .track-page h3 {
      font-size: 0.85rem;
      font-weight: 600;
      text-transform: uppercase;
      letter-spacing: 0.06em;
      color: var(--accent);
      margin: 1rem 0 0.5rem;
      border-bottom: 1px solid rgba(245, 158, 11, 0.3);
      padding-bottom: 0.25rem;
    }
    .track-page p {
      margin-bottom: 0.5rem;
      color: var(--text);
    }
    .track-page ul {
      list-style: none;
      margin: 0.35rem 0;
    }
    .track-page ul li {
      padding-left: 1.2rem;
      position: relative;
      margin-bottom: 0.2rem;
    }
    .track-page ul li::before {
      content: "•";
      position: absolute;
      left: 0;
      color: var(--accent);
    }

    .page-programmes-list .page-title {
      font-family: 'Space Grotesk', sans-serif;
      font-size: 1.75rem;
      font-weight: 700;
      color: var(--accent);
      margin-bottom: 0.5rem;
    }
    .page-programmes-list .page-subtitle {
      font-size: 1rem;
      color: var(--muted);
      margin-bottom: 1.5rem;
    }
    .page-programmes-list .programme-list {
      list-style: decimal;
      padding-left: 1.5rem;
      margin: 0;
    }
    .page-programmes-list .programme-item {
      margin-bottom: 1.1rem;
    }
    .page-programmes-list .programme-item .name {
      display: block;
      font-family: 'Space Grotesk', sans-serif;
      font-weight: 600;
      font-size: 1.1rem;
      color: var(--accent);
      margin-bottom: 0.15rem;
    }
    .page-programmes-list .programme-item .desc {
      display: block;
      font-size: 0.95rem;
      color: var(--muted);
      margin-bottom: 0.35rem;
    }
    .page-programmes-list .programme-item .catchy-lines {
      font-size: 0.85rem;
      color: var(--muted);
      line-height: 1.4;
    }
    .page-programmes-list .programme-item .catchy-lines span {
      display: block;
    }
    .page-8 .page-title {
      font-family: 'Space Grotesk', sans-serif;
      font-size: 1.75rem;
      font-weight: 700;
      color: var(--accent);
      margin-bottom: 0.25rem;
    }
    .page-8 .page-subtitle {
      font-size: 0.95rem;
      color: var(--muted);
      margin-bottom: 1rem;
    }
    .page-8 .section {
      margin-bottom: 1.25rem;
    }
    .page-8 h2 {
      font-size: 1rem;
      font-weight: 600;
      color: var(--accent);
      margin-bottom: 0.5rem;
      text-transform: uppercase;
      letter-spacing: 0.05em;
    }
    .page-8 p, .page-8 ul {
      margin-bottom: 0.4rem;
    }
    .page-8 ul {
      list-style: none;
    }
    .page-8 ul li {
      padding-left: 1.2rem;
      position: relative;
      margin-bottom: 0.2rem;
    }
    .page-8 ul li::before {
      content: "•";
      position: absolute;
      left: 0;
      color: var(--accent);
    }
    .page-8 .contact-box {
      background: var(--card);
      padding: 1.25rem;
      border-radius: 8px;
      border-left: 4px solid var(--accent);
      margin: 1.25rem 0;
    }
    .page-8 .contact-box a {
      color: var(--accent);
      text-decoration: none;
    }
    .page-8 .contact-box .qr-wrap {
      display: flex;
      align-items: center;
      gap: 1rem;
      flex-wrap: wrap;
    }
    .page-8 .contact-box .qr-code {
      width: 100px;
      height: 100px;
      flex-shrink: 0;
      background: #fff;
      padding: 4px;
      border-radius: 4px;
    }
    .page-8 .contact-box .qr-code img {
      display: block;
      width: 100%;
      height: 100%;
    }
    .page-8 .footer-line {
      margin-top: 1.5rem;
      font-size: 0.95rem;
      color: var(--muted);
      font-style: italic;
    }

    @media print {
      @page {
        margin: 0;
        size: A4;
      }
      html, body {
        margin: 0 !important;
        padding: 0 !important;
        background: #1a1d24 !important;
        min-height: 100%;
        -webkit-print-color-adjust: exact !important;
        print-color-adjust: exact !important;
      }
      .page {
        margin: 0;
        min-height: 100vh;
        min-height: 297mm;
        padding: 1.2rem 1.5rem;
        page-break-after: always;
        background: #1a1d24 !important;
        -webkit-print-color-adjust: exact !important;
        print-color-adjust: exact !important;
      }
      .page:last-of-type {
        page-break-after: auto;
      }
      /* Fit Page 1, Page 2, Red Team, Join the Program on one printed page each */
      .page-1 {
        padding: 1rem 1.5rem;
      }
      .page-1 .tagline { font-size: 1.2rem; margin-bottom: 0.25rem; }
      .page-1 .main-title { font-size: 1.85rem; margin-bottom: 0.2rem; }
      .page-1 .subtitle { font-size: 1rem; margin-bottom: 0.9rem; }
      .page-1 .tracks { font-size: 0.85rem; margin-bottom: 0.9rem; }
      .page-1 .opening { margin-bottom: 0.9rem; font-size: 1rem; }
      .page-1 .section { margin-bottom: 0.7rem; }
      .page-1 .section h2 { font-size: 0.9rem; margin-bottom: 0.35rem; }
      .page-1 .section p { margin-bottom: 0.25rem; font-size: 0.9rem; line-height: 1.35; }
      .page-1 .section ul { margin-top: 0.25rem; }
      .page-1 .section li { margin-bottom: 0.15rem; font-size: 0.9rem; }
      .page-1 .gist-line { margin-top: 0.6rem; font-size: 0.85rem; }

      .page-programmes-list {
        padding: 0.6rem 1.25rem;
      }
      .page-programmes-list .page-icon { font-size: 1.25rem; margin-bottom: 0.25rem; }
      .page-programmes-list .page-title { font-size: 1.35rem; margin-bottom: 0.2rem; }
      .page-programmes-list .page-subtitle { font-size: 0.8rem; margin-bottom: 0.5rem; }
      .page-programmes-list .programme-list { padding-left: 1.25rem; }
      .page-programmes-list .programme-item { margin-bottom: 0.28rem; }
      .page-programmes-list .programme-item .name { font-size: 0.9rem; margin-bottom: 0.05rem; }
      .page-programmes-list .programme-item .desc { font-size: 0.78rem; margin-bottom: 0.1rem; }
      .page-programmes-list .programme-item .catchy-lines { font-size: 0.68rem; line-height: 1.2; }
      .page-programmes-list .programme-item .catchy-lines span { display: inline; }
      .page-programmes-list .programme-item .catchy-lines span::after { content: " "; }
      .page-programmes-list .programme-item .catchy-lines span:last-child::after { content: none; }

      .page-red-team {
        padding: 1rem 1.5rem;
      }
      .page-red-team .page-title { font-size: 1.5rem; margin-bottom: 0.2rem; }
      .page-red-team .page-subtitle { font-size: 0.85rem; margin-bottom: 0.5rem; }
      .page-red-team .tagline-block { margin-bottom: 0.5rem; font-size: 0.9rem; line-height: 1.35; }
      .page-red-team h3 { font-size: 0.8rem; margin: 0.5rem 0 0.3rem; padding-bottom: 0.15rem; }
      .page-red-team p { margin-bottom: 0.3rem; font-size: 0.85rem; line-height: 1.35; }
      .page-red-team ul { margin: 0.2rem 0; }
      .page-red-team ul li { margin-bottom: 0.12rem; font-size: 0.85rem; }

      .page-8 {
        padding: 1rem 1.5rem;
      }
      .page-8 .page-title { font-size: 1.5rem; margin-bottom: 0.2rem; }
      .page-8 .page-subtitle { font-size: 0.85rem; margin-bottom: 0.5rem; }
      .page-8 h2 { font-size: 0.9rem; margin: 0.5rem 0 0.3rem; }
      .page-8 p, .page-8 ul { margin-bottom: 0.25rem; font-size: 0.85rem; line-height: 1.35; }
      .page-8 ul li { margin-bottom: 0.12rem; }
      .page-8 .contact-box { margin-top: 0.5rem; }
      .page-8 .gist-line, .page-8 .footer-line { margin-top: 0.5rem; font-size: 0.85rem; }
    }
  </style>

<!-- PAGE 1 -->
  <section class="page page-1">
    <i class="fa-solid fa-graduation-cap page-icon" aria-hidden="true"></i>
    <div class="tagline">Build • Break • Engineer</div>
    <h1 class="main-title">Hands-On Technical Mentorship Program</h1>
    <p class="subtitle">Learn differently. Learn the hard way. See the difference.</p>

    <div class="section track-groups">
      <h2>Tracks</h2>
      <p><strong>Track 1 – Programming:</strong> Under the Hood (C) · Built to Last (Rust)</p>
      <p><strong>Track 2 – Security:</strong> Think Like an Attacker · Break the App · Inside the Beast · Own the OS · Secure the Cloud · Shift Left · Red Team, AI Edge</p>
      <p><strong>Track 3 – AI:</strong> AI That Ships</p>
    </div>
    <p class="opening">Opening Summer Mentorship Slots - Only for College Graduates</p>

    <div class="section">
      <h2>What This Program Is</h2>
      <p>This is not online training. No one will teach you from slides or run you through labs. You learn by doing, getting stuck, and then working through it. That is the difference. Guidance is tailored to each individual. With more than two decades of industry experience, the mentor knows what employers expect; these programmes are built around that.</p>
      <p><strong>Programming:</strong> You will not learn by watching. You will write real code and build real systems in C and Rust.</p>
      <p><strong>Security:</strong> You will not only study vulnerabilities. You will find and fix them in networks, apps, malware, cloud, and pipelines. Think like an attacker; defend like an engineer.</p>
      <p><strong>AI:</strong> You will not just watch demos. You will build with real models and ship AI that works.</p>
    </div>

    <div class="section">
      <h2>Program Design</h2>
      <p>You attempt the problems first. You struggle with the code, tools, or techniques. Then we discuss the approach, correct mistakes, and improve the solution. The structure is there; the hard work is yours. We recommend 120 to 150 days for good coverage of a programme and completion of the capstone project.</p>
      <p>The difference comes from:</p>
      <ul>
        <li>Real-world problem statements you tackle yourself</li>
        <li>Direct technical discussions after you have tried</li>
        <li>Debugging and troubleshooting your own code</li>
        <li>Review of your work and progress when you have something to show</li>
      </ul>
      <p>This is not an online program. This is how you learn differently.</p>
    </div>

    <div class="section">
      <h2>Certification</h2>
      <p>Certificates of Completion are issued only after the capstone project is completed and demonstrated. This program is not attendance-based.</p>
      <p>Certification requires:</p>
      <ul>
        <li>Completion of the assigned capstone project</li>
        <li>Technical demonstration of the work</li>
        <li>Clear understanding of the technical implementation</li>
      </ul>
      <p><strong>No capstone. No certification.</strong></p>
    </div>
    <p class="gist-line">This is just the gist. Enroll to gain the full depth.</p>
  </section>

  <!-- PAGE 2: Programme list -->
  <section class="page page-programmes-list">
    <i class="fa-solid fa-list page-icon" aria-hidden="true"></i>
    <h1 class="page-title">Programmes</h1>
    <p class="page-subtitle">Ten programmes across three tracks. Hands-on.</p>
    <ol class="programme-list">
      <li class="programme-item">
        <span class="name">Under the Hood</span>
        <span class="desc">C Programming & System Programming</span>
        <div class="catchy-lines"><span>Write real code. Debug real crashes. Own the machine.</span><span>Build utilities that run at the core of systems.</span><span>Build a mini shell, log parser, or packet sniffer.</span></div>
      </li>
      <li class="programme-item">
        <span class="name">Built to Last</span>
        <span class="desc">Rust & Modern Systems Development</span>
        <div class="catchy-lines"><span>Safe, fast, reliable. No garbage collector, no compromise.</span><span>Backend services and CLI tools that ship.</span><span>Build a production-style API or async scanner.</span></div>
      </li>
      <li class="programme-item">
        <span class="name">Think Like an Attacker</span>
        <span class="desc">Network & Server Penetration Testing</span>
        <div class="catchy-lines"><span>Find what’s exposed. Pivot. Escalate. Document.</span><span>Real infrastructure, real attack chains, lab-only.</span><span>Deliver a full engagement report or enum toolkit.</span></div>
      </li>
      <li class="programme-item">
        <span class="name">Break the App</span>
        <span class="desc">Application & Web Security Testing</span>
        <div class="catchy-lines"><span>Auth, APIs, logic flaws. Break them before attackers do.</span><span>Manual testing and professional reports.</span><span>Build an app, then break it and fix it.</span></div>
      </li>
      <li class="programme-item">
        <span class="name">Inside the Beast</span>
        <span class="desc">Malware Analysis & Reverse Engineering</span>
        <div class="catchy-lines"><span>Static and dynamic analysis. PE, assembly, IOCs.</span><span>See how malware really behaves in a safe lab.</span><span>Reverse a sample and document the kill chain.</span></div>
      </li>
      <li class="programme-item">
        <span class="name">Own the OS</span>
        <span class="desc">Secure Systems - Linux, BSD & Windows</span>
        <div class="catchy-lines"><span>Harden and reason about Linux, BSD, and Windows.</span><span>Attack surface, configuration, scripting on real systems.</span><span>Deliver a hardening guide or secure baseline.</span></div>
      </li>
      <li class="programme-item">
        <span class="name">Secure the Cloud</span>
        <span class="desc">Cloud Security</span>
        <div class="catchy-lines"><span>IAM, misconfigs, network. Build secure, find weak.</span><span>AWS, Azure, or GCP in lab environments.</span><span>Design secure architecture or deliver an assessment report.</span></div>
      </li>
      <li class="programme-item">
        <span class="name">Shift Left</span>
        <span class="desc">DevSecOps / Secure SDLC</span>
        <div class="catchy-lines"><span>SAST, SCA, pipelines. Security in the build.</span><span>Fix findings, reduce risk before production.</span><span>Secure a pipeline and document the journey.</span></div>
      </li>
      <li class="programme-item">
        <span class="name">Red Team, AI Edge</span>
        <span class="desc">Classical Red Teaming Using AI</span>
        <div class="catchy-lines"><span>Phish, macros, C2. AI speeds the build; you run the op.</span><span>Authorised only. Lab environments, clear scope.</span><span>Deliver a full red team report with AI-assisted phases.</span></div>
      </li>
      <li class="programme-item">
        <span class="name">AI That Ships</span>
        <span class="desc">Hands-On AI & Systems Engineering</span>
        <div class="catchy-lines"><span>RAG, agents, local LLMs. Real models, real products.</span><span>From prototype to something users can run.</span><span>Build a RAG app, multi-agent workflow, or AI pipeline.</span></div>
      </li>
    </ol>
  </section>

  <!-- PAGE 3: C Programming -->
  <section class="page track-page">
    <i class="fa-solid fa-gears page-icon" aria-hidden="true"></i>
    <h1 class="page-title">Under the Hood</h1>
    <p class="page-subtitle">C Programming & System Programming</p>
    <p class="tagline-block">Build real system utilities. Debug crashes and memory problems. Understand how programs actually run.</p>

    <h3>Theme</h3>
    <p>Understanding how computers actually execute programs. Focus on memory behavior, pointers, debugging, system interaction, and writing efficient system-level code.</p>

    <h3>Core Topics</h3>
    <ul>
      <li>C program structure and compilation model</li>
      <li>Memory layout (stack vs heap)</li>
      <li>Pointers and pointer arithmetic</li>
      <li>Dynamic memory management (malloc / free)</li>
      <li>File I/O and system interaction</li>
      <li>Modular programming and multi-file projects</li>
      <li>Debugging techniques and program tracing</li>
      <li>Implementing core data structures directly in C</li>
      <li>Socket programming fundamentals (TCP / UDP basics)</li>
    </ul>

    <h3>Real System Implementation</h3>
    <p>You will implement core data structures and algorithms directly in C. These components will be integrated into small system utilities that process files, network data, or system input. Typical concepts explored include: linked list structures used in low-level system components; hash tables for fast lookup; buffer management techniques; file indexing approaches. You will implement these components yourself, integrate them into working programs, and debug them when they fail.</p>

    <h3>Sample Capstone Projects</h3>
    <ul>
      <li>Terminal-based text editor (similar to a simplified nano)</li>
      <li>Secure file transfer utility for Linux systems</li>
      <li>Mini shell for Linux command execution</li>
      <li>High-speed log parser for large log files</li>
      <li>Packet sniffer using libpcap (tcpdump-style)</li>
      <li>Simple network port scanner</li>
      <li>Configuration file parser and validator</li>
    </ul>
  </section>

  <!-- PAGE 4: Rust -->
  <section class="page track-page">
    <i class="fa-solid fa-shield-halved page-icon" aria-hidden="true"></i>
    <h1 class="page-title">Built to Last</h1>
    <p class="page-subtitle">Rust & Modern Systems Development</p>
    <p class="tagline-block">Build high-performance systems. Write memory-safe concurrent programs. Design reliable backend and system tools.</p>

    <h3>Theme</h3>
    <p>Building high-performance systems with memory safety. Focus on ownership, concurrency, and reliable systems programming.</p>

    <h3>Core Topics</h3>
    <ul>
      <li>Rust program structure and Cargo ecosystem</li>
      <li>Ownership and borrowing model</li>
      <li>Lifetimes and memory safety</li>
      <li>Error handling patterns (Result / Option)</li>
      <li>Modular project structure and crate design</li>
      <li>Concurrency and asynchronous programming (Tokio)</li>
      <li>Building command-line tools and system utilities</li>
      <li>Backend services and REST APIs in Rust</li>
    </ul>

    <h3>Real System Implementation</h3>
    <p>You will build real tools and services using Rust. Programs will focus on reliability, performance, and safe concurrency. Typical concepts explored include: safe memory management using the ownership model; concurrent task execution using async runtimes; efficient request handling in backend services; data processing pipelines with controlled resource usage.</p>

    <h3>Sample Capstone Projects</h3>
    <ul>
      <li>Production-style backend service with REST APIs and database integration (Axum / Actix)</li>
      <li>High-performance asynchronous network scanner using Tokio</li>
      <li>Terminal-based system monitoring dashboard using Ratatui</li>
      <li>Concurrent file processing pipeline using async Rust</li>
      <li>Network traffic analysis tool using asynchronous packet processing</li>
      <li>AI inference API service for high-performance request handling</li>
    </ul>
  </section>

  <!-- PAGE 5: Network & Server Pentest -->
  <section class="page track-page">
    <i class="fa-solid fa-user-secret page-icon" aria-hidden="true"></i>
    <h1 class="page-title">Think Like an Attacker</h1>
    <p class="page-subtitle">Network & Server Penetration Testing</p>
    <p class="tagline-block">Discover exposed services and hidden attack paths. Analyze systems the way attackers do. Understand how real infrastructure gets compromised.</p>

    <h3>Theme</h3>
    <p>Understanding how attackers discover weaknesses in networks and server infrastructure. Focus on reconnaissance, service analysis, exploitation techniques, and documenting security weaknesses.</p>

    <h3>Core Topics</h3>
    <ul>
      <li>Network discovery and host enumeration</li>
      <li>Service identification and attack surface analysis</li>
      <li>Linux and Windows server attack surfaces</li>
      <li>Credential attacks and lateral movement concepts</li>
      <li>Privilege escalation techniques</li>
      <li>PowerShell and shell environments</li>
      <li>Manual penetration testing workflow</li>
      <li>Vulnerability documentation and reporting</li>
    </ul>

    <h3>Real Attack Analysis</h3>
    <p>You will analyze how weaknesses in networks and servers are discovered and exploited. Exercises focus on identifying exposed services, misconfigurations, and privilege escalation paths. Typical areas explored include: service enumeration and version analysis; authentication weaknesses; misconfigured services and permissions; privilege escalation paths. Students reproduce attack paths in controlled lab environments and document the full attack chain.</p>

    <h3>Sample Capstone Projects</h3>
    <ul>
      <li>Simulated enterprise network compromise with full attack chain documentation</li>
      <li>Multi-host penetration testing scenario</li>
      <li>Server privilege escalation research and exploitation report</li>
      <li>Automated network enumeration toolkit</li>
      <li>Network attack simulation with complete security assessment report</li>
    </ul>
  </section>

  <!-- PAGE 6: Application Pentest -->
  <section class="page track-page">
    <i class="fa-solid fa-virus page-icon" aria-hidden="true"></i>
    <h1 class="page-title">Break the App</h1>
    <p class="page-subtitle">Application & Web Security Testing</p>
    <p class="tagline-block">Break authentication and access controls. Analyze how modern applications fail under attack. Understand the real impact of insecure design.</p>

    <h3>Theme</h3>
    <p>Understanding how web applications fail under attack. Focus on authentication, authorization, APIs, logic vulnerabilities, and analyzing the real-world impact of insecure application design.</p>

    <h3>Core Topics</h3>
    <ul>
      <li>Web architecture and HTTP fundamentals</li>
      <li>Authentication mechanisms and session management</li>
      <li>Authorization and access control failures</li>
      <li>REST API security testing and abuse scenarios</li>
      <li>Injection vulnerabilities and input handling failures</li>
      <li>Business logic vulnerabilities</li>
      <li>Manual web application testing workflow</li>
      <li>Vulnerability documentation and secure design practices</li>
    </ul>

    <h3>Real Application Analysis</h3>
    <p>You will analyze how modern web applications expose vulnerabilities. Exercises focus on identifying weaknesses in authentication flows, API endpoints, and application logic. Typical areas explored include: authentication bypass scenarios; access control weaknesses; API endpoint misuse; logic flaws in application workflows.</p>

    <h3>Sample Capstone Projects</h3>
    <ul>
      <li>Build a full-stack web application and analyze its security weaknesses</li>
      <li>Authentication and authorization bypass research project</li>
      <li>REST API abuse and security testing scenario</li>
      <li>Business logic vulnerability analysis</li>
      <li>Full vulnerability assessment of a web application with a professional security report</li>
    </ul>
  </section>

  <!-- PAGE 7: Malware Analysis -->
  <section class="page track-page">
    <i class="fa-solid fa-bug page-icon" aria-hidden="true"></i>
    <h1 class="page-title">Inside the Beast</h1>
    <p class="page-subtitle">Malware Analysis & Reverse Engineering</p>
    <p class="tagline-block">Analyze how malicious software behaves. Understand how malware evades detection. Learn how attackers design malicious code.</p>

    <h3>Theme</h3>
    <p>Understanding how malicious software works internally. Focus on reverse engineering, behavioral analysis, and identifying how malware interacts with operating systems.</p>

    <h3>Core Topics</h3>
    <ul>
      <li>Malware execution flow and attack stages</li>
      <li>Static malware analysis techniques</li>
      <li>Dynamic malware analysis techniques</li>
      <li>Portable Executable (PE) file structure</li>
      <li>Assembly fundamentals for malware analysts</li>
      <li>Sandbox environments and behavioral monitoring</li>
      <li>Indicators of compromise and artifact analysis</li>
      <li>Malware documentation and reporting</li>
    </ul>

    <h3>Real Malware Analysis</h3>
    <p>You will analyze how malware operates within controlled analysis environments. Exercises focus on identifying execution behavior, persistence mechanisms, and extracting indicators of compromise (IOCs). Typical concepts explored include: process execution and memory behavior; file system and registry modifications; network communication patterns; extraction of IOCs; process injection and process hollowing. Students analyze these behaviors in controlled lab environments and document the technical findings.</p>

    <h3>Sample Capstone Projects</h3>
    <ul>
      <li>Reverse engineer a malware sample and document execution behavior</li>
      <li>Ransomware behavior analysis report</li>
      <li>Malware analysis lab and sandbox environment setup</li>
      <li>Malicious document attack chain analysis</li>
    </ul>
  </section>

  <!-- PAGE 8: Own the OS -->
  <section class="page track-page">
    <i class="fa-solid fa-server page-icon" aria-hidden="true"></i>
    <h1 class="page-title">Own the OS</h1>
    <p class="page-subtitle">Secure Systems - Linux, BSD & Windows</p>
    <p class="tagline-block">Harden and understand Linux, BSD, and Windows from a security and systems perspective. Explore attack surface, configuration, scripting, and how things actually work on each OS. Build and break real systems across the three platforms.</p>

    <h3>Theme</h3>
    <p>Understanding how to secure and reason about Linux, BSD, and Windows at the systems level. Focus on hardening, secure configuration, attack surface, and misconfigurations. You will work with real installations (lab or VM) to harden systems, find weak spots, and document what matters.</p>

    <h3>Core Topics</h3>
    <ul>
      <li>Security fundamentals across Linux, BSD, and Windows</li>
      <li>Hardening and secure configuration (services, permissions, auth)</li>
      <li>Attack surface and common misconfigurations</li>
      <li>User and privilege model; authentication and access control</li>
      <li>Scripting and automation for security and ops (shell, basic automation)</li>
      <li>Logging and audit (what to enable, where to look)</li>
      <li>Comparing security posture and trade-offs across the three OSes</li>
      <li>Network and service exposure; firewall and access policies</li>
    </ul>

    <h3>Real Systems Work</h3>
    <p>You will work with real Linux, BSD, and/or Windows systems in lab or VM environments. Exercises focus on hardening systems, identifying misconfigurations, and documenting findings. Typical areas: unnecessary services and open ports; weak or default credentials; permission and privilege issues; logging and audit configuration.</p>

    <h3>Sample Capstone Projects</h3>
    <ul>
      <li>Hardening guide or hardened image for one or more of Linux, BSD, Windows</li>
      <li>Attack surface assessment and remediation report for a lab system</li>
      <li>Cross-platform automation or hardening script set (e.g. baseline checks)</li>
      <li>Secure baseline documentation for a specific use case (e.g. server, workstation)</li>
      <li>Comparison report: same workload secured on Linux vs BSD vs Windows with recommendations</li>
    </ul>
  </section>

  <!-- PAGE 9: Cloud Security -->
  <section class="page track-page">
    <i class="fa-solid fa-cloud page-icon" aria-hidden="true"></i>
    <h1 class="page-title">Secure the Cloud</h1>
    <p class="page-subtitle">Cloud Security</p>
    <p class="tagline-block">Secure and assess cloud environments (AWS, Azure, GCP). Understand identity, misconfigurations, and attack paths. Build and break real cloud setups.</p>

    <h3>Theme</h3>
    <p>Understanding how cloud infrastructure is secured and how it is attacked. Focus on identity and access, misconfigurations, network exposure, and secure design. You will work with real cloud providers in lab environments to build secure patterns and to find and fix weaknesses.</p>

    <h3>Core Topics</h3>
    <ul>
      <li>Cloud provider fundamentals (identity, regions, networking)</li>
      <li>Identity and access management (IAM, roles, policies, federation)</li>
      <li>Common misconfigurations and insecure defaults</li>
      <li>Storage and database security in the cloud</li>
      <li>Network security (VPCs, security groups, segmentation)</li>
      <li>Container and serverless security basics</li>
      <li>Cloud security assessment and reconnaissance</li>
      <li>Secure architecture patterns and hardening</li>
    </ul>

    <h3>Real Cloud Security Work</h3>
    <p>You will work with real cloud environments in lab or personal accounts. Exercises focus on building secure configurations, identifying misconfigurations, and documenting findings. Typical areas: overly permissive IAM; exposed storage or APIs; network misconfigurations; insecure identity patterns.</p>

    <h3>Sample Capstone Projects</h3>
    <ul>
      <li>Secure multi-account cloud architecture design and implementation</li>
      <li>Cloud security assessment report (IAM, storage, network) for a lab environment</li>
      <li>Automated misconfiguration scanner or checker for a cloud provider</li>
      <li>Incident response runbook for a cloud-based workload</li>
      <li>Comparison of secure patterns across AWS / Azure / GCP with recommendations</li>
    </ul>

    <h3>Cloud & Infrastructure Note</h3>
    <p>This track requires access to at least one major cloud provider (AWS, Azure, or GCP). You will need a cloud account for exercises and capstones. Free tier or trial accounts are often sufficient; some usage may incur costs beyond the free tier. Participants should be prepared to manage minimal cloud usage for their own practice and projects.</p>
  </section>

  <!-- PAGE 10: DevSecOps / Secure SDLC -->
  <section class="page track-page">
    <i class="fa-solid fa-code-branch page-icon" aria-hidden="true"></i>
    <h1 class="page-title">Shift Left</h1>
    <p class="page-subtitle">DevSecOps / Secure SDLC</p>
    <p class="tagline-block">Build security into the development pipeline. Use SAST, DAST, and supply chain practices. Ship code that is secure by design.</p>

    <h3>Theme</h3>
    <p>Understanding how security is integrated into the software development lifecycle. Focus on secure SDLC, CI/CD security, automated testing (SAST, DAST, SCA), supply chain security, and secure coding practices. You will work with real pipelines and tools to add security checks and fix findings.</p>

    <h3>Core Topics</h3>
    <ul>
      <li>Secure SDLC and shift-left concepts</li>
      <li>CI/CD fundamentals and pipeline security</li>
      <li>Static application security testing (SAST) and code analysis</li>
      <li>Dynamic testing and dependency/SCA (software composition analysis)</li>
      <li>Supply chain security (dependencies, containers, signing)</li>
      <li>Secure coding practices and remediation</li>
      <li>Security gates and policy as code</li>
      <li>Vulnerability management and prioritisation in development</li>
    </ul>

    <h3>Real Pipeline and Code Security</h3>
    <p>You will work with real or sample codebases and pipelines. Exercises focus on adding security tooling, fixing reported issues, and understanding trade-offs. Typical areas: integrating SAST or SCA; fixing findings and reducing false positives; securing build and deploy; reviewing dependencies and upgrade policies.</p>

    <h3>Sample Capstone Projects</h3>
    <ul>
      <li>Add security tooling (SAST/SCA) to an existing CI/CD pipeline and document findings</li>
      <li>Secure a sample application and its pipeline end to end with a written report</li>
      <li>Supply chain security assessment (dependencies, container image) with recommendations</li>
      <li>Custom security gate or policy (e.g. branch protection, image signing) with documentation</li>
      <li>Vulnerability management playbook for a development team</li>
    </ul>
  </section>

  <!-- PAGE 11: Red Teaming Using AI -->
  <section class="page track-page page-red-team">
    <i class="fa-solid fa-robot page-icon" aria-hidden="true"></i>
    <h1 class="page-title">Red Team, AI Edge</h1>
    <p class="page-subtitle">Classical Red Teaming Using AI</p>
    <p class="tagline-block">Run real red team engagements. Use AI to build phishing lures, generate macros, and create C2 and implants. Learn classical attack chains with AI accelerating the build.</p>

    <h3>Prerequisites</h3>
    <p>This track requires access to commercial AI models. Participants should subscribe to commercial LLM services as needed to create artefacts, design and implement C2s, and complete the hands-on exercises.</p>

    <h3>Theme</h3>
    <p>Classical red teaming (phishing, initial access, C2, implants, persistence, lateral movement) using AI to generate and refine the technical artefacts. Focus on real engagement tradecraft: phishing and macro-enabled payloads, command-and-control design, implant and agent development. AI is used as a force multiplier for generating lures, VBA/macros, C2 components, and supporting code. All work is conducted in controlled lab environments with clear scope and authorisation.</p>

    <h3>Core Topics</h3>
    <ul>
      <li>Red team methodology: objectives, scope, rules of engagement, reporting</li>
      <li>Phishing and social engineering: lures, pretexting, AI-generated copy and scenarios</li>
      <li>Macro generation: VBA and Office macros for initial access, AI-assisted generation and obfuscation</li>
      <li>Command-and-control (C2): design, protocols, implants, and AI-assisted C2 creation</li>
      <li>Payload and implant development: shellcode, loaders, and LLM-assisted scaffolding</li>
      <li>Evasion and detection: bypassing AV/EDR, trade-offs when using generated code</li>
      <li>Post-exploitation and persistence in classical red team contexts</li>
      <li>Responsible use, authorisation, and controlled environments</li>
    </ul>

    <h3>Real Red Team Operations</h3>
    <p>You will run classical red team exercises and use AI to build the artefacts. Exercises focus on full attack chains: phishing and initial access (including macro-based), C2 setup, implant deployment, persistence, and exfiltration or impact. You will use AI to generate phishing content, macros, C2 agents or stagers, and supporting code - then test, refine, and document in lab environments. Typical areas explored include: phishing campaigns with AI-generated lures and macro-enabled documents; VBA and Office macro generation (and obfuscation) with AI assistance; building or adapting C2 frameworks and implants with AI-assisted code generation; payload and loader development with LLM-assisted generation and safe review; end-to-end red team scenario from phishing through to report.</p>

    <h3>Sample Capstone Projects</h3>
    <ul>
      <li>Full red team engagement: phishing (with AI-generated lures and macros), C2, and implant deployment with written report</li>
      <li>AI-assisted phishing and macro generation toolkit or playbook for authorised testing</li>
      <li>C2 and implant creation using AI-assisted code generation (lab environment)</li>
      <li>Macro-enabled initial access chain (document + macro + callback) with AI-generated components</li>
      <li>Red team tooling suite: phishing, macro generation, and C2 creation with documentation</li>
    </ul>

    <h3>Ethics and Scope Note</h3>
    <p>This track is for authorised red teaming only. All exercises and capstones are conducted in controlled environments, with explicit scope and rules of engagement. AI is used to build classical red team artefacts (phishing, macros, C2, implants) within legal and ethical boundaries.</p>
  </section>

  <!-- PAGE 12: Hands-On AI (last track before Mentorship) -->
  <section class="page track-page">
    <i class="fa-solid fa-brain page-icon" aria-hidden="true"></i>
    <h1 class="page-title">AI That Ships</h1>
    <p class="page-subtitle">Hands-On AI & Systems Engineering</p>
    <p class="tagline-block">Build practical AI-powered tools. Work with real models and real data. Design AI systems that solve real problems.</p>

    <h3>Theme</h3>
    <p>Understanding how modern AI systems are designed and engineered for real-world applications. Focus on building AI-powered tools, working with local models, creating agentic workflows, and designing practical AI systems that interact with real data and applications.</p>

    <h3>Core Topics</h3>
    <ul>
      <li>Working with open-source large language models</li>
      <li>Running local models using tools such as Ollama</li>
      <li>HuggingFace ecosystem and model usage</li>
      <li>Retrieval-Augmented Generation (RAG) systems</li>
      <li>Agentic workflows using frameworks such as LangChain and CrewAI</li>
      <li>Designing AI pipelines that process documents and structured data</li>
      <li>Responsible AI concepts and system-level risks</li>
    </ul>

    <h3>Sample Capstone Projects</h3>
    <ul>
      <li>Build a RAG application using local LLMs and vector databases</li>
      <li>Develop a multi-agent workflow coordinating multiple AI agents</li>
      <li>Build a local LLM application using Ollama and open-source models</li>
      <li>Create an AI-powered knowledge assistant using HuggingFace models</li>
      <li>Design an AI pipeline that processes documents and answers contextual queries</li>
    </ul>

    <h3>Hardware & Infrastructure Note</h3>
    <p>This track may require stronger hardware. Depending on the project, students may run models locally or use cloud-based AI platforms. Typical environments: local model execution (e.g. Ollama); systems with sufficient RAM or GPU; cloud platforms such as AWS Bedrock, AWS SageMaker, or similar.</p>
  </section>

  <!-- PAGE 13: Program, Trainer & Enquiry -->
  <section class="page page-8">
    <i class="fa-solid fa-handshake page-icon" aria-hidden="true"></i>
    <h1 class="page-title">Join the Program</h1>
    <p class="page-subtitle">Mentorship, Trainer & Enquiry</p>
    <h2>Limited Mentorship Program</h2>
    <p>This program is intentionally limited in size. Paid program; monthly participation fee. No stipend. You work on real technical problems; when you are stuck or have something to show, we engage. No one teaches you step by step. You learn the hard way; we discuss, review, and course-correct. That is why the program is limited to a small number of participants.</p>
    <p>You get:</p>
    <ul>
      <li>Discussion after you have attempted the problem</li>
      <li>Review of your code, projects, and implementation</li>
      <li>Structured problem tracks to work through</li>
      <li>Feedback on your approach, debugging, and decisions</li>
    </ul>

    <h2>Industry Exposure & Guidance</h2>
    <p>Beyond the tracks themselves, the mentor shares perspective from over two decades in cybersecurity and software: how real engineering and security teams operate, and what matters when you are building or breaking systems.</p>
    <p>Participants may receive:</p>
    <ul>
      <li>Recommendations of important technical books and research material</li>
      <li>Guidance on useful conferences, communities, and professional learning resources</li>
      <li>Advice on building strong technical portfolios and project documentation</li>
      <li>Insight into industry expectations during technical interviews</li>
    </ul>
    <p>Students who demonstrate strong effort, curiosity, and technical discipline may also receive guidance on preparing for industry opportunities and career direction.</p>

    <h2>About the Trainer</h2>
    <p><strong>Founder - Tensor42 Technologies</strong></p>
    <p>Involved in Cybersecurity and Programming since 1994. Working in the Cybersecurity industry since 2003.</p>
    <p>Professional experience includes working with a major Fortune 10 organization and a leading global antivirus company. Hands-on exposure across offensive security, malware analysis, application security, and secure systems development. Has worked with 50+ students and professionals who learned by doing and built strong technical foundations.</p>

    <h2>About Tensor42</h2>
    <p>Tensor42 Technologies builds products in cybersecurity, AI, and software. Active development includes security tooling, red team and defensive platforms, and AI-assisted workflows. The mentorship runs alongside real product work, so participants get exposure to how commercial tools and systems are built.</p>

    <h2>Application & Enquiry</h2>
    <div class="contact-box">
      <div class="qr-wrap">
        <div class="qr-code">
          <img src="https://api.qrserver.com/v1/create-qr-code/?size=100x100&amp;data=https%3A%2F%2Fwww.linkedin.com%2Fin%2Fsenthilvelan%2F" alt="QR code: LinkedIn profile" width="100" height="100">
        </div>
        <div>
          <p><strong>LinkedIn</strong><br><a href="https://www.linkedin.com/in/senthilvelan/">https://www.linkedin.com/in/senthilvelan/</a></p>
          <p style="margin-top: 0.75rem;"><strong>Email</strong><br>senthilvelantraining@gmail.com</p>
        </div>
      </div>
    </div>

    <p class="gist-line">This is just the gist. Enroll to gain more.</p>
    <p class="footer-line">This program is designed for students who want real technical depth and practical engineering skills.</p>
  </section>]]></content><author><name></name></author><summary type="html"><![CDATA[:root { --bg: #1a1d24; --card: #242830; --text: #e8eaed; --muted: #9ca3af; --accent: #f59e0b; --accent-dim: #d97706; }]]></summary></entry><entry><title type="html">The Emotional Relationship We Have With Code</title><link href="https://kkvelan.github.io/blog/2026/03/11/emotional-relationship-with-code.html" rel="alternate" type="text/html" title="The Emotional Relationship We Have With Code" /><published>2026-03-11T12:00:00+00:00</published><updated>2026-03-11T12:00:00+00:00</updated><id>https://kkvelan.github.io/blog/2026/03/11/emotional-relationship-with-code</id><content type="html" xml:base="https://kkvelan.github.io/blog/2026/03/11/emotional-relationship-with-code.html"><![CDATA[<p><img src="/blog/emotional-relationship-with-code/image1.png" alt="The Emotional Relationship We Have With Code" /></p>

<p>One side effect of AI that we do not talk about enough is this: the emotional relationship people have with code.</p>

<p>For many of us, programming was never just about output.</p>

<p>There was a quiet satisfaction in typing every statement ourselves. Watching logic unfold line by line.</p>

<p>Back then there was no autocomplete. No AI suggestions. Often no mouse-heavy IDE workflow. Just a keyboard, a blinking cursor, and patience.</p>

<p>You typed everything.
You fixed everything.
You understood everything.</p>

<p>There was joy in seeing printf print exactly what you expected.
In zeroing a register with xor eax, eax.
In tightening a loop like while(index).</p>

<p>One piece of code that programmers have often held up as beautiful is the fast inverse square root from Quake III Arena: a tiny, hardware-aware hack that uses a magic constant for its initial guess and a single Newton–Raphson iteration to approximate 1/√x without ever calling a square-root routine. It is the kind of craft that comes from knowing the machine.</p>

<div class="language-c highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kt">float</span> <span class="nf">Q_rsqrt</span><span class="p">(</span><span class="kt">float</span> <span class="n">number</span><span class="p">)</span>
<span class="p">{</span>
  <span class="kt">long</span> <span class="n">i</span><span class="p">;</span>
  <span class="kt">float</span> <span class="n">x2</span><span class="p">,</span> <span class="n">y</span><span class="p">;</span>
  <span class="k">const</span> <span class="kt">float</span> <span class="n">threehalfs</span> <span class="o">=</span> <span class="mi">1</span><span class="p">.</span><span class="mi">5</span><span class="n">F</span><span class="p">;</span>
  <span class="n">x2</span> <span class="o">=</span> <span class="n">number</span> <span class="o">*</span> <span class="mi">0</span><span class="p">.</span><span class="mi">5</span><span class="n">F</span><span class="p">;</span>
  <span class="n">y</span>  <span class="o">=</span> <span class="n">number</span><span class="p">;</span>
  <span class="n">i</span>  <span class="o">=</span> <span class="o">*</span> <span class="p">(</span> <span class="kt">long</span> <span class="o">*</span> <span class="p">)</span> <span class="o">&amp;</span><span class="n">y</span><span class="p">;</span>
  <span class="n">i</span>  <span class="o">=</span> <span class="mh">0x5f3759df</span> <span class="o">-</span> <span class="p">(</span> <span class="n">i</span> <span class="o">&gt;&gt;</span> <span class="mi">1</span> <span class="p">);</span>
  <span class="n">y</span>  <span class="o">=</span> <span class="o">*</span> <span class="p">(</span> <span class="kt">float</span> <span class="o">*</span> <span class="p">)</span> <span class="o">&amp;</span><span class="n">i</span><span class="p">;</span>
  <span class="n">y</span>  <span class="o">=</span> <span class="n">y</span> <span class="o">*</span> <span class="p">(</span> <span class="n">threehalfs</span> <span class="o">-</span> <span class="p">(</span> <span class="n">x2</span> <span class="o">*</span> <span class="n">y</span> <span class="o">*</span> <span class="n">y</span> <span class="p">)</span> <span class="p">);</span>
  <span class="k">return</span> <span class="n">y</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div></div>

<p>Take Rust. Why do people love enums in Rust? Because they make invalid states unrepresentable. You do not have “maybe null” scattered everywhere; you have <code class="language-plaintext highlighter-rouge">Option&lt;T&gt;</code>. You do not hide errors in a magic value; you have <code class="language-plaintext highlighter-rouge">Result&lt;T, E&gt;</code>. The type system encodes the shape of your logic, and pattern matching forces you to handle every case. It is a different kind of craft: not bit-twiddling, but the pleasure of the compiler and the data structure working together.</p>

<div class="language-rust highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">enum</span> <span class="n">Message</span> <span class="p">{</span>
    <span class="n">Quit</span><span class="p">,</span>
    <span class="n">Move</span> <span class="p">{</span> <span class="n">x</span><span class="p">:</span> <span class="nb">i32</span><span class="p">,</span> <span class="n">y</span><span class="p">:</span> <span class="nb">i32</span> <span class="p">},</span>
    <span class="nf">Write</span><span class="p">(</span><span class="nb">String</span><span class="p">),</span>
<span class="p">}</span>

<span class="k">fn</span> <span class="nf">handle</span><span class="p">(</span><span class="n">msg</span><span class="p">:</span> <span class="n">Message</span><span class="p">)</span> <span class="p">{</span>
    <span class="k">match</span> <span class="n">msg</span> <span class="p">{</span>
        <span class="nn">Message</span><span class="p">::</span><span class="n">Quit</span> <span class="k">=&gt;</span> <span class="p">{</span> <span class="cm">/* ... */</span> <span class="p">}</span>
        <span class="nn">Message</span><span class="p">::</span><span class="n">Move</span> <span class="p">{</span> <span class="n">x</span><span class="p">,</span> <span class="n">y</span> <span class="p">}</span> <span class="k">=&gt;</span> <span class="p">{</span> <span class="cm">/* ... */</span> <span class="p">}</span>
        <span class="nn">Message</span><span class="p">::</span><span class="nf">Write</span><span class="p">(</span><span class="n">s</span><span class="p">)</span> <span class="k">=&gt;</span> <span class="p">{</span> <span class="cm">/* ... */</span> <span class="p">}</span>
    <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>

<p>It was not just about shipping software. It was craft.</p>

<p>I remember attending ILUGC meetups in early 2000 at IIT Madras. People gathered to talk about Linux, kernels, patches, open-source. I would walk up to strangers asking how they got their modem working on Debian 3.1. That is where I first heard serious discussions about FreeBSD and OpenBSD.</p>

<p>There were similar communities around the world: the Chaos Computer Club, BSD user groups, Linux User Groups everywhere. Pure meetups of craft. No branding. No hype. Just people obsessed with understanding systems.</p>

<p>This was also the era of 5.25-inch floppy disks, thin magnetic media inside soft sleeves. 360 KB or 1.2 MB felt sufficient. Many systems had no hard disks. There was no internet. No Windows. Just MS-DOS 4.01, a blinking A:\ prompt, and whatever tools fit on a floppy.</p>

<p>Abstraction was thin. You could almost see the hardware through the code.</p>

<p>Now AI-assisted tools generate much of that surface layer.</p>

<p>Syntax appears instantly.
Boilerplate disappears.
Patterns complete before you finish thinking.</p>

<p>Productivity increases. And that is a good thing.</p>

<p>But I sometimes think about what changes in our relationship with code when we stop typing the details ourselves.</p>

<p>Maybe the craft does not disappear.
Maybe it moves upward, from writing syntax to designing systems.</p>

<p>I know this may not matter from a productivity standpoint, but there was something meaningful about typing every line ourselves.</p>

<p>If you have read “The Story of Mel” you may understand the kind of craft I am referring to. It was never really about assembly code. It was about intimacy with the machine.</p>

<h2 id="the-story-of-mel-summarized-by-ai">The Story of Mel Summarized by AI</h2>

<p>“The Story of Mel,” written by Ed Nather, describes a remarkable programmer named Mel who worked on the early LGP-30 in the late 1950s. The machine used a rotating drum for memory, so accessing instructions depended on the physical timing of the drum. Mel understood this timing so deeply that he wrote assembly programs arranged precisely to match the drum’s rotation. Instead of using conventional jumps, he positioned instructions so that when one finished executing, the next one would appear under the read head at exactly the right moment. His programs even used self-modifying code to adjust instruction addresses dynamically. Other programmers tried to rewrite his work in a cleaner, more understandable way, but their versions ran slower than Mel’s original code. What made Mel’s work extraordinary was that his code was not just logical; it was synchronized with the physical behavior of the hardware. The story shows a programmer who treated the machine almost like a mechanical instrument. It illustrates a time when deep knowledge of hardware and software together defined programming skill. 
Even today, the story is remembered as a symbol of craftsmanship and intimacy with the machine.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">When 50 Lines of Code Turn an Application into an AI System: Visibility and Governance for ISO/IEC 42001</title><link href="https://kkvelan.github.io/blog/2026/03/08/iso-42001-50-lines-ai.html" rel="alternate" type="text/html" title="When 50 Lines of Code Turn an Application into an AI System: Visibility and Governance for ISO/IEC 42001" /><published>2026-03-08T12:00:00+00:00</published><updated>2026-03-08T12:00:00+00:00</updated><id>https://kkvelan.github.io/blog/2026/03/08/iso-42001-50-lines-ai</id><content type="html" xml:base="https://kkvelan.github.io/blog/2026/03/08/iso-42001-50-lines-ai.html"><![CDATA[<p><img src="/blog/iso-42001-50-lines-ai/image1.webp" alt="When 50 Lines of Code Turn an Application into an AI System: Visibility and Governance for ISO/IEC 42001" /></p>

<p>A deterministic application can become an AI-enabled system through very small code changes. Sometimes as little as 50 lines that introduce a model call can transform a traditional application into one whose behaviour is influenced by AI; content, recommendations, or decisions then depend on the model. That shift creates a major governance challenge: the same system that was “just an application” is now in scope for ISO/IEC 42001 and must be identified, documented, and managed. This article is for cybersecurity leaders, AI governance professionals, architects, auditors, and implementers. It explains why identifying AI systems is harder than it sounds, what auditors are up against, and how implementers can build governance that makes AI usage visible and governable. The focus here is on identifying and inventorying AI systems and on the governance practices that support that. It does not cover the full set of ISO/IEC 42001 requirements (e.g. lifecycle, risk treatment, documented information, or management review). The working definition of “AI system” used here is practical; for the formal definition, see the standard or ISO/IEC 22989.</p>

<h2 id="1-the-small-change-that-changes-everything">1. The Small Change That Changes Everything</h2>

<p>From a governance perspective, what matters is not how many lines of code an application has, but whether its behaviour is influenced by an AI model. A large, rule-based system remains deterministic. A small change that adds a single model call can make the same system non-deterministic and subject to AI governance. That shift is often invisible at the process or documentation level; only code and integration points reveal it.</p>

<p>Consider a support ticket router. Originally it might classify tickets using fixed rules: keyword matching, category lookup tables, and priority thresholds. The behaviour is predictable; the system is not an AI system. A product team then adds a call to an external API that uses a language model to suggest category and priority. A developer adds a small module: an HTTP client, a prompt, and a call to the model API. The change may be a few dozen lines. Functionally, the system is now “smarter”; from a governance standpoint, it is now an AI system.</p>

<p>The following illustrates the kind of change that turns a deterministic path into an AI-influenced one. First, a purely rule-based classification:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">def</span> <span class="nf">classify_ticket</span><span class="p">(</span><span class="n">subject</span><span class="p">:</span> <span class="nb">str</span><span class="p">,</span> <span class="n">body</span><span class="p">:</span> <span class="nb">str</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="nb">dict</span><span class="p">:</span>
    <span class="c1"># Deterministic: rules only
</span>    <span class="k">if</span> <span class="s">"billing"</span> <span class="ow">in</span> <span class="n">subject</span><span class="p">.</span><span class="n">lower</span><span class="p">()</span> <span class="ow">or</span> <span class="s">"invoice"</span> <span class="ow">in</span> <span class="n">body</span><span class="p">.</span><span class="n">lower</span><span class="p">():</span>
        <span class="k">return</span> <span class="p">{</span><span class="s">"category"</span><span class="p">:</span> <span class="s">"billing"</span><span class="p">,</span> <span class="s">"priority"</span><span class="p">:</span> <span class="s">"high"</span><span class="p">}</span>
    <span class="k">if</span> <span class="s">"login"</span> <span class="ow">in</span> <span class="n">subject</span><span class="p">.</span><span class="n">lower</span><span class="p">():</span>
        <span class="k">return</span> <span class="p">{</span><span class="s">"category"</span><span class="p">:</span> <span class="s">"access"</span><span class="p">,</span> <span class="s">"priority"</span><span class="p">:</span> <span class="s">"medium"</span><span class="p">}</span>
    <span class="k">return</span> <span class="p">{</span><span class="s">"category"</span><span class="p">:</span> <span class="s">"general"</span><span class="p">,</span> <span class="s">"priority"</span><span class="p">:</span> <span class="s">"low"</span><span class="p">}</span>
</code></pre></div></div>

<p>After the change, the same function delegates to a model for edge cases. The surface area of the change is small; the governance impact is large.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">def</span> <span class="nf">classify_ticket</span><span class="p">(</span><span class="n">subject</span><span class="p">:</span> <span class="nb">str</span><span class="p">,</span> <span class="n">body</span><span class="p">:</span> <span class="nb">str</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="nb">dict</span><span class="p">:</span>
    <span class="c1"># Try rules first
</span>    <span class="k">if</span> <span class="s">"billing"</span> <span class="ow">in</span> <span class="n">subject</span><span class="p">.</span><span class="n">lower</span><span class="p">()</span> <span class="ow">or</span> <span class="s">"invoice"</span> <span class="ow">in</span> <span class="n">body</span><span class="p">.</span><span class="n">lower</span><span class="p">():</span>
        <span class="k">return</span> <span class="p">{</span><span class="s">"category"</span><span class="p">:</span> <span class="s">"billing"</span><span class="p">,</span> <span class="s">"priority"</span><span class="p">:</span> <span class="s">"high"</span><span class="p">}</span>
    <span class="k">if</span> <span class="s">"login"</span> <span class="ow">in</span> <span class="n">subject</span><span class="p">.</span><span class="n">lower</span><span class="p">():</span>
        <span class="k">return</span> <span class="p">{</span><span class="s">"category"</span><span class="p">:</span> <span class="s">"access"</span><span class="p">,</span> <span class="s">"priority"</span><span class="p">:</span> <span class="s">"medium"</span><span class="p">}</span>
    <span class="c1"># AI-influenced path: model call
</span>    <span class="n">response</span> <span class="o">=</span> <span class="n">model_client</span><span class="p">.</span><span class="n">complete</span><span class="p">(</span>
        <span class="n">prompt</span><span class="o">=</span><span class="sa">f</span><span class="s">"Classify support ticket. Subject: </span><span class="si">{</span><span class="n">subject</span><span class="si">}</span><span class="s">. Body: </span><span class="si">{</span><span class="n">body</span><span class="p">[</span><span class="si">:</span><span class="mi">500</span><span class="p">]</span><span class="si">}</span><span class="s">."</span><span class="p">,</span>
        <span class="n">max_tokens</span><span class="o">=</span><span class="mi">50</span>
    <span class="p">)</span>
    <span class="k">return</span> <span class="n">parse_model_response</span><span class="p">(</span><span class="n">response</span><span class="p">)</span>  <span class="c1"># category, priority from model
</span></code></pre></div></div>

<p>The application is now subject to AI governance: model behaviour affects decisions, and the organisation must identify, document, and manage it under ISO/IEC 42001.</p>

<h2 id="2-why-this-creates-an-ai-governance-problem">2. Why This Creates an AI Governance Problem</h2>

<p>Once the model is in the path, the system’s outputs depend on the model’s behaviour. That behaviour can change with model updates, prompt changes, or input distribution shift. The organisation needs to treat it as an AI system: inventory it, assess risk, define policies, and maintain documentation. If the change was never flagged to compliance or architecture, the system will not appear in the AI inventory. Auditors cannot validate what is not identified; implementers cannot govern what they do not know exists. ISO/IEC 42001 is not only about auditing AI systems; it is about building management processes that make AI usage visible and governable. Without a deliberate process, small, incremental additions of AI create invisible governance gaps.</p>

<h2 id="3-why-identifying-ai-systems-is-harder-than-traditional-asset-identification">3. Why Identifying AI Systems Is Harder Than Traditional Asset Identification</h2>

<p>Many organisations already maintain an asset register. In frameworks such as ISO/IEC 27001, the focus is on information assets within the scope of the ISMS: applications, infrastructure, data, and often people or roles. You list them, classify them, assign ownership, and link them to risks and controls. The unit of account is the asset itself (e.g. “Support Ticket System,” “CRM”).</p>

<p>ISO/IEC 42001 asks for something different: an inventory of AI systems. The unit of account is not “any asset” but “a system in which AI influences decisions or outcomes.” The same application may appear in both registers; in 42001 it is in scope only if and where AI is used. So 42001 requires an extra dimension: not just “what systems we have,” but “where AI is used inside them.” A 27001 asset register rarely captures that; it does not usually tag “contains model call” or “AI in decision path.” AI may be introduced via third-party APIs, SaaS features, or internal microservices, with no central list of “AI projects.” The inventory has to be discovered, not simply read from an existing register. Relying on the 27001 register alone is therefore insufficient for 42001.</p>

<h2 id="4-auditor-perspective-the-visibility-and-inventory-challenge">4. Auditor Perspective: The Visibility and Inventory Challenge</h2>

<p>Auditors must validate that the organisation has identified its AI systems in scope. That means assessing whether the discovery process is repeatable, documented, and sufficient to support the stated scope. If the organisation has not looked in the right places (e.g. codebases, API integrations, vendor capabilities), the auditor cannot rely on the inventory. The challenge is to design procedures that catch systems like the ticket router above: small changes with large governance implications. AI often appears at integration points (calls to model APIs, embeddings, SaaS ML features) or in internal services that are not named or documented as “AI.” Auditors should check that the organisation has a defined process for discovering and maintaining the AI system inventory, that the process was applied consistently, and that the resulting scope is plausible given the organisation’s size, industry, and use of technology. The objective is to gain confidence that the inventory is a reasonable basis for the AI management system, not that every possible AI use has been found. Perfect visibility is unrealistic; a repeatable discovery process and evidence that it is followed are what auditors should validate.</p>

<h2 id="5-auditor-risk-when-hidden-ai-systems-undermine-the-audit">5. Auditor Risk: When Hidden AI Systems Undermine the Audit</h2>

<p>A change in an application that turns it into an AI system (e.g. a few dozen lines adding a model call) is exactly the kind of change that creates auditor risk. The auditor is asked to form an opinion on whether the organisation’s AI management system is adequate: whether AI systems are identified, risks are assessed, and controls are in place. If the organisation (or the auditor) does not account for the fact that small, local code or integration changes can turn a previously deterministic system into an AI system, the scope of the management system can be materially understated. Systems that should be in the inventory are missing; risks attached to those systems are not assessed; the auditor may be validating a picture that is incomplete. That gap is auditor risk: the risk that the audit conclusion is based on an inventory or scope that omits AI systems that fall within the intended scope of the management system. Auditors therefore need to understand this dynamic and to design procedures that address it (e.g. sampling codebases or integration points, challenging the discovery process, and assessing whether the organisation has considered “small change” scenarios). Acknowledging that a small change can create a large governance shift is part of assessing and mitigating auditor risk.</p>

<h2 id="6-implementor-perspective-how-to-build-governance-around-this-reality">6. Implementer Perspective: How to Build Governance Around This Reality</h2>

<p>Implementers face the same reality from the inside: they must set up AI governance so that the organisation can identify, document, and manage AI systems even when those systems emerge from small code changes. That starts with defining what qualifies as an AI system. A practical definition is: a system in which an AI model (internal or external) influences content, recommendations, or decisions that affect the organisation or its stakeholders. Rule-based logic alone does not qualify; once a model call influences the outcome, the system is in scope. With that definition in place, the implementer establishes an AI system inventory as the single source of truth: system name, purpose, where AI is used (e.g. classification, recommendation, generation), how it is implemented (internal model, external API, SaaS feature), and ownership. The inventory should be linked to architecture and risk assessments so it drives the rest of the management system. AI review should be integrated into architecture review and change management: when new services, integrations, or features are proposed, there is a checkpoint to ask whether AI is involved and whether the system belongs in the inventory. Awareness training is important: developers and product teams need to know that introducing a model API or an AI-backed SaaS feature triggers governance steps. Some organisations add AI governance office hours or quick security review paths so that teams can get fast, lightweight approvals for low-risk AI use without blocking delivery. Without awareness and accessible approval paths, AI will continue to appear in production without being captured in the inventory.</p>

<h2 id="7-governance-checkpoints-and-approval-workflow">7. Governance Checkpoints and Approval Workflow</h2>

<p>To prevent AI from being introduced without oversight, organisations should define clear checkpoints. Before a new external AI service or model integration goes into production, it should go through an approval process: who is allowed to sign off, what information must be documented (vendor, data flows, purpose, risk level), and how the system is added to the AI system inventory. The same applies to internal model deployments or material changes to existing AI use (e.g. prompt changes, model upgrades). Checkpoints can be embedded in architecture review boards, change advisory boards, or a dedicated AI governance review. The goal is not to block innovation but to ensure that every AI system is known, scoped, and managed.</p>
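<p>One way to make these checkpoints harder to bypass is a lightweight gate in the CI pipeline: the build fails when an AI-related dependency appears without a matching entry in the AI system inventory. The sketch below assumes a Python codebase; the package list, field names, and inventory format are illustrative assumptions, not something ISO/IEC 42001 prescribes.</p>

```python
# Hypothetical CI gate: fail the pipeline when an AI-related dependency
# appears without a matching AI system inventory entry.
# The package names and inventory layout are illustrative assumptions.

AI_PACKAGES = {"openai", "anthropic", "langchain", "transformers"}

def unregistered_ai_deps(declared_deps, inventory_entries):
    """Return AI-related dependencies that no inventory entry covers."""
    used = {d.lower() for d in declared_deps} & AI_PACKAGES
    registered = set()
    for entry in inventory_entries:
        registered |= {d.lower() for d in entry.get("ai_dependencies", [])}
    return sorted(used - registered)

if __name__ == "__main__":
    deps = ["requests", "openai"]                      # e.g. from requirements.txt
    inventory = [{"system": "ticket-router",
                  "ai_dependencies": ["langchain"]}]   # AI system inventory
    missing = unregistered_ai_deps(deps, inventory)
    if missing:
        print(f"FAIL: AI dependencies not in inventory: {missing}")
    else:
        print("OK: all AI dependencies registered")
```

<p>In practice the inventory would be read from the register itself and the package list maintained centrally; the point is that the check runs on every change, not once a year.</p>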

<h3 id="shadow-ai-a-consequence-of-weak-governance-and-missing-checkpoints">Shadow AI: A Consequence of Weak Governance and Missing Checkpoints</h3>

<p>When there are no clear approval steps or when developers are unaware that AI use triggers governance, “Shadow AI” appears: AI usage that is not in the inventory, not approved, and not subject to the organisation’s policies or risk controls. Examples include teams subscribing to external model APIs on a credit card, embedding SaaS features that use ML without checking data or compliance implications, or adding a small model call in a legacy application without telling anyone. Shadow AI is not necessarily malicious; it is often the result of speed and lack of awareness. The consequence is the same: the organisation cannot govern what it does not know about. Strong checkpoints and approval workflows, combined with discovery (see below), reduce Shadow AI by making AI use visible and expected to be registered.</p>

<h2 id="8-discovery-methods-to-uncover-hidden-or-unapproved-ai-usage">8. Discovery Methods to Uncover Hidden or Unapproved AI Usage</h2>

<p>Technical discovery complements governance checkpoints by finding AI that was introduced without going through them. Implementers and auditors can use a combination of methods:</p>

<ul>
  <li>Code scanning for AI SDKs: Scan codebases for imports or dependencies that indicate model usage (e.g. OpenAI client libraries, Hugging Face, LangChain, vendor SDKs). Search for prompt construction, completion calls, and embedding APIs.</li>
  <li>Dependency analysis: Review dependency lists (e.g. package.json, requirements.txt, go.mod, Cargo.toml) for ML/AI libraries and API clients. Flag new or updated dependencies that suggest AI integration.</li>
  <li>API integration review: Identify outbound calls to model APIs, embeddings APIs, or vendor endpoints that document AI features. Use API inventories, network egress reviews, or integration documentation.</li>
  <li>Infrastructure and service usage monitoring: Review usage of internal ML platforms, model serving endpoints, or shared AI services. Monitor which applications or teams consume these services.</li>
  <li>Vendor and SaaS capability review: Periodically check whether purchased applications or platforms have introduced or expanded AI features (e.g. suggested replies, content moderation, summarisation) that process organisational data.</li>
</ul>
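<p>The code-scanning and dependency-analysis steps above lend themselves to automation. Here is a minimal sketch in Python that flags AI-related packages in a requirements.txt-style list; the pattern list is an illustrative assumption, deliberately incomplete, and would need to be maintained per organisation:</p>

```python
import re

# Illustrative, incomplete list of packages that suggest model usage;
# a real scanner would maintain and extend this list per organisation.
AI_PACKAGE_PATTERNS = [
    r"^openai\b", r"^anthropic\b", r"^langchain\b",
    r"^transformers\b", r"^sentence-transformers\b",
]

def scan_requirements(lines):
    """Return requirement lines that look like AI/ML dependencies."""
    hits = []
    for line in lines:
        name = line.strip().lower()
        if any(re.match(p, name) for p in AI_PACKAGE_PATTERNS):
            hits.append(line.strip())
    return hits

reqs = ["requests==2.31.0", "openai>=1.0", "numpy", "langchain==0.1.0"]
print(scan_requirements(reqs))  # flags the openai and langchain lines
```

<p>The same idea extends to scanning imports in source files or outbound hostnames in egress logs; the output feeds the AI system inventory rather than replacing it.</p>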

<p>Architecture review and developer or product team interviews remain important: trace data flows, ask where classification, recommendation, or generation is performed, and which products call external AI services. Discovery should be repeatable and documented so that auditors can assess whether it was applied consistently.</p>

<h2 id="9-building-and-maintaining-the-ai-system-inventory">9. Building and Maintaining the AI System Inventory</h2>

<p>The AI system inventory is the central artefact. It should record, for each AI system: name, purpose, where AI is used (e.g. classification, recommendation, generation), how it is implemented (internal model, external API, SaaS feature), ownership, and linkage to risk assessment and controls. The inventory should be updated when new AI features are deployed, when integrations change, or when discovery finds previously unknown usage. It should be owned by a function that can enforce the process (e.g. AI governance, architecture, or risk). Linking the inventory to architecture and change management ensures that new systems are added at the right time and that the inventory stays actionable for the rest of the management system.</p>
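<p>The record described above can be captured as a simple structured type. A sketch using a Python dataclass follows; the field names are one possible layout, not a schema mandated by ISO/IEC 42001:</p>

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in the AI system inventory; field names are illustrative."""
    name: str
    purpose: str
    ai_usage: list[str]            # e.g. ["classification", "generation"]
    implementation: str            # "internal model" | "external API" | "SaaS feature"
    owner: str
    risk_assessment_ref: str = ""  # link into the risk register
    controls: list[str] = field(default_factory=list)

ticket_router = AISystemRecord(
    name="Support Ticket Router",
    purpose="Route and prioritise incoming support tickets",
    ai_usage=["classification"],
    implementation="external API",
    owner="Support Platform Team",
    risk_assessment_ref="RA-2026-014",   # hypothetical reference
)
print(ticket_router.name, ticket_router.ai_usage)
```

<p>Whether the inventory lives in code, a YAML file, or a GRC tool matters less than keeping it structured and versioned, so that change management and discovery can update it automatically.</p>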

<p>Some organisations go further with structures that, while not strictly required by ISO/IEC 42001, support consistent and governable AI use. An approved AI register (or approved AI services list) works like an approved list of base images (e.g. approved Docker or Linux images): only listed models, APIs, or vendors are permitted for production use unless an exception is documented. That makes discovery and policy enforcement easier. Where prompts drive material decisions, a prompt registry can be required: a central place to store, version, and review prompts used in production so that changes are visible and auditable. A centralised LLM orchestration layer is another option: all LLM calls are routed through one gateway or proxy. That gives a single point for logging, policy enforcement, and visibility into which applications use which models; it also simplifies discovery because outbound model traffic is concentrated in one place. These practices are organisational choices that complement the AI system inventory and make governance more tractable at scale.</p>
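<p>The centralised LLM orchestration idea can be illustrated with a single wrapper that applications call instead of a vendor SDK directly. The client interface and log fields below are assumptions; a production gateway would add authentication, policy enforcement, and persistent logging:</p>

```python
import datetime

class LLMGateway:
    """Single choke point for model calls: records which app used which model."""
    def __init__(self, client):
        self._client = client          # vendor client, injected
        self.call_log = []             # in-memory here; real gateways persist this

    def complete(self, app_name, model, prompt, **kwargs):
        self.call_log.append({
            "ts": datetime.datetime.now().isoformat(timespec="seconds"),
            "app": app_name,
            "model": model,
        })
        return self._client.complete(model=model, prompt=prompt, **kwargs)

class FakeClient:                      # stand-in for a real vendor SDK
    def complete(self, model, prompt, **kwargs):
        return f"[{model}] echo: {prompt[:20]}"

gw = LLMGateway(FakeClient())
gw.complete("ticket-router", "small-model-v1", "Classify this ticket ...")
print([(c["app"], c["model"]) for c in gw.call_log])
```

<p>Because every call passes through one place, discovery becomes a log query instead of a codebase-wide hunt.</p>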

<h2 id="10-why-continuous-discovery-matters">10. Why Continuous Discovery Matters</h2>

<p>Enterprises will never have perfect, real-time visibility into every line of code or every API call. New integrations and features are added continuously; 50 lines of code can turn an application into an AI system at any time. So discovery cannot be a one-off. It should be periodic (e.g. quarterly or as part of architecture or risk cycles) and triggered by significant events (e.g. new vendor onboarding, major releases, or post-incident reviews). The aim is not perfection but a repeatable, risk-based process that keeps the inventory accurate enough to support risk management and compliance. Continuous discovery, combined with governance checkpoints and developer awareness, is what makes AI usage visible and governable over time.</p>

<h2 id="11-conclusion-ai-governance-begins-the-moment-ai-influences-system-behaviour">11. Conclusion: AI Governance Begins the Moment AI Influences System Behaviour</h2>

<p>A deterministic application that gains a model call becomes an AI system under ISO/IEC 42001. The governance impact is large even when the code change is small. Identifying such systems is harder than traditional asset identification because 42001 requires knowing not only what systems exist but where AI is used inside them. Auditors validate that the organisation has a repeatable discovery process and a plausible inventory; implementers build that process by defining what counts as an AI system, establishing the inventory, introducing governance checkpoints and approval workflows, integrating AI review into architecture and change management, training developers, and applying technical discovery methods. Shadow AI is the consequence of weak governance and missing checkpoints; reducing it depends on making AI use visible and expected to be registered. ISO/IEC 42001 is not just about auditing AI systems; it is about building management processes that make AI usage visible and governable. AI governance begins the moment AI influences system behaviour, regardless of how many lines of code it took to get there.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">Penetration Testing Is Boring: A Perspective for Freshers</title><link href="https://kkvelan.github.io/blog/2026/03/05/penetration-testing-is-boring.html" rel="alternate" type="text/html" title="Penetration Testing Is Boring: A Perspective for Freshers" /><published>2026-03-05T12:00:00+00:00</published><updated>2026-03-05T12:00:00+00:00</updated><id>https://kkvelan.github.io/blog/2026/03/05/penetration-testing-is-boring</id><content type="html" xml:base="https://kkvelan.github.io/blog/2026/03/05/penetration-testing-is-boring.html"><![CDATA[<p>If you are a fresher looking to get into cybersecurity, especially penetration testing, here is a perspective from the other side: penetration testing is boring. Not in a bad way, but in a real way. 
The job rarely looks like what you see in labs or challenges. This post is for those who want a clearer picture of what the work actually looks like, at least from a lead’s perspective.</p>

<p><img src="/blog/penetration-testing-is-boring/image1.jpg" alt="Penetration Testing Is Boring: A Perspective for Freshers" /></p>

<h2 id="what-the-work-typically-looks-like">What the Work Typically Looks Like</h2>

<p>A typical engagement runs through the stages below. Each one is less like a movie and more like careful, repeatable project work.</p>

<h2 id="speak-to-customers-and-understand-the-scope">Speak to Customers and Understand the Scope</h2>

<p>In the movies, someone hands you a target and says “get in.” In reality, you spend a lot of time on calls and in meetings. You need to understand what is in scope (which systems, which types of tests, which environments) and what is explicitly off limits. You need to know what “success” means for the client: is it a compliance checkbox, a pre-go-live check, or a deep security review? Scope creep and scope fights are common; unclear scope leads to rework, disputes, or both. As a lead, you treat this phase as the foundation. Get it wrong here and the rest of the engagement is built on sand.</p>

<h2 id="read-the-requirement-make-a-proposal-interpret-it-clearly">Read the Requirement, Make a Proposal, Interpret It Clearly</h2>

<p>You will read statements of work, RFPs, and requirement documents. You will draft or contribute to proposals: what you will do, how long it will take, what you will deliver, and what you will not do. This is not glamorous; it is paperwork and interpretation. But clarity here avoids pain later. Clients often use vague language (“test our infrastructure,” “external penetration test”). You have to turn that into a concrete plan: which IP ranges, which application scope, black box or grey box, whether social engineering or physical testing is included. If you do not nail this down, you will either under-deliver in the client’s eyes or over-deliver and burn out. Neither is good.</p>

<h2 id="talk-to-stakeholders-and-understand-the-targets">Talk to Stakeholders and Understand the Targets</h2>

<p>You work with internal project managers, client IT teams, and sometimes compliance or risk owners. You need to understand the environment: what systems exist, how they are used, what is critical, and what is legacy. You need to know the boundaries: which networks you can touch, which credentials you will get (if any), and what hours or windows you have for testing. Again, this is communication and coordination, not keyboard wizardry. Stakeholders may not understand the difference between a vuln scan and a pen test; you explain, you align, and you document what was agreed. This phase sets the stage for the actual testing so that when you run your tools, you are testing the right things in the right way.</p>

<h2 id="run-the-scans-probes-and-exploits-most-of-them-fail">Run the Scans, Probes, and Exploits; Most of Them Fail</h2>

<p>This is the part that looks most like “hacking,” and it is still nothing like the movies. You run scanners, run manual probes, and try exploits. Most attempts do not land. Systems are patched, configurations are locked down, or the vulnerability you thought was there is not exploitable in this environment. You are not “getting in” every time; you are methodically testing and documenting what works and what does not. You track everything: what you ran, when, and what the result was. The job is as much about ruling out risks as it is about finding them. If you go in expecting to crack every engagement in an hour, you will be frustrated. If you go in expecting a mix of findings, dead ends, and careful note-taking, you will be prepared.</p>

<h2 id="create-screenshots-collect-logs-and-gather-proof">Create Screenshots, Collect Logs, and Gather Proof</h2>

<p>Evidence matters. Every finding that goes into the report needs to be provable. You spend a lot of time capturing screenshots, saving command output, saving logs, and organizing proof for every vulnerability. This is meticulous work. You label files, you note timestamps, and you make sure a reader can follow your steps and reproduce the issue. In the movies, the hacker moves on to the next target. In reality, you stop, document, and then move on. Poor evidence means findings get challenged, clients lose trust, or the report is unusable for remediation. Treat evidence collection as a core part of the job, not an afterthought.</p>
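<p>Even this unglamorous step benefits from a little tooling. Below is a sketch of helpers that name evidence files consistently and record what was run and when; the naming scheme and fields are just one convention I am assuming here, not an industry standard:</p>

```python
import datetime
import hashlib

def evidence_name(finding_id, description, ext="png"):
    """Build a consistent, sortable evidence filename with a timestamp."""
    ts = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    slug = "-".join(description.lower().split())[:40]
    return f"{finding_id}_{ts}_{slug}.{ext}"

def log_entry(finding_id, command, output):
    """Record what was run, when, and a hash of the captured output."""
    return {
        "finding": finding_id,
        "time": datetime.datetime.now().isoformat(timespec="seconds"),
        "command": command,
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }

print(evidence_name("F-003", "SQLi on login form"))
```

<p>The specifics do not matter much; what matters is that a reader can map every screenshot and log file back to a finding and a point in time without asking you.</p>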

<h2 id="prepare-the-report">Prepare the Report</h2>

<p>Report writing is a large part of the job. You write executive summaries for people who will never read the full report. You list vulnerabilities with clear titles, descriptions, severity, and impact. You suggest mitigation strategies and recommendations. You make sure the language is consistent, the severity ratings are justified, and the report is actionable. This is not a one-page “we found some stuff” note; it is a deliverable that the client will use for compliance, for remediation planning, and for internal communication. Many technically strong testers struggle here because they prefer running tools to writing. If you want to lead engagements or be taken seriously, you need to be able to write clearly and structure a report that stands up to scrutiny.</p>

<h2 id="have-a-closure-call-with-the-customer">Have a Closure Call With the Customer</h2>

<p>When the report is done, you do not just send it and disappear. You have a closure call (or several) with the customer. You walk stakeholders through the findings, explain severity and impact in plain language, and go through mitigation steps. You answer questions: Why is this critical? What do we do first? Can you help us understand this? Some clients are technical; many are not. Your job is to make sure they understand what was found and what to do next. This is again communication and empathy, not hacking. How you present the results often matters as much as the results themselves. A poorly delivered message can cause panic or dismissal; a clear, calm delivery helps the client act.</p>

<h2 id="sometimes-re-test-after-the-customer-patches">Sometimes Re-Test After the Customer Patches</h2>

<p>After the client remediates, they often want you to run another round of tests to verify that the issues are fixed. You re-run the relevant checks, confirm that the vulnerability is no longer present (or is adequately mitigated), and document the outcome. Then you update the report or issue a short closure note. This is not a full new engagement; it is verification. But it is part of the lifecycle. Some engagements have two or three rounds of test, report, fix, re-test. You need to be comfortable with that rhythm and keep your evidence and documentation consistent so that “we fixed it” can be backed up by your retest results.</p>

<h2 id="move-on-to-the-next-project">Move On to the Next Project</h2>

<p>When the engagement is closed, you move on to the next one. Rinse and repeat. You might be on multiple engagements in parallel: one in scoping, one in testing, one in reporting. You switch context, you keep your notes organized, and you do it again. There is no single “big score”; there is a pipeline of projects, each with the same phases. The excitement is not in the drama of one hack; it is in getting good at the full cycle and in the moments when your work actually helps a client improve their security.</p>

<h2 id="its-not-like-the-labs">It’s Not Like the Labs</h2>

<p>None of this looks much like what you learned from CTF challenges, from platforms like Hack The Box or TryHackMe, or from “black screen, green matrix” ideas of hacking. Those are great for building skills: understanding vulnerabilities, using tools, and thinking like an attacker. But the day job is different. It is scoped, documented, and repeatable. It is meetings, reports, and evidence. It is often boring in the sense of being routine and process-driven.</p>

<p>If you go in expecting only technical thrills, you will be disappointed. If you go in knowing that the job is as much about communication, documentation, and consistency as it is about finding flaws, you will have a better start. Penetration testing is boring; that is the job. The excitement is in getting good at it and in the moments when your understanding and discipline actually help a client improve their security.</p>

<h2 id="where-the-real-opportunity-lies">Where the Real Opportunity Lies</h2>

<p>The pull of the job is real. Few roles put you in front of production systems with permission to break them. You probe, you test, and in scoped engagements you exploit or take down live infrastructure; with a contract and rules, not a hoodie in a basement. Along the way you pick up what most developers never see: hardware quirks, low-level code, OS internals, bypass techniques. There is real responsibility, real thrill, and real hardcore knowledge here; it just does not look the way you imagined.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[If you are a fresher looking to get into cybersecurity, especially penetration testing, here is a perspective from the other side: penetration testing is boring. Not in a bad way, but in a real way. The job rarely looks like what you see in labs or challenges. This post is for those who want a clearer picture of what the work actually looks like, at least from a lead’s perspective.]]></summary></entry><entry><title type="html">CrewAI and Deep Agents for Agentic Discovery</title><link href="https://kkvelan.github.io/blog/2026/03/02/crewai-deepagents-discovery.html" rel="alternate" type="text/html" title="CrewAI and Deep Agents for Agentic Discovery" /><published>2026-03-02T12:00:00+00:00</published><updated>2026-03-02T12:00:00+00:00</updated><id>https://kkvelan.github.io/blog/2026/03/02/crewai-deepagents-discovery</id><content type="html" xml:base="https://kkvelan.github.io/blog/2026/03/02/crewai-deepagents-discovery.html"><![CDATA[<p>Agentic discovery needs more than a single model call: planning, task decomposition, tool use, and coordination with workers that do the actual probing. Frameworks like CrewAI and LangChain Deep Agents are built for that. This post explains both and uses a concrete system as the example.</p>

<p><img src="/blog/crewai-deepagents-discovery/image1.png" alt="CrewAI and Deep Agents for Agentic Discovery" /></p>

<h2 id="probescout-what-we-are-building-and-why">ProbeScout: What We Are Building and Why</h2>

<p>I am building an agentic vulnerability assessment and discovery system (ProbeScout) that ties together concurrent scanning, traffic shaping, and an AI layer for risk and prioritization. The pipeline uses nmap for port scanning and service discovery, traceroute and tcptraceroute for target intel and path analysis, and tools like hping3 for SYN probes, with results fed into an AI layer for reasoning about findings.</p>

<p>The system is designed to operate autonomously with minimal human intervention, scaling to thousands of IP addresses via continuous, batch, or scheduled execution. The goal is to reduce reliance on manual effort for probe orchestration, result analysis, and prioritization decisions.</p>

<p>To get there, the orchestration layer must plan campaigns, decompose work into batches or phases, keep context under control when scan output is large, and delegate analysis without overloading a single prompt. That is exactly the kind of workload agent frameworks target: multi-step tasks, tool use, and structured execution. Below we look at CrewAI and Deep Agents and where they fit in a setup like ProbeScout.</p>
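
<p>As a concrete illustration of the decomposition step, here is a minimal, framework-agnostic sketch in Python (the backend's language). The <code class="language-plaintext highlighter-rouge">Batch</code> shape and names are illustrative assumptions, not ProbeScout's actual types:</p>

```python
from dataclasses import dataclass
from typing import Iterator


@dataclass
class Batch:
    campaign_id: str
    index: int
    targets: list[str]


def decompose(campaign_id: str, targets: list[str], batch_size: int) -> Iterator[Batch]:
    """Split a campaign's target list into fixed-size batches that can be
    handed to scanning workers independently."""
    for i in range(0, len(targets), batch_size):
        yield Batch(campaign_id, i // batch_size, targets[i:i + batch_size])


# 10 targets with batch_size=4 -> three batches of 4, 4, and 2 targets
batches = list(decompose("camp-1", [f"10.0.0.{n}" for n in range(1, 11)], 4))
```

<p>A real planner would also track per-batch state as workers report back, but the batching itself stays this simple.</p>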

<h2 id="where-the-agent-layer-runs">Where the Agent Layer Runs</h2>

<p>ProbeScout runs as three tiers:</p>

<ul>
  <li>Frontend (Node.js): Create campaigns, upload targets, initiate scans, monitor runs. Talks to the backend for orchestration and live status.</li>
  <li>Backend (Python): Orchestration, scheduling, coordination. Assigns work to Rust scanning agents, receives status and progress updates, aggregates results, and drives reasoning or prioritization. This is where CrewAI or Deep Agents run.</li>
  <li>Scanning agents (Rust): Run on separate machines in the network. Perform probing (nmap, traceroute, tcptraceroute, etc.), coordinate with the backend, and report status, progress, and results.</li>
</ul>

<p>The backend uses tools to dispatch work to Rust agents (e.g. “run nmap on this target”, “run traceroute”, “store result in X”) and to push status and results to the frontend.</p>
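
<p>A minimal sketch of the job envelope such a tool might produce, in Python. The field names and the tool allow-list are illustrative assumptions, not ProbeScout's actual wire format:</p>

```python
ALLOWED_TOOLS = {"nmap", "traceroute", "tcptraceroute", "hping3"}


def make_job(tool: str, target: str, **params) -> dict:
    """Build the JSON-serializable job envelope the backend sends to a Rust
    scanning agent. Validating the tool name here keeps the agent layer from
    dispatching arbitrary commands."""
    if tool not in ALLOWED_TOOLS:
        raise ValueError(f"unknown tool: {tool}")
    return {"tool": tool, "target": target, "params": params}


job = make_job("nmap", "10.0.0.5", ports="1-1024")
```

<p>The backend would serialize this envelope over whatever channel it already uses for agent coordination; the point is that the model-facing tool only ever emits a constrained, structured request.</p>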

<h2 id="crewai">CrewAI</h2>

<p><a href="https://www.crewai.com/">CrewAI</a> lets you define agents with roles and goals and crews that work together on tasks. In a system like ProbeScout, you can assign different agents to different stages in the Python backend: one agent for target selection or campaign planning, one for dispatching work to Rust agents and collecting progress, one for result analysis and prioritization. Tasks can be chained and delegated, which fits a pipeline where the frontend creates a campaign, the backend decomposes it and hands batches to Rust agents, and results flow back for analysis and display. Useful when you want a clear separation of roles and a crew that collaborates on a shared goal.</p>

<h2 id="langchain-deep-agents">LangChain Deep Agents</h2>

<p><a href="https://docs.langchain.com/oss/python/deepagents/overview">Deep Agents</a> (from the LangChain ecosystem) are built for complex, multi-step tasks with built-in support for:</p>

<ul>
  <li>Planning and task decomposition – e.g. break a campaign into batches or phases and track progress as Rust agents report back.</li>
  <li>Context management – file system tools (<code class="language-plaintext highlighter-rouge">read_file</code>, <code class="language-plaintext highlighter-rouge">write_file</code>, etc.) so the backend can offload scan results and logs instead of blowing the context window.</li>
  <li>Subagent spawning – delegate a subtask (e.g. “analyze this subnet’s results” or “correlate these ports”) to a dedicated agent and keep the main orchestrator’s context focused.</li>
  <li>Pluggable backends – in-memory, local disk, or durable stores for state and context, so the backend can scale and persist across restarts.</li>
  <li>Long-term memory – persist facts or preferences across runs for a more consistent scanning and prioritization strategy.</li>
</ul>
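
<p>The context-management idea is easy to show without the framework: persist bulky output to disk and keep only a small summary in the model's context. A framework-agnostic Python sketch (function and field names are assumptions for illustration, not the Deep Agents API):</p>

```python
import json
import tempfile
from pathlib import Path


def offload_results(workdir: Path, batch_id: str, results: list[dict]) -> dict:
    """Write raw scan results to disk and return a compact summary; the
    orchestrator keeps the summary in context and reads the file back only
    when a subagent actually needs the detail."""
    path = workdir / f"{batch_id}.json"
    path.write_text(json.dumps(results))
    return {
        "batch": batch_id,
        "file": str(path),
        "hosts": len(results),
        "open_ports": sum(1 for r in results if r.get("open")),
    }


workdir = Path(tempfile.mkdtemp())
summary = offload_results(workdir, "batch-0", [
    {"target": "10.0.0.1", "port": 443, "open": True},
    {"target": "10.0.0.2", "port": 443, "open": False},
])
```

<p>Deep Agents' built-in file system tools and pluggable backends give you this pattern out of the box; the sketch just shows why it keeps the orchestrator's context small.</p>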

<p>The Deep Agents SDK is a standalone library on top of LangChain and uses LangGraph for execution, streaming, and human-in-the-loop. In ProbeScout, running in the Python backend, it would coordinate with Rust scanning agents via your existing APIs, aggregate status and results, and drive what the frontend shows and what work gets sent next.</p>

<h2 id="how-it-fits-today">How It Fits Today</h2>

<p>In the current setup, the Node.js frontend is where operators create campaigns, upload targets, and initiate scans. The Python backend runs the orchestration and reasoning; that is where you integrate CrewAI or Deep Agents. They call into your Rust agents (via the same coordination channel you already use for status, progress, and results), handle planning and decomposition, and use file system or backend storage to keep scan output and state manageable. Rust agents stay focused on probing; the backend stays focused on what to run and what it means. Either framework fits this split.</p>

<h2 id="summary">Summary</h2>

<p>CrewAI gives you role-based agents and crews in the Python backend for collaborative planning, dispatch, and analysis. Deep Agents give you planning, decomposition, context management, subagents, and pluggable backends in one stack, also in the backend. In a system like ProbeScout, both coordinate with Rust scanning agents on separate machines and with the Node.js frontend that creates campaigns, uploads targets, and initiates scans. The architecture stays: Node.js frontend, Python backend (with CrewAI or Deep Agents), Rust scanning agents; the agent layer is the brain in the middle.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Agentic discovery needs more than a single model call: planning, task decomposition, tool use, and coordination with workers that do the actual probing. Frameworks like CrewAI and LangChain Deep Agents are built for that. This post explains both and uses a concrete system as the example.]]></summary></entry><entry><title type="html">Rust for AI Infrastructure</title><link href="https://kkvelan.github.io/blog/2026/02/27/rust-ai-workloads-backend.html" rel="alternate" type="text/html" title="Rust for AI Infrastructure" /><published>2026-02-27T12:00:00+00:00</published><updated>2026-02-27T12:00:00+00:00</updated><id>https://kkvelan.github.io/blog/2026/02/27/rust-ai-workloads-backend</id><content type="html" xml:base="https://kkvelan.github.io/blog/2026/02/27/rust-ai-workloads-backend.html"><![CDATA[<p>Most conversations about AI focus on models and Python. But once you start building real systems around AI, concurrency, speed and performance become important.</p>

<p><img src="/blog/rust-ai-workloads-backend/image1.jpeg" alt="Rust for AI Infrastructure" /></p>

<h2 id="a-practical-scenario">A Practical Scenario</h2>

<p>Suppose you are building an agentic discovery system inside an enterprise network. The system continuously scans and enumerates thousands of internal systems, collects service details, tracks configuration changes, compares historical states, and feeds that data into an AI layer that reasons about risk or prioritization.</p>

<p>The challenge is not just inference.</p>

<p>You need to handle thousands of concurrent network operations, continuous scheduling, data parsing, state comparison, queue management, and long-running reliability. The system must run 24/7 without leaking memory, collapsing under load, or creating unpredictable latency spikes.</p>

<h2 id="where-rust-fits">Where Rust Fits</h2>

<p>Rust gives you high-performance networking, controlled concurrency through async runtimes like Tokio, and compile-time guarantees that eliminate many common memory and race-condition problems. When you are managing thousands of parallel tasks (scanning, parsing responses, diffing results, feeding pipelines), those guarantees become extremely valuable.</p>

<p>Another advantage is predictability. Systems that continuously process large streams of data cannot afford garbage-collection pauses or silent memory growth. For example, imagine processing vulnerability data across thousands of systems, correlating scan results, tracking changes, and feeding prioritization pipelines. Rust’s ownership model keeps resource usage explicit and stable even in long-running workloads like these.</p>

<p>In architectures like these, the AI model is just one component. Around it sits a large amount of infrastructure: discovery workers, schedulers, enrichment pipelines, storage layers, and APIs that expose results to analysts or other systems.</p>

<h2 id="example-concurrent-port-scanning-with-hping3">Example: Concurrent Port Scanning with hping3</h2>

<p>A typical backend component runs many scans concurrently but shapes traffic so the network and target are not overwhelmed. Below, we run hping3 for SYN port scanning with bounded concurrency and send results into a channel for downstream processing (e.g. enrichment or an AI risk layer):</p>

<div class="language-rust highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">use</span> <span class="nn">tokio</span><span class="p">::</span><span class="nn">process</span><span class="p">::</span><span class="n">Command</span><span class="p">;</span>
<span class="k">use</span> <span class="nn">tokio</span><span class="p">::</span><span class="nn">sync</span><span class="p">::{</span><span class="n">mpsc</span><span class="p">,</span> <span class="n">Semaphore</span><span class="p">};</span>
<span class="k">use</span> <span class="nn">std</span><span class="p">::</span><span class="nn">sync</span><span class="p">::</span><span class="nb">Arc</span><span class="p">;</span>

<span class="k">struct</span> <span class="n">ScanResult</span> <span class="p">{</span>
    <span class="n">target</span><span class="p">:</span> <span class="nb">String</span><span class="p">,</span>
    <span class="n">port</span><span class="p">:</span> <span class="nb">u16</span><span class="p">,</span>
    <span class="n">open</span><span class="p">:</span> <span class="nb">bool</span><span class="p">,</span>
<span class="p">}</span>

<span class="k">async</span> <span class="k">fn</span> <span class="nf">run_hping3_scan</span><span class="p">(</span>
    <span class="n">target</span><span class="p">:</span> <span class="o">&amp;</span><span class="nb">str</span><span class="p">,</span>
    <span class="n">port</span><span class="p">:</span> <span class="nb">u16</span><span class="p">,</span>
<span class="p">)</span> <span class="k">-&gt;</span> <span class="nn">std</span><span class="p">::</span><span class="nn">io</span><span class="p">::</span><span class="nb">Result</span><span class="o">&lt;</span><span class="n">ScanResult</span><span class="o">&gt;</span> <span class="p">{</span>
    <span class="k">let</span> <span class="n">out</span> <span class="o">=</span> <span class="nn">Command</span><span class="p">::</span><span class="nf">new</span><span class="p">(</span><span class="s">"hping3"</span><span class="p">)</span>
        <span class="nf">.arg</span><span class="p">(</span><span class="s">"-S"</span><span class="p">)</span>
        <span class="nf">.arg</span><span class="p">(</span><span class="s">"-p"</span><span class="p">)</span>
        <span class="nf">.arg</span><span class="p">(</span><span class="n">port</span><span class="nf">.to_string</span><span class="p">())</span>
        <span class="nf">.arg</span><span class="p">(</span><span class="s">"-c"</span><span class="p">)</span>
        <span class="nf">.arg</span><span class="p">(</span><span class="s">"1"</span><span class="p">)</span>
        <span class="nf">.arg</span><span class="p">(</span><span class="n">target</span><span class="p">)</span>
        <span class="nf">.output</span><span class="p">()</span>
        <span class="k">.await</span><span class="o">?</span><span class="p">;</span>
    <span class="k">let</span> <span class="n">open</span> <span class="o">=</span> <span class="n">out</span><span class="py">.status</span><span class="nf">.success</span><span class="p">();</span>
    <span class="nf">Ok</span><span class="p">(</span><span class="n">ScanResult</span> <span class="p">{</span>
        <span class="n">target</span><span class="p">:</span> <span class="n">target</span><span class="nf">.into</span><span class="p">(),</span>
        <span class="n">port</span><span class="p">,</span>
        <span class="n">open</span><span class="p">,</span>
    <span class="p">})</span>
<span class="p">}</span>

<span class="k">async</span> <span class="k">fn</span> <span class="nf">run_concurrent_scans</span><span class="p">(</span>
    <span class="n">tx</span><span class="p">:</span> <span class="nn">mpsc</span><span class="p">::</span><span class="n">Sender</span><span class="o">&lt;</span><span class="n">ScanResult</span><span class="o">&gt;</span><span class="p">,</span>
    <span class="n">targets</span><span class="p">:</span> <span class="nb">Vec</span><span class="o">&lt;</span><span class="nb">String</span><span class="o">&gt;</span><span class="p">,</span>
    <span class="n">ports</span><span class="p">:</span> <span class="nb">Vec</span><span class="o">&lt;</span><span class="nb">u16</span><span class="o">&gt;</span><span class="p">,</span>
    <span class="n">max_concurrent</span><span class="p">:</span> <span class="nb">usize</span><span class="p">,</span>
<span class="p">)</span> <span class="k">-&gt;</span> <span class="nb">Result</span><span class="o">&lt;</span><span class="p">(),</span> <span class="nb">Box</span><span class="o">&lt;</span><span class="k">dyn</span> <span class="nn">std</span><span class="p">::</span><span class="nn">error</span><span class="p">::</span><span class="n">Error</span> <span class="o">+</span> <span class="nb">Send</span> <span class="o">+</span> <span class="nb">Sync</span><span class="o">&gt;&gt;</span> <span class="p">{</span>
    <span class="k">let</span> <span class="n">sem</span> <span class="o">=</span> <span class="nn">Arc</span><span class="p">::</span><span class="nf">new</span><span class="p">(</span><span class="nn">Semaphore</span><span class="p">::</span><span class="nf">new</span><span class="p">(</span><span class="n">max_concurrent</span><span class="p">));</span>
    <span class="k">let</span> <span class="k">mut</span> <span class="n">handles</span> <span class="o">=</span> <span class="nn">Vec</span><span class="p">::</span><span class="nf">new</span><span class="p">();</span>
    <span class="k">for</span> <span class="n">target</span> <span class="k">in</span> <span class="o">&amp;</span><span class="n">targets</span> <span class="p">{</span>
        <span class="k">for</span> <span class="o">&amp;</span><span class="n">port</span> <span class="k">in</span> <span class="o">&amp;</span><span class="n">ports</span> <span class="p">{</span>
            <span class="k">let</span> <span class="n">tx</span> <span class="o">=</span> <span class="n">tx</span><span class="nf">.clone</span><span class="p">();</span>
            <span class="k">let</span> <span class="n">permit</span> <span class="o">=</span> <span class="n">sem</span><span class="nf">.clone</span><span class="p">()</span><span class="nf">.acquire_owned</span><span class="p">()</span><span class="k">.await</span><span class="o">?</span><span class="p">;</span>
            <span class="k">let</span> <span class="n">target</span> <span class="o">=</span> <span class="n">target</span><span class="nf">.clone</span><span class="p">();</span>
            <span class="n">handles</span><span class="nf">.push</span><span class="p">(</span><span class="nn">tokio</span><span class="p">::</span><span class="nf">spawn</span><span class="p">(</span><span class="k">async</span> <span class="k">move</span> <span class="p">{</span>
                <span class="k">let</span> <span class="n">_permit</span> <span class="o">=</span> <span class="n">permit</span><span class="p">;</span>
                <span class="k">let</span> <span class="n">result</span> <span class="o">=</span> <span class="nf">run_hping3_scan</span><span class="p">(</span><span class="o">&amp;</span><span class="n">target</span><span class="p">,</span> <span class="n">port</span><span class="p">)</span><span class="k">.await</span><span class="o">?</span><span class="p">;</span>
                <span class="k">let</span> <span class="n">_</span> <span class="o">=</span> <span class="n">tx</span><span class="nf">.send</span><span class="p">(</span><span class="n">result</span><span class="p">)</span><span class="k">.await</span><span class="p">;</span>
                <span class="nn">Ok</span><span class="p">::</span><span class="o">&lt;</span><span class="n">_</span><span class="p">,</span> <span class="nb">Box</span><span class="o">&lt;</span><span class="k">dyn</span> <span class="nn">std</span><span class="p">::</span><span class="nn">error</span><span class="p">::</span><span class="n">Error</span> <span class="o">+</span> <span class="nb">Send</span> <span class="o">+</span> <span class="nb">Sync</span><span class="o">&gt;&gt;</span><span class="p">(</span>
                    <span class="p">(),</span>
                <span class="p">)</span>
            <span class="p">}));</span>
        <span class="p">}</span>
    <span class="p">}</span>
    <span class="k">for</span> <span class="n">h</span> <span class="k">in</span> <span class="n">handles</span> <span class="p">{</span>
        <span class="k">let</span> <span class="n">_</span> <span class="o">=</span> <span class="n">h</span><span class="k">.await</span><span class="p">;</span>
    <span class="p">}</span>
    <span class="nf">Ok</span><span class="p">(())</span>
<span class="p">}</span>
</code></pre></div></div>

<p>Here, packet shaping is done by limiting concurrency with a <code class="language-plaintext highlighter-rouge">Semaphore</code> (e.g. <code class="language-plaintext highlighter-rouge">max_concurrent: 50</code>) so you do not flood the network or trigger rate limits. The channel <code class="language-plaintext highlighter-rouge">tx</code> feeds scan results to another task that can aggregate, store, or pass them to an AI pipeline.</p>

<h2 id="agentic-vulnerability-assessment-and-discovery">Agentic Vulnerability Assessment and Discovery</h2>

<p>I am building an agentic vulnerability assessment and discovery system that ties together concurrent scanning, traffic shaping, and an AI layer for risk and prioritization. The pipeline uses nmap for port scanning and service discovery, traceroute and tcptraceroute for target intel and path analysis, and tools like hping3 for SYN probes, with results fed into an AI layer for reasoning about findings.</p>

<p>The system is designed to operate autonomously with minimal human intervention, scaling to thousands of IP addresses via continuous, batch, or scheduled execution. The goal is to reduce reliance on manual effort for probe orchestration, result analysis, and prioritization decisions.</p>

<h2 id="architecture">Architecture</h2>

<p>The system is three tiers: Node.js frontend, Python backend, and Rust scanning agents.</p>

<ul>
  <li>Frontend (Node.js): Create campaigns, upload targets, initiate scans, and monitor runs. The UI talks to the backend for orchestration and live status.</li>
  <li>Backend (Python): Orchestration, scheduling, and coordination. It hands work to scanning agents, receives status and progress updates, aggregates results, and drives reasoning or prioritization. Single control plane for the whole system.</li>
  <li>Scanning agents (Rust): Run on separate machines inside the network. Each agent performs the actual probing (nmap, traceroute, tcptraceroute, etc.), coordinates with the backend over the network, and reports status, progress, and results. Rust keeps scanning fast and predictable; the backend and frontend handle workflow and UX.</li>
</ul>

<p>Agents register or poll the backend for work, stream progress and results back, and scale out by adding more machines. The frontend is where operators create campaigns, upload targets, and initiate scans.</p>

<h2 id="useful-cargo-crates-for-ai-backends">Useful Cargo Crates for AI Backends</h2>

<p>These crates are commonly used when building Rust backends that sit alongside AI inference or orchestration:</p>

<table>
  <thead>
    <tr>
      <th>Crate</th>
      <th>Purpose</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>tokio</td>
      <td>Async runtime: networking, timers, concurrency</td>
    </tr>
    <tr>
      <td>axum / actix-web</td>
      <td>HTTP APIs to expose results or call Python services</td>
    </tr>
    <tr>
      <td>serde / serde_json</td>
      <td>Serialization for configs, API payloads, pipeline data</td>
    </tr>
    <tr>
      <td>reqwest</td>
      <td>Async HTTP client (call model APIs, internal services)</td>
    </tr>
    <tr>
      <td>tonic</td>
      <td>gRPC for high-throughput service-to-service calls</td>
    </tr>
    <tr>
      <td>redis / deadqueue</td>
      <td>Queues and caches for job distribution and rate limiting</td>
    </tr>
    <tr>
      <td>tracing / tracing-subscriber</td>
      <td>Structured logging and observability</td>
    </tr>
    <tr>
      <td>pyo3</td>
      <td>Embed or call Python from Rust when you need a model API</td>
    </tr>
  </tbody>
</table>

<p>Example <code class="language-plaintext highlighter-rouge">Cargo.toml</code> for a minimal AI-facing backend:</p>

<div class="language-toml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nn">[package]</span>
<span class="py">name</span> <span class="p">=</span> <span class="s">"ai-backend"</span>
<span class="py">version</span> <span class="p">=</span> <span class="s">"0.1.0"</span>
<span class="py">edition</span> <span class="p">=</span> <span class="s">"2021"</span>

<span class="nn">[dependencies]</span>
<span class="nn">tokio</span> <span class="o">=</span> <span class="p">{</span> <span class="py">version</span> <span class="p">=</span> <span class="s">"1"</span><span class="p">,</span> <span class="py">features</span> <span class="p">=</span> <span class="nn">["full"]</span> <span class="p">}</span>
<span class="py">axum</span> <span class="p">=</span> <span class="s">"0.7"</span>
<span class="nn">serde</span> <span class="o">=</span> <span class="p">{</span> <span class="py">version</span> <span class="p">=</span> <span class="s">"1"</span><span class="p">,</span> <span class="py">features</span> <span class="p">=</span> <span class="nn">["derive"]</span> <span class="p">}</span>
<span class="py">serde_json</span> <span class="p">=</span> <span class="s">"1"</span>
<span class="nn">reqwest</span> <span class="o">=</span> <span class="p">{</span> <span class="py">version</span> <span class="p">=</span> <span class="s">"0.11"</span><span class="p">,</span> <span class="py">default-features</span> <span class="p">=</span> <span class="kc">false</span><span class="p">,</span> <span class="py">features</span> <span class="p">=</span> <span class="nn">["json"]</span> <span class="p">}</span>
<span class="py">tracing</span> <span class="p">=</span> <span class="s">"0.1"</span>
<span class="nn">tracing-subscriber</span> <span class="o">=</span> <span class="p">{</span> <span class="py">version</span> <span class="p">=</span> <span class="s">"0.3"</span><span class="p">,</span> <span class="py">features</span> <span class="p">=</span> <span class="nn">["env-filter"]</span> <span class="p">}</span>
</code></pre></div></div>

<h2 id="ai-backend-services-that-use-rust">AI Backend Services That Use Rust</h2>

<p>Several production AI backends and infra projects rely on Rust for speed and reliability:</p>

<ul>
  <li>Candle (Hugging Face): ML inference engine in Rust. Used to run models (including LLMs) with minimal dependencies and good performance on CPU and GPU.</li>
  <li>llm.rs / llama.cpp bindings: Rust crates and tooling around fast inference runtimes, often used for local or edge deployment.</li>
  <li>Inference servers and gateways: Many custom inference gateways that sit in front of Python model servers are written in Rust for request routing, batching, rate limiting, and auth.</li>
  <li>Vector DBs and embedding pipelines: Services that index embeddings, run similarity search, or build RAG pipelines often use Rust for the hot path (e.g. qdrant, milvus-related components, or custom indexers).</li>
  <li>Orchestration and agents: Backends that schedule tasks, call multiple models, or run agent loops use Rust for the control plane and Python (or FFI) for the model calls.</li>
  <li>Observability and telemetry: Pipelines that ingest traces, metrics, or logs from AI workloads sometimes use Rust for high-throughput ingestion and aggregation.</li>
</ul>

<p>These are examples of the split you see in practice: Python for model code and experimentation, Rust for the services that serve, scale, and orchestrate it.</p>

<h2 id="python-and-rust-together">Python and Rust Together</h2>

<p>The Python ecosystem remains central to AI development. Most model libraries, research tooling, and experimentation frameworks still live there, and it is still the natural home for model development and rapid iteration.</p>

<p>In practice, many architectures benefit from both: Python for model development and Rust for the high-performance backend infrastructure around it.</p>

<p>When you need speed, safety, and concurrency in the always-on backend that powers AI systems, Rust becomes a very compelling choice.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Most conversations about AI focus on models and Python. But once you start building real systems around AI, concurrency, speed and performance become important.]]></summary></entry><entry><title type="html">Where AI Fits in Red Teaming Workflows</title><link href="https://kkvelan.github.io/blog/2026/02/24/ai-red-teaming-workflows.html" rel="alternate" type="text/html" title="Where AI Fits in Red Teaming Workflows" /><published>2026-02-24T12:00:00+00:00</published><updated>2026-02-24T12:00:00+00:00</updated><id>https://kkvelan.github.io/blog/2026/02/24/ai-red-teaming-workflows</id><content type="html" xml:base="https://kkvelan.github.io/blog/2026/02/24/ai-red-teaming-workflows.html"><![CDATA[<p>A recent discussion with a few red team leads kept coming back to one question: where does AI actually fit in red teaming workflows? Not as a replacement for tradecraft, but as a tool that speeds up the right parts of the job. The answer is not “everywhere,” and it is not “nowhere.” It is in specific, repeatable tasks where you still own the reasoning and the outcome. Below are the areas we found most practical, plus how to use AI without giving up control or client trust.</p>

<p><img src="/blog/ai-red-teaming-workflows/input.jpeg" alt="Where AI Fits in Red Teaming Workflows" /></p>

<h2 id="macro-payloads-and-initial-attack-chains">Macro Payloads and Initial Attack Chains</h2>

<p>Crafting macro payloads for documents as part of an initial attack chain is one practical area. Instead of manually building VBA structures, adjusting execution flow, or refining trigger logic, AI can help generate clean macro templates quickly. You still control the logic, execution path, and safety boundaries. It accelerates development, but does not replace understanding of how Office, process spawning, and detection controls work.</p>

<h2 id="thick-clients-and-binary-triage">Thick Clients and Binary Triage</h2>

<p>Red teaming thick client applications is another strong area. Binary triage is as well. When you load a binary into Ghidra, the first level of understanding takes time. If you extract specific functions and ask a model to summarize control flow, trace user-controlled inputs, or point out unsafe memory handling, it can speed up the early analysis. You still verify everything yourself. You still confirm exploitability. But you understand the code faster.</p>

<h2 id="lateral-movement-analysis">Lateral Movement Analysis</h2>

<p>Lateral movement analysis in a Windows domain becomes complex when the environment is large. Local admin memberships, delegation settings, active sessions, and trust relationships are hard to connect mentally. You can feed structured data into a model and take MITRE ATT&amp;CK as a reference to construct possible movement paths across techniques and trust boundaries. It can highlight privilege chains and cross-tier access routes. You then validate what is actually possible and in scope.</p>
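<p>The underlying idea, whether a model does it or you script it, is graph search over relationship edges. A minimal sketch, with entirely synthetic edge data standing in for real collection output (sessions, local admin rights, delegation):</p>

```python
from collections import deque

def movement_paths(edges, start, target):
    """BFS over (source, relation, dest) edges; return the first path found.

    Edges are synthetic here; in practice they come from domain
    collection data such as sessions and local admin memberships.
    """
    graph = {}
    for src, rel, dst in edges:
        graph.setdefault(src, []).append((rel, dst))
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for rel, dst in graph.get(node, []):
            if dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [f"{rel}->", dst]))
    return None  # no path with the edges collected so far

edges = [
    ("user:alice", "admin_on", "host:ws1"),
    ("host:ws1", "session_of", "user:svc_backup"),
    ("user:svc_backup", "admin_on", "host:dc1"),
]
path = movement_paths(edges, "user:alice", "host:dc1")
```

<p>A found path is only a candidate: it tells you which privilege chain to validate, not that the hop actually works in that environment or that it is in scope.</p>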

<h2 id="c2-and-lab-setup">C2 and Lab Setup</h2>

<p>Making simple C2 implants in lab environments is another example. Instead of manually wiring Sliver profiles, writing custom stagers, tweaking compile flags, and setting up a quick control panel, AI can help generate a basic structure faster. It reduces setup effort. GhostLink is a C&amp;C platform for red team engagements in this space: AI can help with boilerplate, config generation, and wiring up observers or control panels so you spend more time on operator workflow and OPSEC and less on repetitive setup. But OPSEC awareness, detection impact, and responsible usage remain with the operator.</p>

<p><img src="/blog/ai-red-teaming-workflows/i1.jpeg" alt="Editing remote process file" /></p>

<p><em>Editing remote process file on the host where the C2 agent runs.</em></p>

<p><img src="/blog/ai-red-teaming-workflows/i2.jpeg" alt="Control panel actions" /></p>

<p><em>Launching actions from the control panel: download file, upload file, take screenshot on the machine where the C2 agent runs.</em></p>

<p><img src="/blog/ai-red-teaming-workflows/i3.jpeg" alt="Building agents" /></p>

<p><em>Compiling and building agents or implants for different architectures and OS platforms.</em></p>

<p><img src="/blog/ai-red-teaming-workflows/i4.jpeg" alt="Observability" /></p>

<p><em>Observability and debugging for the C2 platform.</em></p>

<h2 id="caution-and-guardrails">Caution and Guardrails</h2>

<p>One caution. This is not about blindly copy-pasting into ChatGPT or casually using AI assistants. You must understand what you are doing. In red teaming, data sensitivity and client trust matter. I advise a few clear conditions. Prefer local models wherever possible. Use synthetic or sanitized data for experiments. Put proper guardrails in place. Build a small observability layer to log prompts and outputs. And if the customer already has approved AI models in their own cloud environment, it is better to use their endpoints within their boundary instead of moving data outside.</p>
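<p>That observability layer does not need to be elaborate. One possible design, sketched below with an illustrative schema of my own invention: hash the prompt and output rather than storing them raw, so the audit trail itself never holds sensitive client data, and chain each record's hash to the previous one so gaps are detectable.</p>

```python
import hashlib
import json
import time

class PromptAuditLog:
    """Tamper-evident log of AI usage on an engagement (illustrative schema)."""

    def __init__(self):
        self.records = []
        self._prev_hash = "0" * 64  # genesis value for the hash chain

    def record(self, operator: str, model: str, prompt: str, output: str) -> dict:
        entry = {
            "ts": time.time(),
            "operator": operator,
            "model": model,
            # Store digests, not raw text, to keep client data out of the log.
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
            "prev": self._prev_hash,  # chains this record to the one before it
        }
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(entry)
        return entry

log = PromptAuditLog()
log.record("op1", "local-llm", "summarize this function", "it parses packets")
```

<p>If the engagement requires the raw prompts, store them separately under the client's own controls and keep only the digests in the shared trail.</p>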

<h2 id="the-real-advantage">The Real Advantage</h2>

<p>AI does not replace red team skill. It reduces time spent on repetitive work and early analysis. The real advantage still belongs to people who understand authentication flows, privilege boundaries, operating system behavior, and trust relationships.</p>

<p>If you know what you are doing, AI makes you faster. If you do not, it only makes you confident without depth.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[A recent discussion with a few red team leads kept coming back to one question: where does AI actually fit in red teaming workflows? Not as a replacement for tradecraft, but as a tool that speeds up the right parts of the job. The answer is not “everywhere,” and it is not “nowhere.” It is in specific, repeatable tasks where you still own the reasoning and the outcome. Below are the areas we found most practical, plus how to use AI without giving up control or client trust.]]></summary></entry></feed>