<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Algorithms |</title><link>https://ad25aderram.github.io/my-portfolio/tags/algorithms/</link><atom:link href="https://ad25aderram.github.io/my-portfolio/tags/algorithms/index.xml" rel="self" type="application/rss+xml"/><description>Algorithms</description><generator>HugoBlox Kit (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 01 Dec 2025 00:00:00 +0000</lastBuildDate><image><url>https://ad25aderram.github.io/my-portfolio/media/icon_hu_b93c5070c370bd46.png</url><title>Algorithms</title><link>https://ad25aderram.github.io/my-portfolio/tags/algorithms/</link></image><item><title>Pipeline Optimisation with Operations Research</title><link>https://ad25aderram.github.io/my-portfolio/projects/cicd-optimisation/</link><pubDate>Mon, 01 Dec 2025 00:00:00 +0000</pubDate><guid>https://ad25aderram.github.io/my-portfolio/projects/cicd-optimisation/</guid><description>&lt;p&gt;An operations research project that treats a real-world DevOps pipeline as a scheduling problem — applying classical OR methods to reduce deployment time for a critical banking application.&lt;/p&gt;
&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;CI/CD pipelines in high-stakes environments like banking have strict reliability and performance requirements. The challenge was to model the pipeline mathematically and find the minimum achievable deployment duration while respecting task dependencies and exploiting parallelism wherever the dependency structure allows.&lt;/p&gt;
&lt;h2 id="approach"&gt;Approach&lt;/h2&gt;
&lt;p&gt;The pipeline was modelled as a &lt;strong&gt;directed acyclic graph (DAG)&lt;/strong&gt; where each node is a pipeline stage and edges represent precedence constraints.&lt;/p&gt;
&lt;p&gt;Methods applied:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Kahn&amp;rsquo;s topological sort&lt;/strong&gt; — to establish a valid execution order&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;PERT/CPM&lt;/strong&gt; — to compute the earliest and latest start times for each task&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Critical path analysis&lt;/strong&gt; — to identify zero-slack tasks that directly determine total duration&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Slack analysis&lt;/strong&gt; — to find where parallelisation opportunities exist&lt;/li&gt;
&lt;/ul&gt;
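&lt;p&gt;The ordering step can be sketched as a minimal Kahn&amp;rsquo;s algorithm. The stage names and edges here are a plausible reading of the diagram below, not the project&amp;rsquo;s actual pipeline definition:&lt;/p&gt;

```python
from collections import deque

def kahn_topological_sort(graph):
    """Kahn's algorithm: repeatedly emit nodes with no remaining predecessors."""
    indegree = {node: 0 for node in graph}
    for node in graph:
        for succ in graph[node]:
            indegree[succ] += 1
    queue = deque(node for node in graph if indegree[node] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for succ in graph[node]:
            indegree[succ] -= 1
            if indegree[succ] == 0:
                queue.append(succ)
    if len(order) != len(graph):
        raise ValueError("graph contains a cycle")
    return order

# Edges are precedence constraints (illustrative reading of the stage diagram).
pipeline = {
    "Checkout": ["Build"],
    "Build": ["Unit Tests", "Docker Build"],
    "Unit Tests": ["Deploy"],
    "Docker Build": ["Integration Tests"],
    "Integration Tests": ["Security Scan"],
    "Security Scan": ["Deploy"],
    "Deploy": [],
}
order = kahn_topological_sort(pipeline)
```

&lt;p&gt;Any valid order respects every edge; stages that are mutually unordered (here, &lt;code&gt;Unit Tests&lt;/code&gt; and &lt;code&gt;Docker Build&lt;/code&gt;) are exactly the parallelisation opportunities.&lt;/p&gt;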
&lt;h2 id="results"&gt;Results&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Critical path: 64 minutes&lt;/strong&gt; — the theoretical minimum for a full deployment cycle&lt;/li&gt;
&lt;li&gt;Identified which pipeline stages offered the most optimisation potential&lt;/li&gt;
&lt;li&gt;Produced concrete managerial recommendations for resource prioritisation and risk mitigation&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="architecture-diagram"&gt;Architecture diagram&lt;/h2&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;[Checkout] → [Build] → [Unit Tests] ─────────────────────────┐
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ↘ ▼
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [Integration Tests] → [Security Scan] → [Deploy]
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ↗
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; [Docker Build] ──────
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;&lt;em&gt;Stages on the critical path have zero slack — any delay directly increases total deployment time.&lt;/em&gt;&lt;/p&gt;
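&lt;p&gt;The zero-slack property can be made concrete with a small PERT/CPM pass over the same stage graph. The durations below are illustrative placeholders, not the measured times behind the 64-minute figure:&lt;/p&gt;

```python
def cpm(graph, duration):
    """PERT/CPM: earliest start (ES), latest start (LS), slack = LS - ES."""
    # Kahn-style topological order.
    indeg = {n: 0 for n in graph}
    for n in graph:
        for s in graph[n]:
            indeg[s] += 1
    order, ready = [], [n for n in graph if indeg[n] == 0]
    while ready:
        n = ready.pop()
        order.append(n)
        for s in graph[n]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    # Forward pass: a task starts once its slowest predecessor finishes.
    es = {n: 0 for n in graph}
    for n in order:
        for s in graph[n]:
            es[s] = max(es[s], es[n] + duration[n])
    makespan = max(es[n] + duration[n] for n in graph)
    # Backward pass: the latest a task may start without delaying the makespan.
    ls = {n: makespan - duration[n] for n in graph}
    for n in reversed(order):
        for s in graph[n]:
            ls[n] = min(ls[n], ls[s] - duration[n])
    slack = {n: ls[n] - es[n] for n in graph}
    return makespan, slack, [n for n in order if slack[n] == 0]

stages = {
    "Checkout": ["Build"],
    "Build": ["Unit Tests", "Docker Build"],
    "Unit Tests": ["Deploy"],
    "Docker Build": ["Integration Tests"],
    "Integration Tests": ["Security Scan"],
    "Security Scan": ["Deploy"],
    "Deploy": [],
}
# Placeholder durations in minutes -- NOT the project's measured figures.
minutes = {"Checkout": 2, "Build": 10, "Unit Tests": 8, "Docker Build": 6,
           "Integration Tests": 20, "Security Scan": 12, "Deploy": 6}
makespan, slack, critical = cpm(stages, minutes)
```

&lt;p&gt;With these placeholder numbers the makespan is 56 minutes and &lt;code&gt;Unit Tests&lt;/code&gt; carries 30 minutes of slack, so it can run in parallel with the Docker branch without affecting the total.&lt;/p&gt;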
&lt;h2 id="tech-stack"&gt;Tech stack&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Language&lt;/strong&gt;: Python&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Methods&lt;/strong&gt;: PERT/CPM, topological sort (Kahn&amp;rsquo;s algorithm), critical path analysis, graph theory&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="key-takeaways"&gt;Key takeaways&lt;/h2&gt;
&lt;p&gt;Operations research techniques that can look purely theoretical deliver actionable insights in modern software engineering, especially where reliability and time-to-market are critical. This project changed how I think about pipeline design.&lt;/p&gt;
&lt;h2 id="sorting--search-complexity"&gt;Sorting &amp;amp; search complexity&lt;/h2&gt;
&lt;p&gt;Six sorting algorithms, plus binary search, were benchmarked across four data distributions — random, ascending, descending, and identical values:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Algorithm&lt;/th&gt;
&lt;th&gt;Complexity&lt;/th&gt;
&lt;th&gt;Best case&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Quick sort (dual-pivot)&lt;/td&gt;
&lt;td&gt;O(n log n) avg&lt;/td&gt;
&lt;td&gt;Random data&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Merge sort&lt;/td&gt;
&lt;td&gt;O(n log n)&lt;/td&gt;
&lt;td&gt;Consistent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Heap sort&lt;/td&gt;
&lt;td&gt;O(n log n)&lt;/td&gt;
&lt;td&gt;Consistent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Insertion sort&lt;/td&gt;
&lt;td&gt;O(n²)&lt;/td&gt;
&lt;td&gt;Nearly-sorted data&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Selection sort&lt;/td&gt;
&lt;td&gt;O(n²)&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bubble sort&lt;/td&gt;
&lt;td&gt;O(n²)&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Binary search&lt;/td&gt;
&lt;td&gt;O(log n)&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Measured runtimes T(n) were normalised as T(n)/n, T(n)/(n log n), and T(n)/n² to empirically verify each complexity class. Key finding: insertion sort consistently outperforms the O(n log n) algorithms on nearly-sorted data — a result that only becomes intuitive when you see it in a plot.&lt;/p&gt;
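&lt;p&gt;The same regime split can be sketched deterministically by counting element moves instead of wall-clock time; an insertion sort instrumented this way (a stand-in for the benchmarked implementations) shows the near-linear cost on nearly-sorted input directly:&lt;/p&gt;

```python
def insertion_sort_with_moves(values):
    """Return (sorted copy, number of element moves) -- a deterministic cost proxy."""
    a = list(values)
    moves = 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]  # shift larger elements one slot right
            j -= 1
            moves += 1
        a[j + 1] = key
    return a, moves

n = 1000
nearly_sorted = list(range(n))
nearly_sorted[100], nearly_sorted[900] = nearly_sorted[900], nearly_sorted[100]
reversed_data = list(range(n))[::-1]

sorted_a, cheap = insertion_sort_with_moves(nearly_sorted)
sorted_b, costly = insertion_sort_with_moves(reversed_data)
# Normalising separates the regimes: cheap / n stays small (linear behaviour),
# while costly / n**2 approaches 1/2 (the quadratic worst case).
```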
&lt;h2 id="hash-tables"&gt;Hash tables&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Chaining with linked lists for collision handling&lt;/li&gt;
&lt;li&gt;Open addressing: linear probing, quadratic probing, double hashing&lt;/li&gt;
&lt;li&gt;Benchmarked on datasets up to 100,000 elements — collision rates and lookup times compared across all strategies&lt;/li&gt;
&lt;/ul&gt;
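&lt;p&gt;A minimal chaining table in this spirit, assuming Python&amp;rsquo;s built-in &lt;code&gt;hash&lt;/code&gt; and a load-factor-triggered resize (both choices made for this sketch, not taken from the project):&lt;/p&gt;

```python
class ChainingHashTable:
    """Separate chaining: each bucket holds a list of (key, value) pairs."""

    def __init__(self, capacity=8):
        self.capacity = capacity
        self.size = 0
        self.buckets = [[] for _ in range(capacity)]

    def _bucket(self, key):
        return self.buckets[hash(key) % self.capacity]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite existing key
                return
        bucket.append((key, value))
        self.size += 1
        if self.size > 2 * self.capacity:  # resize once average chain length > 2
            self._resize()

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default

    def _resize(self):
        pairs = [p for bucket in self.buckets for p in bucket]
        self.capacity *= 2
        self.buckets = [[] for _ in range(self.capacity)]
        self.size = 0
        for k, v in pairs:
            self.put(k, v)

table = ChainingHashTable()
for i in range(1000):
    table.put(f"key{i}", i)
```

&lt;p&gt;Open-addressing variants replace the per-bucket lists with probe sequences over a single array; the benchmark then compares chain lengths against probe counts.&lt;/p&gt;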
&lt;h2 id="probabilistic-filters"&gt;Probabilistic filters&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Bloom filter&lt;/strong&gt; with k = 2, 3, 4 hash functions — false positive rate vs. memory trade-off&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Count-Min Sketch&lt;/strong&gt; for frequency estimation — precision vs. hash function count&lt;/li&gt;
&lt;/ul&gt;
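&lt;p&gt;A Bloom filter in this spirit can be sketched with double hashing, deriving all k positions from one SHA-256 digest; this hashing scheme is an assumption of the sketch, not necessarily what the project used:&lt;/p&gt;

```python
import hashlib

class BloomFilter:
    """m-bit array; each key sets k positions derived from two base hashes."""

    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = bytearray((m + 7) // 8)

    def _positions(self, key):
        digest = hashlib.sha256(key.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1  # odd stride
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, key):
        # May return True for keys never added (false positive), never False
        # for keys that were added (no false negatives).
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(key))

bf = BloomFilter(m=1024, k=3)
for word in ["alpha", "beta", "gamma"]:
    bf.add(word)
```

&lt;p&gt;Raising k sets more bits per key: false positives drop at first, then rise again as the array saturates — the trade-off the k = 2, 3, 4 experiments measure.&lt;/p&gt;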
&lt;h2 id="string-pattern-matching"&gt;String pattern matching&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Naïve search vs. &lt;strong&gt;deterministic finite automaton (DFA)&lt;/strong&gt; — transition function computation and pattern recognition on randomly generated text&lt;/li&gt;
&lt;li&gt;Performance comparison shows DFA&amp;rsquo;s advantage grows with text length&lt;/li&gt;
&lt;/ul&gt;
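&lt;p&gt;The DFA matcher can be sketched with the standard KMP-style construction of the transition table, tracking a restart state while filling it column by column; pattern and alphabet below are illustrative:&lt;/p&gt;

```python
def build_dfa(pat, alphabet):
    """KMP-style DFA: dfa[c][j] is the next state from state j on character c."""
    m = len(pat)
    dfa = {c: [0] * m for c in alphabet}
    dfa[pat[0]][0] = 1
    x = 0  # restart state: where the DFA would be on the same text minus its first char
    for j in range(1, m):
        for c in alphabet:
            dfa[c][j] = dfa[c][x]   # mismatch: behave as the restart state would
        dfa[pat[j]][j] = j + 1      # match: advance
        x = dfa[pat[j]][x]
    return dfa

def dfa_search(text, pat, alphabet):
    """Scan the text once; state j counts matched pattern characters."""
    dfa = build_dfa(pat, alphabet)
    j = 0
    for i, c in enumerate(text):
        j = dfa[c][j] if c in dfa else 0
        if j == len(pat):
            return i - len(pat) + 1  # index of the first full match
    return -1
```

&lt;p&gt;Unlike the naïve scan, the DFA never re-reads a text character, which is why its advantage grows with text length.&lt;/p&gt;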
&lt;h2 id="dynamic-programming"&gt;Dynamic programming&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Fibonacci: naïve recursion vs. memoisation — exponential-to-linear improvement demonstrated empirically&lt;/li&gt;
&lt;li&gt;Coin change: minimum coin count with full optimal solution reconstruction&lt;/li&gt;
&lt;/ul&gt;
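&lt;p&gt;Both exercises can be sketched briefly: a memoised Fibonacci, and a bottom-up coin-change table with parent pointers for reconstructing the optimal solution (coin values here are illustrative):&lt;/p&gt;

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Memoised Fibonacci: each value computed once, so O(n) instead of exponential."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def coin_change(coins, amount):
    """Bottom-up minimum-coin DP; parent[a] records the coin used at amount a."""
    INF = float("inf")
    best = [0] + [INF] * amount
    parent = [None] * (amount + 1)
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
                parent[a] = c
    if best[amount] == INF:
        return None  # amount unreachable with these coins
    solution, a = [], amount
    while a > 0:  # walk the parent pointers to reconstruct the coin multiset
        solution.append(parent[a])
        a -= parent[a]
    return solution
```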
&lt;h2 id="tech-stack"&gt;Tech stack&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Language&lt;/strong&gt;: Python&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Libraries&lt;/strong&gt;: Matplotlib, NumPy&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Focus&lt;/strong&gt;: Algorithm design, complexity analysis, experimental benchmarking&lt;/li&gt;
&lt;/ul&gt;</description></item></channel></rss>