AI

GPT Transformer models

Transformer models revolutionised AI by replacing recurrence with attention, allowing machines to understand context, relationships, and meaning across long sequences of data. This post explores how GPT models harness the Transformer architecture to generate coherent, intelligent text and why this innovation reshaped the entire AI landscape.
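To make the idea concrete, here is a minimal NumPy sketch of scaled dot-product attention, the mechanism the post refers to when it says attention replaces recurrence; the array names, shapes, and toy data are illustrative assumptions, not code from the post.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal self-attention: every position attends to every other position,
    so long-range relationships are captured without recurrence."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # pairwise similarity between positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over the sequence
    return weights @ V                                   # context-weighted mix of values

# toy example: a sequence of 4 tokens with 8-dimensional embeddings (hypothetical data)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)              # self-attention over the sequence
print(out.shape)                                         # (4, 8)
```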

  • November 5, 2025

Unlocking RAG Success with Semantic Chunking

Semantic chunking is a smarter, meaning-based approach to breaking text into pieces for Retrieval-Augmented Generation (RAG) systems. Instead of slicing by token count, it groups sentences that belong together, keeping the logic and flow intact. This helps retrieval systems pull coherent context rather than fragmented thoughts—a crucial advantage when dealing with complex documents like contracts or financial statements. While current research shows mixed evidence on its effectiveness, the theoretical promise is clear: semantic chunking preserves context, coherence, and clarity, making RAG systems feel less mechanical and closer to genuine understanding.
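As a rough illustration of the idea, the sketch below groups consecutive sentences by embedding similarity instead of token count; the sentence-transformers model name and the 0.6 similarity threshold are assumptions chosen for the example, not details from the post.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed to be installed

def semantic_chunks(sentences, threshold=0.6, model_name="all-MiniLM-L6-v2"):
    """Group consecutive sentences into a chunk while each new sentence stays
    semantically close to the running meaning of the current chunk."""
    if not sentences:
        return []
    model = SentenceTransformer(model_name)
    emb = model.encode(sentences, normalize_embeddings=True)  # unit-length vectors
    chunks, current = [], [0]
    for i in range(1, len(sentences)):
        centroid = emb[current].mean(axis=0)
        centroid /= np.linalg.norm(centroid)
        if float(emb[i] @ centroid) >= threshold:    # cosine similarity to the chunk so far
            current.append(i)                        # same topic: extend the chunk
        else:
            chunks.append(" ".join(sentences[j] for j in current))
            current = [i]                            # topic shift: start a new chunk
    chunks.append(" ".join(sentences[j] for j in current))
    return chunks
```

A fixed-size splitter would cut wherever the token budget runs out; this version only cuts when the topic drifts, which is the property the post argues matters for retrieval quality.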

  • November 1, 2025

Transcribly

GitHub Repository 2023 – Present | Full-stack development | Technologies: Python, Flask, React, OpenAI API, FFmpeg, pydub, MoviePy, ...

  • December 16, 2023

MOJO: SUPER PYTHON

Introducing Mojo: The Next Evolution in Programming Speed. In the dynamic realm of programming languages, a new contender...

  • December 10, 2023