<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[anshumancdx]]></title><description><![CDATA[anshumancdx]]></description><link>https://newsletter.anshumancdx.xyz</link><generator>RSS for Node</generator><lastBuildDate>Tue, 14 Apr 2026 02:01:43 GMT</lastBuildDate><atom:link href="https://newsletter.anshumancdx.xyz/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[I Built My First npm Package , and It Solves Something with Git]]></title><description><![CDATA[If you write code every day, you probably write Git commit messages every day too.And if you're honest… you probably don’t enjoy it.
I definitely didn’t.
So I built a small tool that fixes this tiny b]]></description><link>https://newsletter.anshumancdx.xyz/slothcommit-ai-git-commit-tool</link><guid isPermaLink="true">https://newsletter.anshumancdx.xyz/slothcommit-ai-git-commit-tool</guid><category><![CDATA[GitHub]]></category><category><![CDATA[commit messages]]></category><category><![CDATA[commit]]></category><category><![CDATA[slothcommit]]></category><category><![CDATA[Gemini integration]]></category><dc:creator><![CDATA[Anshuman Praharaj]]></dc:creator><pubDate>Thu, 12 Mar 2026 10:06:38 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/63ef099546e2f1644c4f28c4/452ead40-9a27-4af6-971e-fdd45de05d34.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you write code every day, you probably write <strong>Git commit messages</strong> every day too. And if you're honest… you probably <strong>don’t enjoy it</strong>.</p>
<p>I definitely didn’t.</p>
<p>So I built a small tool that fixes this tiny but annoying problem.<br />It’s called <strong>slothcommit</strong>, an AI-powered CLI that generates commit messages for you.</p>
<p>So I decided to write this blog about <strong>my first npm package</strong>: why I built it, and what I learned along the way.</p>
<h2>The Problem: Writing Commit Messages is Annoying</h2>
<p>My typical workflow looked something like this:</p>
<ol>
<li><p>Write some code</p>
</li>
<li><p>Stage the files</p>
</li>
</ol>
<pre><code class="language-plaintext">git add .
</code></pre>
<ol start="3">
<li><p>Then get stuck writing a commit message.</p></li>
</ol>
<p>Sometimes I would write something lazy like:</p>
<pre><code class="language-plaintext">fix stuff
</code></pre>
<p>Or something even worse:</p>
<pre><code class="language-plaintext">update
</code></pre>
<p>When I <em>did</em> want a good commit message, the workflow was painful:</p>
<ol>
<li><p>Copy my code changes</p>
</li>
<li><p>Paste them into ChatGPT</p>
</li>
<li><p>Ask it to generate a commit message</p>
</li>
<li><p>Copy the result back</p>
</li>
<li><p>Finally run <code>git commit</code></p>
</li>
</ol>
<p>It worked, but it was <strong>slow and repetitive</strong>, and honestly, I'm lazy, so I wanted an even lazier way to handle this whole commit message thing.</p>
<p>So I thought:</p>
<blockquote>
<p>Why can't a CLI tool just do this automatically?</p>
</blockquote>
<p>That idea became <strong>slothcommit</strong>.</p>
<h3>The Idea Behind Slothcommit</h3>
<p>The idea was simple.</p>
<p>Instead of manually writing commit messages, a CLI tool should:</p>
<ol>
<li><p>Look at the <strong>git diff</strong></p>
</li>
<li><p>Send it to an <strong>AI model</strong></p>
</li>
<li><p>Generate a <strong>clean conventional commit message</strong></p>
</li>
<li><p>Run the commit automatically</p>
</li>
</ol>
<p>So the workflow becomes:</p>
<pre><code class="language-plaintext">git add .
sloth
</code></pre>
<p>And that's it.</p>
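<p>Under the hood, that two-command flow maps to a tiny pipeline. Here's a rough sketch in Python (not slothcommit's actual implementation, which is an npm package; <code>generate_message</code> is a stand-in for the AI call):</p>

```python
import subprocess

def staged_diff():
    # Step 1: read the staged changes, exactly what `git diff --cached` shows.
    result = subprocess.run(["git", "diff", "--cached"],
                            capture_output=True, text=True)
    return result.stdout

def commit_command(message):
    # Step 4: the command the tool runs once the AI returns a message.
    return ["git", "commit", "-m", message]

# Steps 2-3 (send the diff to a model and get a conventional commit
# message back) would plug in between, e.g.:
#   message = generate_message(staged_diff())
#   subprocess.run(commit_command(message))
```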
<p>Workflow diagram:</p>
<img src="https://cdn.hashnode.com/uploads/covers/63ef099546e2f1644c4f28c4/2d07e099-dc1a-43bf-9c6f-8514c496b5b3.png" alt="" style="display:block;margin:0 auto" />

<h3>What Slothcommit Does (and How It Works)</h3>
<p><strong>slothcommit</strong> is an <strong>AI-powered Git commit assistant</strong>.</p>
<p>It analyzes your staged changes and generates a commit message using the Conventional Commit format.</p>
<p>Example output:</p>
<pre><code class="language-plaintext">feat(auth): add refresh token middleware
</code></pre>
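<p>That output follows the Conventional Commit shape <code>type(scope): description</code>. A quick way to sanity-check a message against that shape (my own helper sketch, not part of slothcommit):</p>

```python
import re

# type, optional (scope), then ": description"
CONVENTIONAL = re.compile(
    r"^(feat|fix|docs|style|refactor|test|chore)(\([\w-]+\))?: .+$")

def is_conventional(message):
    # True when the message matches the Conventional Commit shape.
    return bool(CONVENTIONAL.match(message))
```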
<p>And while I'm writing this blog, the package is getting 100+ downloads every week, which is really encouraging to see!</p>
<p>If you haven’t tried it yet, check it out here:<br /><a href="https://www.npmjs.com/package/slothcommit">https://www.npmjs.com/package/slothcommit</a></p>
<p><strong>And if you like it, do give it a star:</strong> <a href="https://github.com/anshumancodes/sloth">https://github.com/anshumancodes/sloth</a></p>
]]></content:encoded></item><item><title><![CDATA[How to deploy your Backend on Google Cloud Platform (for Newbies).]]></title><description><![CDATA[Google Cloud Platform (GCP) provides user-friendly services such as Cloud Run, which facilitates the deployment of backend applications with serverless scaling. This guide demonstrates using a basic P]]></description><link>https://newsletter.anshumancdx.xyz/how-to-deploy-your-backend-on-google-cloud-platform-for-newbies</link><guid isPermaLink="true">https://newsletter.anshumancdx.xyz/how-to-deploy-your-backend-on-google-cloud-platform-for-newbies</guid><category><![CDATA[Cloud]]></category><category><![CDATA[GCP]]></category><category><![CDATA[deployment]]></category><dc:creator><![CDATA[Anshuman Praharaj]]></dc:creator><pubDate>Sun, 08 Mar 2026 15:24:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/Ai-edIkrJGo/upload/d9ba67ed4577a209f8cabdba0ecf9602.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Google Cloud Platform (GCP) provides user-friendly services such as Cloud Run, which facilitates the deployment of backend applications with serverless scaling. This guide demonstrates using a basic Python Flask "Hello World" application on Cloud Run. Being container-based, it is particularly suitable for beginners working with Node.js, Python, or similar backend technologies.</p>
<p>To get started, first install the Google Cloud CLI (gcloud) from the official site and run <code>gcloud init</code> to authenticate. After that, enable billing on a new or existing GCP project. Don't worry, you don't have to pay anything upfront, since new users get $300 in free credits from Google. Then make sure APIs like Cloud Run and Cloud Build are enabled with <code>gcloud services enable run.googleapis.com cloudbuild.googleapis.com</code>.</p>
<p>Grant the Cloud Build service account the <code>roles/run.builder</code> role using <code>gcloud projects add-iam-policy-binding PROJECT_ID --member=serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com --role=roles/run.builder</code>.</p>
<h2>our sample backend app</h2>
<p>Before we go deeper into the GCP side, let's create the sample Python backend we're going to deploy, because of course we need an app to deploy. To do so:</p>
<p>Create a directory <code>my-backend</code> and add <code>main.py</code> inside it.</p>
<p>Inside <code>main.py</code>, paste this:</p>
<pre><code class="language-python">import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return f"Hello from GCP backend! (Port: {os.environ.get('PORT', 8080)})"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
</code></pre>
<p>Add <code>requirements.txt</code> with <code>Flask~=3.0</code> and <code>gunicorn~=23.0</code> for production serving.</p>
<h2><strong>Containerize Your App</strong></h2>
<p>Cloud Run can build directly from source using buildpacks, so Python applications don't need a Dockerfile—this is handled by the <code>gcloud run deploy</code> command. If you prefer Docker, or your stack isn't covered by buildpacks, create a <code>Dockerfile</code> that exposes port 8080 and specifies <code>CMD ["gunicorn", "main:app"]</code>. To test locally, just run the app directly (e.g., <code>python main.py</code>) after installing the dependencies.</p>
<h2><strong>Deploy to Cloud Run</strong></h2>
<p>In your app directory, run:</p>
<pre><code class="language-shell">gcloud run deploy my-backend --source . --region us-central1 --allow-unauthenticated --port 8080
</code></pre>
<p>Accept the prompts for service name, region (e.g., <code>asia-south1</code> if your users are nearby), and public access. Deployment builds the container, pushes it to Artifact Registry, and provides a URL like <code>https://my-backend-xyz.run.app</code>. Visit the URL to verify that everything is running fine.</p>
<p>Cloud Run scales to zero when idle, fitting free tier limits (e.g., 2 million requests/month).</p>
<h2><strong>Configure and Scale</strong></h2>
<p>Set environment variables with <code>--set-env-vars KEY=VALUE</code> or CPU/memory via <code>--cpu 1 --memory 512Mi</code> during deploy. For production, add authentication (<code>--no-allow-unauthenticated</code>), custom domains, or connect to Cloud SQL for databases via VPC connectors. Monitor logs in the GCP Console under Cloud Run &gt; Logs; the service auto-scales based on traffic (up to 1,000 instances).</p>
<h2><strong>Connect Database</strong></h2>
<p>To utilize Cloud SQL with MySQL or PostgreSQL, first create an instance in the Console and obtain the connection string. During deployment, include the option <code>--add-cloudsql-instances INSTANCE_CONNECTION_NAME</code>.</p>
<p>For serverless applications, consider using Firestore (it's very convenient for new users), which offers 1 GiB of free storage and 50,000 reads per day on the free tier. Also make sure your application code uses the SQLAlchemy or <code>google-cloud-firestore</code> libraries, and add them to <code>requirements.txt</code>.</p>
<h2><strong>Best Practices</strong></h2>
<p>Use CI/CD with Cloud Build triggers on Git pushes for auto-deploys. Monitor costs via Billing dashboard—stay under free tier by deleting unused services with <code>gcloud run services delete my-backend</code>. For traffic splitting or rollbacks, use GCP Console's versioning; secure with Cloud Armor if needed.​</p>
<h2><strong>Troubleshooting</strong></h2>
<p>If the build fails, start by checking the logs in the <strong>Cloud Build</strong> section to see what went wrong. In many cases, the issue is simply that the application isn’t exposing <code>PORT=8080</code>, which <strong>Cloud Run expects by default</strong>. If you encounter permission errors, try running the required <strong>IAM role grants again</strong> and give it a minute or two for the changes to propagate. If the application deploys but doesn’t respond, double-check the <strong>health checks and startup probes</strong>, making sure the service is actually listening on port <code>8080</code>.</p>
<p>It’s also helpful to understand how this differs from <strong>App Engine deployments</strong>. <strong>Cloud Run</strong> provides much more flexibility because it runs containers, but that flexibility means you need to configure things like the listening port yourself. <strong>App Engine</strong>, on the other hand, is more opinionated and simpler for many Python projects: you typically just define the configuration in <code>app.yaml</code> and deploy with <code>gcloud app deploy</code>.</p>
]]></content:encoded></item><item><title><![CDATA[Understanding Database Sharding and Partitioning]]></title><description><![CDATA[Once you build a product and your user base grows, your data grows too. Managing and storing that data efficiently becomes crucial. This is where scaling comes in. You probably didn't start with the biggest database server when you built your small a...]]></description><link>https://newsletter.anshumancdx.xyz/understanding-database-sharding-and-partitioning</link><guid isPermaLink="true">https://newsletter.anshumancdx.xyz/understanding-database-sharding-and-partitioning</guid><category><![CDATA[sharding]]></category><category><![CDATA[partition]]></category><category><![CDATA[database]]></category><category><![CDATA[System Design]]></category><dc:creator><![CDATA[Anshuman Praharaj]]></dc:creator><pubDate>Sun, 27 Apr 2025 09:45:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745747011705/3963f1d8-8267-46ef-b019-5576bdc42b73.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Once you build a product and your user base grows, your data grows too. Managing and storing that data efficiently becomes crucial. This is where scaling comes in. You probably didn't start with the biggest database server when you built your small app or SaaS business, but eventually, you'll need to. That's when concepts like sharding and partitioning come into play.</p>
<p>Before we dive into how sharding and partitioning work and how they help with scaling, let's check out what a database server looks like and how it operates</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745742233630/b8128a0e-5e0e-468c-941a-0c26f7bdfc7a.png" alt class="image--center mx-auto" /></p>
<p>So, a database server is basically just a process or service running on a computer that uses the machine's disk to store data. Let's say you're running a MySQL or Postgres process on your AWS EC2. It runs on a port of that machine and opens up that port so your server or API can communicate with it.</p>
<p>Now, once your database is up and running in production and serving real users, you probably started with a small server with limited hardware because, honestly, why would you go for a big one when you don't have many users yet?</p>
<p>Now, imagine your app is getting tons of traffic, and your database server just can't keep up anymore. Your small server can handle about 120-150 writes per second, but you're getting 200 write requests per sec, and you're seeing bad metrics like longer query times and stuff. But you still need to keep your users happy, so you decide to beef up your current database server with more power—more RAM, more CPU, and more disk space. What you just did is called vertical scaling, which helps you handle more write requests. But now you're facing another problem: even though your writes are capped at 200, your reads have increased a lot. Using just that beefed-up server will increase costs, so what do you do? Well, you can create a read replica—a copy of the existing database that only handles reads.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745743974127/ee0e989c-c880-4c8d-9717-d482e944c835.png" alt class="image--center mx-auto" /></p>
<p>For now, the issue seems solved.</p>
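<p>The primary-plus-replica setup can be sketched as a tiny router in application code: writes must hit the primary, while reads can be served by the replica (server names here are just placeholders):</p>

```python
def pick_server(operation, primary="db-primary", replica="db-replica"):
    # Writes go to the primary so the replica can safely lag behind;
    # reads are offloaded to the replica to spare the primary.
    return primary if operation == "write" else replica
```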
<p>So, let's say your app is getting even more traffic, and you're hitting 1000 writes per second. Just like before, you think about scaling up your main database server to handle it, and bam, it works! Your database can now handle 1000 writes per second, and everything's running smoothly. But then, requests go up again, and now you have to handle 1250 writes per second. You'd think about scaling up your main database server again, right? But nope, your cloud provider says you've hit the limit and can't add more power. So, what do you do now? How are you going to keep serving your users?</p>
<p>This is when it's time to switch to horizontal scaling because vertical scaling has its limits. You can only upgrade one server so much. So, what's the plan? We split your write requests between two database servers, each handling 700 writes per second, and spread the data across these two nodes. Now, we can easily handle 1250 requests with two database servers.</p>
<p>Now, by adding one more server, we've split the load across two servers, so our system runs more efficiently.</p>
<p>So, here's the deal with sharding—when a database is <strong>sharded</strong>, it means the data is <strong>split up across a bunch of machines</strong> (often called <strong>computes</strong>, <strong>nodes</strong>, or <strong>servers</strong>). Each shard holds a chunk (a <em>subset</em>) of the entire data. So, if you've got 2 shards, you've got <strong>at least 2 different computes</strong> (physical or virtual) holding those shards.</p>
<p>In the example above, we added a new data node/server, made a shard, and split the data evenly across the shards, 50%-50%.</p>
<p><strong>Partitioning</strong> = <strong>breaking data into parts</strong>, <strong>within</strong> a <strong>single database system</strong></p>
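<p>To make the sharding idea concrete, here's a minimal hash-based routing sketch (a generic illustration, not any specific database's algorithm): the same key always hashes to the same shard, spreading data roughly evenly across the two nodes:</p>

```python
import hashlib

def shard_for(key, num_shards=2):
    # Hash the key so routing is deterministic: one key, one shard.
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % num_shards
```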
]]></content:encoded></item><item><title><![CDATA[Understanding the Key Differences Between SQL and NoSQL Databases]]></title><description><![CDATA[When building modern applications, have you ever wondered: Which database system should I use—SQL or NoSQL? In this blog, we won't discuss choosing between them, but instead, let's explore how SQL and NoSQL databases differ, how they operate, and wha...]]></description><link>https://newsletter.anshumancdx.xyz/understanding-the-key-differences-between-sql-and-nosql-databases</link><guid isPermaLink="true">https://newsletter.anshumancdx.xyz/understanding-the-key-differences-between-sql-and-nosql-databases</guid><category><![CDATA[Databases]]></category><category><![CDATA[SQL]]></category><category><![CDATA[storage engine]]></category><category><![CDATA[NoSQL]]></category><dc:creator><![CDATA[Anshuman Praharaj]]></dc:creator><pubDate>Sun, 20 Apr 2025 17:13:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745169044718/09223d63-31cf-4247-b3fc-f971857be9f8.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When building modern applications, have you ever wondered: <strong>Which database system should I use—SQL or NoSQL?</strong> In this blog, we won't discuss choosing between them, but instead, let's explore how SQL and NoSQL databases differ, how they operate, and what features and trade-offs they offer.</p>
<p>But wait, what truly makes a database a <em>database</em>?</p>
<p>When we think of databases, isn't it interesting that SQL often pops into our minds first? Yet, what really defines a database isn't just the query language—it's the architecture, the <em>engine</em> working behind the scenes. SQL is merely the tip of the iceberg.</p>
<p>When it comes to SQL databases, the storage engine is like the behind-the-scenes hero that takes care of how data is stored, fetched, and kept in check. Basically, it's the part of the database management system (DBMS) that deals with SQL queries at the disk level, figuring out how tables, indexes, and records are set up. Take MySQL, for example—it has different storage engines like InnoDB and MyISAM, each tuned for specific tasks. InnoDB, which is the default, supports ACID compliance, row-level locking, and foreign key constraints, making it great for transactional apps. It uses a clustered index setup where data is stored with the primary key, allowing for speedy read and write operations.</p>
<p>Now, let's talk about NoSQL databases. These guys ditch the old-school relational model to give you flexible, high-speed data storage that's perfect for today's apps. Interestingly, the underlying engine can be the same for both SQL and NoSQL. Take document-oriented databases like MongoDB, for example—they often use the WiredTiger storage engine. This engine is cool because it supports document-level locking, compression, and checkpointing, which means it can handle lots of tasks at once and use disk space efficiently. By default, WiredTiger stores data in a B-Tree, and it also offers a log-structured merge-tree (LSM) layout that makes writing data faster by batching writes in memory before saving them to disk.</p>
<p>On the other hand, wide-column stores like Apache Cassandra are built around LSM trees, which let them handle fast writes.</p>
<p><strong>What differentiates SQL and NoSQL databases</strong></p>
<p>At a high level, the core difference between SQL and NoSQL databases comes down to <strong>data model, schema flexibility, and consistency guarantees</strong>.</p>
<h4 id="heading-1-data-model">1. <strong>Data Model</strong></h4>
<ul>
<li><p><strong>SQL databases</strong> (also known as <em>relational databases</em>) use a <strong>structured, table-based</strong> format. Data is stored in rows and columns, and relationships are maintained via foreign keys and joins.</p>
</li>
<li><p><strong>NoSQL databases</strong> ditch the rigid structure and can be <strong>document-based (e.g., MongoDB), key-value (e.g., Redis), columnar (e.g., Cassandra), or graph-based (e.g., Neo4j)</strong>. The format is flexible and can adapt to unstructured or semi-structured data.</p>
</li>
</ul>
<h4 id="heading-2-schema">2. <strong>Schema</strong></h4>
<ul>
<li><p><strong>SQL</strong> requires a <strong>fixed schema</strong>—you define your tables, columns, and data types before inserting any data. Changes require migrations.</p>
</li>
<li><p><strong>NoSQL</strong> databases are <strong>schema-less</strong> or have dynamic schemas, allowing you to store different structures in the same collection or bucket without predefined schemas.</p>
</li>
</ul>
<h4 id="heading-3-scalability">3. <strong>Scalability</strong></h4>
<ul>
<li><p><strong>SQL databases</strong> are typically <strong>vertically scalable</strong>—you scale by upgrading hardware (CPU, RAM).</p>
</li>
<li><p><strong>NoSQL databases</strong> are <strong>horizontally scalable</strong>—you scale by adding more servers/nodes, which is great for distributed and cloud-native applications.</p>
</li>
</ul>
<h4 id="heading-4-transactions-and-consistency">4. <strong>Transactions and Consistency</strong></h4>
<ul>
<li><p><strong>SQL</strong> systems follow <strong>ACID</strong> properties (Atomicity, Consistency, Isolation, Durability), ensuring strong consistency and reliable transactions.</p>
</li>
<li><p><strong>NoSQL</strong> systems often follow <strong>BASE</strong> principles (Basically Available, Soft state, Eventually consistent), prioritizing availability and partition tolerance, often at the cost of immediate consistency (as per the CAP theorem).</p>
</li>
</ul>
<h4 id="heading-5-query-language">5. <strong>Query Language</strong></h4>
<ul>
<li><p><strong>SQL</strong> uses the <strong>Structured Query Language (SQL)</strong> for querying data—standardized and powerful, especially for complex joins.</p>
</li>
<li><p><strong>NoSQL</strong> databases use <strong>custom query APIs or languages</strong> tailored to their model (e.g., MongoDB uses a JSON-like query language, Redis uses command-based access).</p>
</li>
</ul>
<p>If you're building something with strict transactional needs—like banking—SQL is your go-to. But for high-speed, flexible, large-scale applications—think social media, IoT, analytics—NoSQL might be the better fit.</p>
<p>One of the main things that set SQL and NoSQL databases apart is the <strong>guarantees and trade-offs</strong> they decide to make. When someone is creating a new database system, they're usually targeting a <strong>specific problem or niche</strong>. It's all about the purpose. Depending on what they need, they might think, “I want features A, B, and C, but I can skip D, E, and F.” With SQL databases, these trade-offs are pretty strict because of <strong>standardization</strong>—a relational database needs to stick to certain rules: ACID compliance, a table format, and a fixed schema. But with NoSQL, there’s <strong>no such enforcement</strong>. That’s where NoSQL shines with its flexibility. Developers get to choose which constraints to let go of and which strengths to boost.</p>
<p>For example, <strong>RocksDB and LevelDB</strong> were created as <strong>embedded databases</strong>—they're built to run right inside an app without needing a big server. They weren't made to be super relational or transactional, but to provide quick, low-latency storage right at the edge. Similarly, some databases focus entirely on speed and minimal persistence by keeping everything <strong>in RAM</strong> instead of on a disk—<strong>Redis</strong> is the classic example. Originally designed as an in-memory key-value store, Redis skips durability guarantees for super-fast performance, making it ideal as a cache or for temporary data. This kind of flexibility—picking trade-offs based on what you actually need—is what makes NoSQL stand out. It’s not about one-size-fits-all; it’s about whatever works best for the job.</p>
<p><strong>How indexing and Join happen in SQL and NoSQL databases</strong></p>
<p>Before diving into Indexing and Join, let's first get what a Node is in this context. A <strong>node</strong> is basically a <strong>logical unit of data storage</strong> within an indexing or tree structure, designed to make reads, writes, and storage on disk super efficient.</p>
<p><strong>Indexing</strong></p>
<p>Whether you’re using a <strong>SQL</strong> database like <strong>PostgreSQL</strong> or a <strong>NoSQL</strong> database like <strong>MongoDB</strong> or <strong>Cassandra</strong>, <strong>indexing</strong> is fundamentally about one thing: <strong>speeding up read operations</strong> by minimizing the amount of data scanned.</p>
<p>At its core, the concept of indexing is the <strong>same across both paradigms</strong>. Think of a dictionary: without an index, you’d read every word one by one. With an index, you know where each letter section begins—making lookup far faster. That’s exactly what databases aim to do: reduce the number of rows or documents scanned to retrieve relevant data.</p>
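<p>That dictionary idea translates almost directly into code. A toy index (a plain in-memory sketch, nothing engine-specific) maps each value of a column to the row ids that hold it, so a lookup touches only the matching rows instead of scanning everything:</p>

```python
def build_index(rows, column):
    # Map each distinct value of `column` to the ids of rows containing it.
    index = {}
    for row_id, row in enumerate(rows):
        index.setdefault(row[column], []).append(row_id)
    return index

rows = [{"name": "amy", "age": 20},
        {"name": "bob", "age": 31},
        {"name": "cal", "age": 20}]
idx = build_index(rows, "age")
# idx[20] points at rows 0 and 2, so only those rows need to be fetched
```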
<p><strong>Joins</strong></p>
<p>Now coming to Joins one of the defining strengths of <strong>SQL databases</strong> is their ability to perform <strong>joins</strong>—that is, to combine data from multiple tables based on related keys (like foreign keys). Under the hood, SQL databases like MySQL or PostgreSQL perform joins by leveraging <strong>relational integrity</strong>, <strong>indexes</strong>, and optimized query planners. Since the data usually lives on a <strong>single machine (vertically scaled)</strong> or in a tightly controlled cluster, the database engine can efficiently <strong>fetch rows from multiple tables</strong>, align them by key, and return the joined result in one pass.</p>
<p>Now, in the case of <strong>NoSQL databases</strong>, it's all about <strong>scaling out</strong> and spreading data across different places. In systems like MongoDB, Cassandra, or DynamoDB, data is <strong>split up</strong> across lots of nodes using a partition key. This means that the data you need for a join might be on totally <strong>different machines</strong>. To make the join happen, you'd have to gather all the data onto one machine, which can crank up <strong>network delays</strong>, <strong>CPU stress</strong>, and <strong>memory use</strong>. It's not just a hassle; it can get <strong>crazy expensive</strong> if you're dealing with huge datasets. Because of this, <strong>NoSQL databases usually skip joins altogether</strong>, pushing for <strong>denormalized data models</strong> or <strong>application-level joins</strong>. Basically, the code that ties the data together runs in your backend, not in the database itself.</p>
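<p>An application-level join looks something like this in backend code (a hedged sketch with made-up field names): one side is indexed by key in memory, then the records are merged in the app rather than in the database:</p>

```python
def app_level_join(users, orders):
    # Index one side by its key, then attach the matching user to each order.
    users_by_id = {u["id"]: u for u in users}
    return [{**order, "user": users_by_id.get(order["user_id"])}
            for order in orders]

users = [{"id": 1, "name": "amy"}]
orders = [{"order_id": 10, "user_id": 1}]
joined = app_level_join(users, orders)
```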
<p>Now, to conclude: when you're picking between SQL and NoSQL databases, it's all about what problem you're trying to solve, not just going with what's trendy or what you've heard. SQL databases like MySQL or PostgreSQL are great if your data is neat, and you need everything to be super reliable with those strong ACID guarantees. They can handle big chunks of data—up to 1–5TB—without making things too complicated with sharding. They're also really good at handling joins because everything's usually on one machine, so it’s quick and reliable.</p>
<p>But if you're dealing with flexible data structures, need to spread your data across lots of servers, or are dealing with tons of writes, like for analytics or caching, NoSQL databases like MongoDB, Cassandra, or Redis are your go-to. Just keep in mind, because NoSQL systems are often spread out, doing joins can get pricey and slow since you have to gather all the data onto one machine, which messes with the whole scaling thing. Both types of databases use similar tricks to speed up searches, and you can tweak them for better performance, but they each have their own pros and cons. You’ve got to weigh those trade-offs, thinking about consistency, availability, and scalability based on what your business needs. In the end, there's no "best" database—just the one that fits your needs the best.</p>
]]></content:encoded></item><item><title><![CDATA[How Indexes Improve Database Read Performance]]></title><description><![CDATA[Have you ever noticed how everything in building apps or software engineering revolves around handling data and performing operations? It involves operations like create, read, update, and delete, all usually done on a database. Fascinating, isn't it...]]></description><link>https://newsletter.anshumancdx.xyz/how-indexes-improve-database-read-performance</link><guid isPermaLink="true">https://newsletter.anshumancdx.xyz/how-indexes-improve-database-read-performance</guid><category><![CDATA[Databases]]></category><category><![CDATA[SQL]]></category><category><![CDATA[backend]]></category><category><![CDATA[indexing]]></category><dc:creator><![CDATA[Anshuman Praharaj]]></dc:creator><pubDate>Mon, 14 Apr 2025 17:39:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1744652259554/8809f595-fc86-4022-b398-fbd02e9122d3.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Have you ever noticed how everything in building apps or software engineering revolves around handling data and performing operations? It involves operations like create, read, update, and delete, all usually done on a database. Fascinating, isn't it? The data stored in the database is then used for a variety of business logic and manipulation. So, at its core, it's all about data, right? Now, here's something intriguing: among these four (CRUD) operations, reading data happens more frequently than the others and occurs multiple times. We know that database operations can be quite costly due to various engineering constraints. Every query to a database consumes resources such as CPU, memory, and disk I/O. As the database size grows, these operations—especially those that require scanning large data sets—can slow down response times significantly. 
Furthermore, the need to handle concurrent requests, ensure data consistency, and manage network latencies adds another layer of complexity, doesn't it? So, optimizing these operations, especially the "read" operation, becomes crucial. And that's where an interesting database engineering concept comes into play—"indexing."</p>
<p>But what exactly is indexing?! Well, let's explore an example of indexing. Have you ever used a dictionary? Yes, the big book, not the Python dictionary :) When you're searching for a word in a dictionary, you begin with the first letter and locate the word from there. The first letter serves as an index. The words in the dictionary are arranged by their first letter at the highest level.</p>
<p>Alright, enough with the analogies and beating around the bush—let's dive into the technical stuff!</p>
<p>In any data-driven application, a well-designed database is the backbone of performance. One of the most critical optimizations is indexing—a technique that can dramatically speed up how quickly your database retrieves data. In this post, we'll see how indexes work behind the scenes and why they are essential for efficient data retrieval.</p>
<p><strong>The Cost of Disk I/O</strong></p>
<p>At its core, a database consists of records stored on disk. Consider a simple SQL table, such as a <strong>users</strong> table, with columns like <code>id</code>, <code>name</code>, <code>age</code>, <code>bio</code>, and <code>username</code>. Each row in this table is converted into a series of bytes, for instance, 200 bytes per row. Data is written to disk in blocks, often 4 KB or, for our example, 600 bytes. This means that even reading a small piece of data requires loading an entire block into memory.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744650364982/ae79a1ed-ef50-48fb-a569-dd0339f5aa43.png" alt class="image--center mx-auto" /></p>
<p>Imagine a table with 100 rows that needs 34 blocks to store all the data. In the worst-case scenario, if you need to scan the entire table, you have to read all 34 disk blocks. Even if each block read takes just one second, your query could take 34 seconds to finish. This is a significant performance cost, especially when dealing with millions of records or frequent queries for a large app.</p>
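<p>The arithmetic above can be checked in a few lines; the row size, block size, and row count are just the example's assumptions, not fixed constants:</p>

```javascript
// Block math for the example table: how many disk blocks does it occupy,
// and what does a worst-case full scan cost in block reads?
const ROW_SIZE = 200;   // bytes per row (example's assumption)
const BLOCK_SIZE = 600; // bytes per block -> 3 rows fit in one block
const NUM_ROWS = 100;

const totalBytes = NUM_ROWS * ROW_SIZE;            // 20,000 bytes
const blocks = Math.ceil(totalBytes / BLOCK_SIZE); // 34 blocks
console.log(`full table scan touches ${blocks} blocks`);
```

<p>At one second per block read (the worst case assumed above), that is 34 seconds for a single query.</p>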
<p><strong>Here's How Indexes Can Help You Out</strong></p>
<p>Think of indexes as your database's table of contents. Instead of going through every row one by one to find what you need, an index cuts down the number of disk reads you have to do. Here's the scoop:</p>
<ul>
<li><p>Efficient Data Mapping<strong>:</strong> An index is like a mini table that links key values (like <code>age</code> or <code>name</code>) to where the records are. So, if you're looking for users who are 20 years old, the index has pairs of the indexed field and the row ID.</p>
</li>
<li><p>Sorted Structure<strong>:</strong> Since the index is sorted by the column you're interested in, a query can quickly jump to the right spot for any value (like 20) and only read what's needed, saving you from unnecessary reads. Let’s break down a simplified example:</p>
</li>
</ul>
<ol>
<li><p><strong>Without an Index:</strong></p>
<ul>
<li>The database has to go through all 34 blocks to find every row with <code>age = 20.</code></li>
</ul>
</li>
<li><p><strong>With an Index:</strong></p>
<ul>
<li><p>The index is tiny (just 8 bytes per entry in our example) and might only need 2 disk blocks.</p>
</li>
<li><p>The database checks these 2 index blocks to find the right row IDs. Then, it only reads the 2 blocks that have the actual data.</p>
</li>
<li><p>All in all, this means just 4 disk block reads—a big drop from 34 blocks.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744651617541/ad2db154-937e-4637-8c41-75c41f20d8fb.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
</ol>
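<p>The two scenarios can be sketched as a toy simulation. The numbers (3 rows per block, 2 blocks to read the index) mirror the example above; a real index is a B-tree on disk, not a JavaScript Map:</p>

```javascript
// 100 rows packed 3-per-block, so the table occupies 34 blocks.
const rows = Array.from({ length: 100 }, (_, id) => ({ id, age: 18 + (id % 30) }));
const blockOf = (rowId) => Math.floor(rowId / 3);

// Without an index: every block must be read to find age = 20.
function fullScan(age) {
  const touched = new Set(rows.map((r) => blockOf(r.id)));
  const hits = rows.filter((r) => r.age === age).length;
  return { hits, blockReads: touched.size };
}

// With an index: an (age -> rowIds) map points straight at the data blocks.
const index = new Map();
for (const r of rows) {
  if (!index.has(r.age)) index.set(r.age, []);
  index.get(r.age).push(r.id);
}
const INDEX_READ_COST = 2; // assume reading the small index costs 2 blocks

function indexedLookup(age) {
  const rowIds = index.get(age) ?? [];
  const dataBlocks = new Set(rowIds.map(blockOf));
  return { hits: rowIds.length, blockReads: INDEX_READ_COST + dataBlocks.size };
}

console.log(fullScan(20).blockReads);      // 34 block reads
console.log(indexedLookup(20).blockReads); // 2 index reads + a handful of data blocks
```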
<p>This kind of reduction (an 8x improvement in our example) can turn a slow, expensive task into a much quicker one. Just think about the impact in a real-world system where database speed really matters.</p>
<p><strong>Behind the Scenes: How It Works</strong></p>
<p>When you run a query like <code>SELECT * FROM users WHERE age = 20</code>, here's the lowdown:</p>
<ul>
<li><p>Step 1: Index Scan<br />  The system quickly checks the small index table to find where the value <code>20</code> pops up. Since the index is smaller and sorted, this scan is super fast, using just a couple of blocks.</p>
</li>
<li><p>Step 2: Data Fetch<br />  With the row IDs from the index, the system goes straight to the right rows in the main table. This skips the need to scan the whole table.</p>
</li>
<li><p>Net Result:<br />  The time it takes to get the query results is way shorter because there are fewer disk I/O operations.</p>
</li>
</ul>
<p>You see, in environments where queries are executed frequently or databases are large, optimizing the read operations makes a dramatic difference. Without proper indexing, even a simple query might lead to a full table scan, causing heavy disk I/O that can degrade performance and overwhelm system resources. Conversely, with indexing, the performance gains are substantial, allowing databases to scale and run queries at lightning speed.</p>
]]></content:encoded></item><item><title><![CDATA[How to Build Node.js Auth: Exploring Stateless vs Stateful auth]]></title><description><![CDATA[Authentication is one of the major component of any application you use , now am pretty sure you know why we need auth , let me tell you anyway : Authentication is like a bouncer for your app, making sure only the right people get in. It keeps your d...]]></description><link>https://newsletter.anshumancdx.xyz/how-to-build-nodejs-auth-exploring-stateless-vs-stateful-auth</link><guid isPermaLink="true">https://newsletter.anshumancdx.xyz/how-to-build-nodejs-auth-exploring-stateless-vs-stateful-auth</guid><category><![CDATA[Node.js]]></category><category><![CDATA[Express.js]]></category><category><![CDATA[authentication]]></category><category><![CDATA[JWT token,JSON Web,Token,Token authentication,Access token,JSON token,JWT security,JWT authentication,Token-based authentication,JWT decoding,JWT implementation]]></category><category><![CDATA[Web Development]]></category><dc:creator><![CDATA[Anshuman Praharaj]]></dc:creator><pubDate>Fri, 25 Oct 2024 20:40:35 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1729886532526/1787586b-1a08-41ce-a4c6-facfb662b313.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Authentication is one of the major components of any application you use. Now, I'm pretty sure you know why we need auth, but let me tell you anyway: authentication is like a bouncer for your app, making sure only the right people get in. It keeps your data safe and ensures users can access their accounts securely.</p>
<p>Now, if as a newbie developer you've heard about stateful and stateless auth, googled it, and are reading this blog post to find out what exactly they are, let's begin: what are these auth systems, and how do you build one? (We'll build one at the end.)</p>
<h2 id="heading-what-is-stateful-authentication"><strong>What is Stateful Authentication?</strong></h2>
<p>Stateful authentication is a method where the server retains session state for each authenticated user. This typically involves storing session data on the server, such as a unique session token or ID for each user. When a user logs in, the server verifies their identity by checking this session ID, ensuring both authentication and proper authorization.</p>
<p><strong>The Challenge of Session Management</strong></p>
<p>One key drawback of stateful authentication is that if the session memory is deleted on the backend, the session ID held by the client becomes completely useless. Because of this limitation, stateful authentication is often used for shorter sessions.</p>
<p>In such cases, users may need to log in each time they access the application. To sustain this authentication system, we ultimately have to store and update session tokens in a database. Each time a user logs in, a database call is required to validate their credentials. This can lead to a significant number of database read operations, increasing costs and resource consumption.</p>
<p>However, there are still use cases for session-based auth.</p>
<p><strong>Use Cases for Stateful Authentication</strong></p>
<p>Despite the above-stated issues, some applications still rely on stateful authentication. A good example of this is banking apps, which prioritize security. In these applications, users must validate their login information every time they access their accounts to ensure maximum security; this is why you have to log into the YONO app every time you want to use it, and why it automatically logs you out [India-specific example].</p>
<p><strong>Advantages of Stateful Authentication</strong></p>
<ul>
<li><p><strong>High Security</strong>: Each session has a unique session ID, making it difficult for unauthorized users to gain access.</p>
</li>
<li><p><strong>Simplicity</strong>: The implementation and management of stateful authentication can be pretty straightforward.</p>
</li>
</ul>
<p><strong>Disadvantages of Stateful Authentication</strong></p>
<ul>
<li><p><strong>Resource Intensive</strong>: As the number of logged-in users increases, so does the demand on server resources.</p>
</li>
<li><p><strong>Limited Third-Party Integration</strong>: It can be challenging for third-party applications to utilize your credentials effectively.</p>
</li>
</ul>
<p>Now let's look into Stateless Auth.</p>
<h2 id="heading-what-is-stateless-authentication"><strong>What is Stateless Authentication?</strong></h2>
<p>In contrast to stateful authentication, stateless authentication does not rely on the server to maintain session state. Instead, each request from a client contains all the necessary information to verify the user's identity and authorization. This method typically uses tokens like JSON Web Tokens (JWTs).</p>
<p>But <strong>how does Stateless Authentication work?</strong></p>
<p>With stateless authentication, the credentials (payload) are stored in a token that is signed using a secret key. The payload itself is just base64 encoded, so anyone can peek at it, but that doesn’t compromise security. The real magic is in the signature: if someone tries to mess with the token, the server will reject it because the signature won’t match. It’s like a tamper-proof seal!</p>
<p>When the server receives a token, it verifies it against this secret key. If any changes are made to the token data (payload) without the secret key, the server will invalidate the token and deny access.</p>
<p>Cool, now that we are done with the theory, <strong>let's build a very simple JWT-based auth system in Express.js</strong> with signup, login, and a simple auth middleware that allows only authenticated users to access the “/protected” route.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># initiate a project by :</span>

mkdir auth-app
<span class="hljs-built_in">cd</span> auth-app
npm init 

<span class="hljs-comment"># hit enter for every prompt that comes after npm init</span>
</code></pre>
<p>Once you are done setting up the project folder, let's install Express, jsonwebtoken, cookie-parser, and cors (the app below uses all four):</p>
<pre><code class="lang-bash"><span class="hljs-comment"># inside auth-app , lets install express and jwt first</span>

npm i express jsonwebtoken cookie-parser cors
</code></pre>
<p>Once the packages are installed, let's create a JS file for our auth app:</p>
<pre><code class="lang-bash"><span class="hljs-comment">#[bash]</span>
touch app.js
</code></pre>
<p>Now that the setup is done, let's write our auth app inside app.js :)</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> express = <span class="hljs-built_in">require</span>(<span class="hljs-string">"express"</span>);
<span class="hljs-keyword">const</span> fs = <span class="hljs-built_in">require</span>(<span class="hljs-string">"fs"</span>);
<span class="hljs-keyword">const</span> cors = <span class="hljs-built_in">require</span>(<span class="hljs-string">"cors"</span>);
<span class="hljs-keyword">const</span> jwt = <span class="hljs-built_in">require</span>(<span class="hljs-string">'jsonwebtoken'</span>);
<span class="hljs-keyword">const</span> cookieParser = <span class="hljs-built_in">require</span>(<span class="hljs-string">'cookie-parser'</span>);
<span class="hljs-keyword">const</span> JWT_SECRET= <span class="hljs-string">"#fhdnenc"</span> <span class="hljs-comment">// KEEP THIS IN .env file in real project due to security </span>
<span class="hljs-keyword">const</span> app = express();
app.use(express.json());
app.use(cookieParser());
app.use(cors());

<span class="hljs-keyword">const</span> users = [];

app.get(<span class="hljs-string">"/"</span>, <span class="hljs-function">(<span class="hljs-params">req, res</span>) =&gt;</span> {
  res.send(<span class="hljs-string">"Welcome to the auth app"</span>);
});

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">tokengenerator</span>(<span class="hljs-params">{ payload }</span>) </span>{
  <span class="hljs-keyword">return</span> jwt.sign(payload, JWT_SECRET, { <span class="hljs-attr">expiresIn</span>: <span class="hljs-string">'1h'</span> }); <span class="hljs-comment">// Use '1h' for clarity</span>
}

app.post(<span class="hljs-string">"/signup"</span>, <span class="hljs-function">(<span class="hljs-params">req, res</span>) =&gt;</span> {
  <span class="hljs-keyword">const</span> { name, email, password } = req.body;
  <span class="hljs-keyword">if</span> (!name || !email || !password) {
    <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">400</span>).send(<span class="hljs-string">"Please fill all fields"</span>);
  }

  <span class="hljs-keyword">const</span> user = {
    name,
    email,
    password, <span class="hljs-comment">// NOTE: in a real app, hash this (e.g. with bcrypt) before storing</span>
  };

  users.push(user);

  res.status(<span class="hljs-number">200</span>).json({ <span class="hljs-attr">message</span>: <span class="hljs-string">"Signup successful"</span>, <span class="hljs-attr">user</span>: { name, email } });
});

app.post(<span class="hljs-string">"/login"</span>, <span class="hljs-function">(<span class="hljs-params">req, res</span>) =&gt;</span> {
  <span class="hljs-keyword">const</span> { email, password } = req.body;
  <span class="hljs-keyword">const</span> user = users.find(<span class="hljs-function">(<span class="hljs-params">user</span>) =&gt;</span> user.email === email &amp;&amp; user.password === password);

  <span class="hljs-keyword">if</span> (!user) {
    <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">400</span>).send(<span class="hljs-string">"Invalid email or password"</span>);
  }

  <span class="hljs-keyword">const</span> token = tokengenerator({ <span class="hljs-attr">payload</span>: { <span class="hljs-attr">name</span>: user.name, <span class="hljs-attr">email</span>: user.email } });
  res.cookie(<span class="hljs-string">"token"</span>, token, { <span class="hljs-attr">httpOnly</span>: <span class="hljs-literal">true</span> }); <span class="hljs-comment">// httpOnly keeps the token out of client-side JS</span>
  <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">200</span>).json({ <span class="hljs-attr">message</span>: <span class="hljs-string">"Login successful"</span> });
});

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">authvalidator</span>(<span class="hljs-params">req, res, next</span>) </span>{
  <span class="hljs-keyword">const</span> token = req.cookies.token; <span class="hljs-comment">// Retrieve token from cookies</span>

  <span class="hljs-keyword">if</span> (!token) {
    <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">401</span>).send(<span class="hljs-string">"Unauthorized"</span>);
  }

  jwt.verify(token, JWT_SECRET, <span class="hljs-function">(<span class="hljs-params">err, decoded</span>) =&gt;</span> {
    <span class="hljs-keyword">if</span> (err) {
      <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">400</span>).send(<span class="hljs-string">"Invalid token"</span>);
    }

    req.user = decoded;
    next();
  });
}

app.get(<span class="hljs-string">"/protected"</span>, authvalidator, <span class="hljs-function">(<span class="hljs-params">req, res</span>) =&gt;</span> {
  res.send(<span class="hljs-string">`Welcome <span class="hljs-subst">${req.user.name}</span> to the protected page`</span>);
});

<span class="hljs-comment">// Save users periodically</span>
<span class="hljs-built_in">setInterval</span>(<span class="hljs-function">() =&gt;</span> {
  fs.writeFileSync(<span class="hljs-string">"data.json"</span>, <span class="hljs-built_in">JSON</span>.stringify(users));
}, <span class="hljs-number">3600</span> * <span class="hljs-number">1000</span>); <span class="hljs-comment">// Save every hour</span>

app.listen(<span class="hljs-number">4000</span>, <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Server is running on port 4000"</span>);
});
</code></pre>
<p>Now let's run the auth app:</p>
<pre><code class="lang-bash">node app.js
</code></pre>
<p>Or you can configure your package.json like this:</p>
<pre><code class="lang-bash">{
  <span class="hljs-string">"name"</span>: <span class="hljs-string">"express-auth"</span>,
  <span class="hljs-string">"version"</span>: <span class="hljs-string">"1.0.0"</span>,
  <span class="hljs-string">"main"</span>: <span class="hljs-string">"index.js"</span>,
  <span class="hljs-string">"scripts"</span>: {
    <span class="hljs-string">"test"</span>: <span class="hljs-string">"echo \"Error: no test specified\" &amp;&amp; exit 1"</span>,
    <span class="hljs-string">"server"</span>: <span class="hljs-string">"node app.js"</span>
  },
  <span class="hljs-string">"author"</span>: <span class="hljs-string">""</span>,
  <span class="hljs-string">"license"</span>: <span class="hljs-string">"ISC"</span>,
  <span class="hljs-string">"description"</span>: <span class="hljs-string">""</span>,
  <span class="hljs-string">"dependencies"</span>: {
    <span class="hljs-string">"cookie-parser"</span>: <span class="hljs-string">"^1.4.7"</span>,
    <span class="hljs-string">"cors"</span>: <span class="hljs-string">"^2.8.5"</span>,
    <span class="hljs-string">"express"</span>: <span class="hljs-string">"^4.21.1"</span>,
    <span class="hljs-string">"jsonwebtoken"</span>: <span class="hljs-string">"^9.0.2"</span>
  }
}
</code></pre>
<p>And run the app like this:</p>
<pre><code class="lang-bash">npm run server
</code></pre>
<p>In a real-world application, we DON'T store the JWT secret in app.js, nor do we write all the routes and the token generator in a single file; for the sake of understanding, I have written all the code in one file.</p>
<p>You can also test this app/API using Postman or the Thunder Client extension inside VS Code.</p>
<p>by : <a target="_blank" href="https://anshumancdx.xyz/">anshumancdx</a></p>
]]></content:encoded></item><item><title><![CDATA[understanding NodeJs Versioning]]></title><description><![CDATA[Understanding Node.js versioning is crucial for maintaining the health, reliability, and security of the JavaScript ecosystem, particularly when working with npm packages.
Let's  understand it with an example
Lets take my Nodejs version that is 16.20...]]></description><link>https://newsletter.anshumancdx.xyz/understanding-nodejs-versioning</link><guid isPermaLink="true">https://newsletter.anshumancdx.xyz/understanding-nodejs-versioning</guid><category><![CDATA[Node.js]]></category><category><![CDATA[npm]]></category><dc:creator><![CDATA[Anshuman Praharaj]]></dc:creator><pubDate>Sat, 27 Jan 2024 18:48:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1729885142270/6529e38d-8dd6-4e8a-92ba-d35c8c70917d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Understanding Node.js versioning is crucial for maintaining the health, reliability, and security of the JavaScript ecosystem, particularly when working with npm packages.</p>
<p>Let's understand it with an example.</p>
<p>Let's take my Node.js version, which is 16.20.2. This version has three parts, as you can see. To understand it, let's break them down individually.<br />-- first part (the patch version)<br />The last digit of v16.20.2, in this case 2, signifies minor bug fixes and doesn't have much significance; you can update your Node to the latest, or ignore it if only the last digit of the version has changed in the changelog.</p>
<p>-- second part (the minor version)<br />Now let's go to the 2nd part of v16.20.2, which is 20. This part of Node.js versioning signifies feature updates, bug fixes, or security updates, so updating to this version on release is recommended.</p>
<p>-- 3rd part (the major version)</p>
<p>Now we have the 3rd part of the version v16.20.2, which is 16. This signifies a major change, also known as a breaking update, which means it can break code in case of a version mismatch;<br />so it's a must to use the specific major version an application was built against.</p>
<p>Another thing in Node.js versioning is the caret (^), used in the package.json file to specify a range of compatible versions for a dependency. It indicates that you will accept any version that is compatible with the specified version, up to (but not including) the next major version.<br />Example: ^x.y.z allows versions from x.y.z up to, but not including, (x+1).0.0.</p>
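<p>Here's a tiny sketch of what the caret accepts. This is simplified and hypothetical: npm's real semver rules also special-case 0.x versions, which this ignores:</p>

```javascript
// Does `version` fall inside the range `^x.y.z`?
function satisfiesCaret(version, caretRange) {
  const parse = (v) => v.replace('^', '').split('.').map(Number);
  const [maj, min, pat] = parse(version);
  const [rMaj, rMin, rPat] = parse(caretRange);
  if (maj !== rMaj) return false;      // the caret never crosses a major version
  if (min !== rMin) return min > rMin; // a newer minor is fine, an older one is not
  return pat >= rPat;                  // same minor: need at least that patch
}

console.log(satisfiesCaret('4.21.1', '^4.18.0')); // true  - newer minor, same major
console.log(satisfiesCaret('5.0.0',  '^4.18.0')); // false - next major is excluded
```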
<p>Btw, you can always check your Node version by typing <code>node --version</code>.</p>
]]></content:encoded></item><item><title><![CDATA[Introduction to http]]></title><description><![CDATA[Http or hyper text transfer protocol is transfer protocol on which the whole web relies on, http makes it possible for us to load web pages and that's the reason it is essential to learn about it.
Hypertext Transfer Protocol (HTTP) is an application-...]]></description><link>https://newsletter.anshumancdx.xyz/introduction-to-http</link><guid isPermaLink="true">https://newsletter.anshumancdx.xyz/introduction-to-http</guid><category><![CDATA[Computer Science]]></category><category><![CDATA[http]]></category><category><![CDATA[Web Development]]></category><dc:creator><![CDATA[Anshuman Praharaj]]></dc:creator><pubDate>Sat, 27 Jan 2024 09:58:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1706349450461/3f6a3b9a-b320-45c1-8704-48264825a899.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>HTTP, or Hypertext Transfer Protocol, is the transfer protocol on which the whole web relies. HTTP makes it possible for us to load web pages, and that's the reason it is essential to learn about it.</p>
<p>Hypertext Transfer Protocol (HTTP) is an application-layer protocol used to send documents like HTML. It makes communication between web browsers and web servers possible.</p>
<p>A typical HTTP flow works like this: a client makes a request to a server, and the server sends a response message back to the client. In other words, requests are initiated by the recipient (the client), usually the browser. The complete document (the DOM) is reconstructed from the different sub-documents fetched: text, the layout of the web page, images, videos (static resources), scripts, etc.</p>
<p>Let's understand more about an HTTP request and its components.</p>
<p><strong>What is in an HTTP request?</strong></p>
<p>An HTTP request is the way the client asks for information from a web server. When you enter a URL in your browser's address bar, click on a link, or submit a form on a webpage, your browser generates an HTTP request to the server hosting the website.</p>
<p>An HTTP request carries a series of encoded data with different types of information: the HTTP method, request headers, an optional HTTP body, and the HTTP version.</p>
<p>Let's deep dive into how these requests work, and what role the information in the request plays!</p>
<p><strong>HTTP methods</strong></p>
<p>An HTTP method is like a set of instructions telling the server what to do when you ask for something. Think of them as action words, or sometimes people call them HTTP verbs. The two main ones are <code>GET</code> and <code>POST</code>:</p>
<p>- The <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/GET"><strong>GET</strong></a> method requests a representation of the specified resource. Requests using <code>GET</code> should only retrieve data.</p>
<p>- The <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/POST"><strong>POST</strong></a> method sends data to a server so it may change its state. This is the method often used in HTML forms to send information or data to the server.</p>
<p> <strong>HTTP request headers</strong></p>
<p>Request headers contain text information stored in key-value pairs, and they are included in every HTTP request (and response, more on that later). These headers communicate core information, such as what browser the client is using and what data is being requested.</p>
<p>here is how it looks :</p>
<p><img src="https://firebasestorage.googleapis.com/v0/b/anshumancdx.appspot.com/o/blogImgs%2Frequestheader.png?alt=media&amp;token=5101acb3-2f9d-4b03-9228-e99493fd568f" alt="http-header-img" /></p>
<p><strong>What is in an HTTP request body?</strong></p>
<p>The body of an HTTP request is the section that carries the actual data being sent to the server. It includes information like usernames, passwords, or any other data entered into a form on a website. This data is crucial for the server to process and respond accordingly.</p>
<p>For example, when you submit a form on a webpage, the details you enter, such as your name and email address, are included in the body of the HTTP request that gets sent to the server for processing. This separation of headers (containing metadata) and the body (containing actual data) allows for efficient communication between the client and the server on the web.</p>
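<p>Since an HTTP/1.1 request is ultimately just text, the parts described above (method, path, headers, blank line, body) can be assembled by hand; this builder is purely illustrative, and the host and path are made up:</p>

```javascript
// Build a raw HTTP/1.1 request: request line, header lines, blank line, body.
function buildRequest({ method, path, headers, body = '' }) {
  const headerLines = Object.entries(headers)
    .map(([name, value]) => `${name}: ${value}`)
    .join('\r\n');
  return `${method} ${path} HTTP/1.1\r\n${headerLines}\r\n\r\n${body}`;
}

const request = buildRequest({
  method: 'POST',
  path: '/login',
  headers: { Host: 'example.com', 'Content-Type': 'application/json' },
  body: JSON.stringify({ user: 'alice' }),
});

console.log(request);
// POST /login HTTP/1.1
// Host: example.com
// Content-Type: application/json
//
// {"user":"alice"}
```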
<p><strong>What is an HTTP response?</strong></p>
<p>An HTTP response is what clients receive from the server in return for an HTTP request. These responses provide the information the client asked for. An HTTP response has 3 parts:</p>
<ol>
<li><p>status code</p>
</li>
<li><p>response headers</p>
</li>
<li><p>Body</p>
</li>
</ol>
<p>Let me break it down further, one by one:</p>
<p><strong>HTTP status code:</strong> status codes are three-digit codes that provide context about the result of an HTTP request made by a client to a server, and there are 5 categories of them:</p>
<ol>
<li><p>1xx Informational</p>
</li>
<li><p>2xx Success</p>
</li>
<li><p>3xx Redirection</p>
</li>
<li><p>4xx Client Error</p>
</li>
<li><p>5xx Server Error</p>
</li>
</ol>
<p>Here, "xx" can be anything from 00 to 99.</p>
<ul>
<li><p><strong>2xx (Success):</strong> Starting with '2' indicates a successful request. For instance, '200 OK' signifies that the client's request for a webpage was properly completed.</p>
</li>
<li><p><strong>4xx (Client Error):</strong> Starting with '4' means there was an error on the client side. A common example is '404 NOT FOUND,' which occurs when there's a mistake in the URL, like a typo.</p>
</li>
<li><p><strong>5xx (Server Error):</strong> Starting with '5' indicates an error on the server side. For example, '500 Internal Server Error' suggests that something went wrong within the server while processing the request.</p>
</li>
<li><p><strong>1xx (Informational):</strong> Starting with '1' denotes an informational response. However, these codes are not as commonly encountered during regular web browsing.</p>
</li>
<li><p><strong>3xx (Redirection):</strong> Starting with '3' signifies a redirection. These codes instruct the client to take further action to complete the request.</p>
</li>
</ul>
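<p>The first-digit rule above is easy to capture in code; the category names here are just shorthand for the standard classes:</p>

```javascript
// Map a status code to its class by looking at the first digit.
function statusCategory(code) {
  const classes = {
    1: 'Informational',
    2: 'Success',
    3: 'Redirection',
    4: 'Client Error',
    5: 'Server Error',
  };
  return classes[Math.floor(code / 100)] ?? 'Unknown';
}

console.log(statusCategory(200)); // Success
console.log(statusCategory(404)); // Client Error
console.log(statusCategory(503)); // Server Error
```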
<p><strong>HTTP Response Headers :</strong></p>
<p>Response headers provide additional context about the response and the data being sent in the response body, and they look something like this:</p>
<p><img src="https://www.cloudflare.com/img/learning/ddos/glossary/hypertext-transfer-protocol-http/http-response-headers.png" alt="HTTP response headers" /></p>
<p>(got this img from Google, because I'm too lazy to inspect and make a screenshot)</p>
<p><strong>HTTP response body:</strong></p>
<p>When you ask a website for something using your browser, like opening a webpage, the server usually sends back a response. If everything goes well (like a '200 OK' status), the server puts the requested info in the response body. For most web stuff, that info is in the form of HTML, the language browsers use to create webpages. So, your browser takes that HTML and turns it into the webpage you see!</p>
<p>See you in the next blog!</p>
]]></content:encoded></item></channel></rss>