<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Ushan’s Tech Blog]]></title><description><![CDATA[Sharing insights on modern web development, generative AI, AI integration techniques, and the latest technologies to help developers stay ahead.]]></description><link>https://blogs.ushan.me</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1755367203936/30d3b10c-8847-41dc-bdcc-5912fa65e212.png</url><title>Ushan’s Tech Blog</title><link>https://blogs.ushan.me</link></image><generator>RSS for Node</generator><lastBuildDate>Tue, 14 Apr 2026 02:40:17 GMT</lastBuildDate><atom:link href="https://blogs.ushan.me/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[LangChain for Beginners]]></title><description><![CDATA[What is LangChain?
LangChain is a powerful framework designed to simplify the development of applications powered by large language models (LLMs) like GPT-4, Claude, or Llama. Think of it as a toolkit that helps developers connect AI models to real-w...]]></description><link>https://blogs.ushan.me/langchain-for-beginners</link><guid isPermaLink="true">https://blogs.ushan.me/langchain-for-beginners</guid><category><![CDATA[generative ai]]></category><category><![CDATA[langchain]]></category><category><![CDATA[ai-agent]]></category><category><![CDATA[AI-automation]]></category><dc:creator><![CDATA[Ushan Chamod]]></dc:creator><pubDate>Mon, 01 Sep 2025 08:11:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756712674638/a8d9da53-ed1c-45bb-a79d-58a6d3b4807a.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-what-is-langchain">What is LangChain?</h1>
<p>LangChain is a powerful framework designed to simplify the development of applications powered by large language models (LLMs) like GPT-4, Claude, or Llama. Think of it as a toolkit that helps developers connect AI models to real-world data sources, create complex workflows, and build intelligent applications more efficiently.</p>
<hr />
<h1 id="heading-why-use-langchain">Why Use LangChain?</h1>
<h2 id="heading-the-problem-langchain-solves">The Problem LangChain Solves</h2>
<p>Working with LLMs directly can be challenging:</p>
<ul>
<li><p><strong>Limited Context:</strong> LLMs have knowledge cutoff dates and can't access real-time information.</p>
</li>
<li><p><strong>No Memory:</strong> Each interaction is independent - the model doesn't remember previous conversations.</p>
</li>
<li><p><strong>Complex Workflows:</strong> Building multi-step AI applications requires managing chains of operations.</p>
</li>
<li><p><strong>Data Integration:</strong> Connecting LLMs to your own data sources is complicated.</p>
</li>
</ul>
<h2 id="heading-the-langchain-solution">The LangChain Solution</h2>
<p>LangChain addresses these challenges by providing:</p>
<ul>
<li><p><strong>Data Connectivity:</strong> Easy integration with databases, APIs, and documents.</p>
</li>
<li><p><strong>Memory Management:</strong> Built-in conversation memory and context management.</p>
</li>
<li><p><strong>Chain Operations:</strong> Tools to create complex, multi-step workflows.</p>
</li>
<li><p><strong>Agent Capabilities:</strong> The ability for the AI to use tools and make decisions.</p>
</li>
</ul>
<hr />
<h1 id="heading-core-concepts">Core Concepts</h1>
<h2 id="heading-chains">Chains</h2>
<p>Chains are the backbone of LangChain applications. They represent sequences of operations that process input through multiple steps, allowing you to build complex workflows by connecting different components together.</p>
<p><strong>How Chains Work:</strong> Think of a chain like a factory assembly line. Each step in the chain performs a specific task and passes the result to the next step:</p>
<blockquote>
<p>User Question → Document Retrieval → Context Creation → LLM Response</p>
</blockquote>
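The assembly line above can be sketched in plain Python, with no LangChain required: each step is a function, and the "chain" simply pipes one step's output into the next. All names and the canned return values here are illustrative stand-ins, not real LangChain components.

```python
# A minimal sketch of the chain idea: each step is a plain function,
# and the chain pipes one step's output into the next.
def retrieve_documents(question):
    # Stand-in for a real retriever; returns canned context.
    return ["Python is a programming language created by Guido van Rossum."]

def build_context(docs):
    # Join retrieved documents into a single context string.
    return "\n".join(docs)

def call_llm(prompt):
    # Stand-in for a real LLM call.
    return f"Answering based on context: {prompt[:40]}..."

def run_chain(question):
    docs = retrieve_documents(question)
    context = build_context(docs)
    prompt = f"Context: {context}\nQuestion: {question}"
    return call_llm(prompt)

print(run_chain("What is Python?"))
```

LangChain's chain classes do exactly this kind of composition for you, while also handling prompt formatting, retries, and streaming.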
<h2 id="heading-prompts">Prompts</h2>
<p>Prompts are instructions you give to the language model. LangChain's prompt templates make it easy to create reusable, dynamic prompts that can be customized for different situations.</p>
<p><strong>Why Prompt Templates Matter:</strong></p>
<ul>
<li><p><strong>Consistency</strong>: Same format across your application</p>
</li>
<li><p><strong>Reusability</strong>: Write once, use many times</p>
</li>
<li><p><strong>Dynamic Content</strong>: Insert variables based on user input</p>
</li>
<li><p><strong>Easy Testing</strong>: Modify prompts without changing code</p>
</li>
</ul>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> langchain.prompts <span class="hljs-keyword">import</span> PromptTemplate

template = <span class="hljs-string">"You are a helpful assistant. Answer the question: {question}"</span>
prompt = PromptTemplate(
    input_variables=[<span class="hljs-string">"question"</span>],
    template=template
)

<span class="hljs-comment"># Generate the actual prompt</span>
formatted_prompt = prompt.format(question=<span class="hljs-string">"What is Python?"</span>)
print(formatted_prompt)
<span class="hljs-comment"># Output: "You are a helpful assistant. Answer the question: What is Python?"</span>
</code></pre>
<h2 id="heading-memory">Memory</h2>
<p>Memory is what makes your AI applications feel more natural and conversational. Without memory, each interaction with your AI is completely independent - it won't remember what you talked about just moments ago.</p>
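To make the idea concrete, here is a minimal sketch of what buffer-style memory does under the hood. This is an illustrative toy class, not the LangChain API: every exchange is appended to a buffer and replayed as a prefix in the next prompt, which is how the model appears to "remember" earlier turns.

```python
# A toy buffer memory (illustrative, not the LangChain API): store every
# exchange and replay the transcript as a prefix for the next prompt.
class SimpleBufferMemory:
    def __init__(self):
        self.turns = []

    def save_context(self, user_input, ai_output):
        # Record one human/AI exchange.
        self.turns.append((user_input, ai_output))

    def as_prompt_prefix(self):
        # Render the whole history as text to prepend to the next prompt.
        return "\n".join(f"Human: {u}\nAI: {a}" for u, a in self.turns)

memory = SimpleBufferMemory()
memory.save_context("Hi, my name is John", "Nice to meet you, John!")
print(memory.as_prompt_prefix())
```

LangChain's `ConversationBufferMemory` (shown later in this article) follows this pattern, with extra options for trimming and summarizing long histories.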
<h2 id="heading-agents">Agents</h2>
<p>Agents are the "smart" part of LangChain - they can think, plan, and decide which tools to use to accomplish a task. Unlike chains (which follow a fixed sequence), agents can dynamically choose their next action based on the situation.</p>
<p><strong>How Agents Work:</strong></p>
<ol>
<li><p><strong>Observe</strong>: Look at the current situation</p>
</li>
<li><p><strong>Think</strong>: Decide what to do next</p>
</li>
<li><p><strong>Act</strong>: Use a tool or provide an answer</p>
</li>
<li><p><strong>Repeat</strong>: Continue until the task is complete</p>
</li>
</ol>
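The observe → think → act → repeat loop can be sketched as a toy, self-contained example. Here a keyword check stands in for the LLM's "think" step, and the two tools are trivial stubs; a real agent would ask the model which tool to call and with what input.

```python
# A toy agent loop: "think" (a keyword check standing in for the LLM)
# picks a tool, "act" runs it, and the loop repeats until done.
def calculator(expression):
    return str(eval(expression))  # toy tool only; never eval untrusted input

def search(query):
    return "LangChain is a framework for LLM applications."

TOOLS = {"calculator": calculator, "search": search}

def run_agent(task, max_steps=3):
    observation = task
    for _ in range(max_steps):
        # Think: decide the next action (a real agent asks the LLM here).
        if any(ch.isdigit() for ch in observation):
            action = "calculator"
        else:
            action = "search"
        # Act: run the chosen tool on the current observation.
        observation = TOOLS[action](observation)
        # Stop once the tool output looks like a final answer.
        if action == "calculator" or "LangChain" in observation:
            return observation
    return observation

print(run_agent("2 + 2"))  # -> "4"
print(run_agent("what is LangChain"))
```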
<p><strong>Agent Components:</strong></p>
<ul>
<li><p><strong>LLM</strong>: The "brain" that makes decisions</p>
</li>
<li><p><strong>Tools</strong>: Functions the agent can use (search, calculator, database query, etc.)</p>
</li>
<li><p><strong>Agent Type</strong>: The reasoning strategy (ReAct, Plan-and-Execute, etc.)</p>
</li>
</ul>
<h2 id="heading-retrievers">Retrievers</h2>
<p>Retrievers are specialized components that find relevant information from large datasets or document collections. They're essential for building applications that can answer questions about your own data.</p>
<p><strong>How Retrievers Work:</strong></p>
<ol>
<li><p><strong>Index Creation</strong>: Documents are processed and stored in a searchable format</p>
</li>
<li><p><strong>Query Processing</strong>: User questions are converted into search queries</p>
</li>
<li><p><strong>Similarity Search</strong>: Find documents most relevant to the query</p>
</li>
<li><p><strong>Return Results</strong>: Provide the most relevant documents or passages</p>
</li>
</ol>
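The four steps above can be sketched with plain Python, using word overlap as a stand-in for real embedding similarity. The documents and scoring here are illustrative only; production retrievers use vector embeddings, as shown in the vector store section below.

```python
import re

# A minimal sketch of the retriever steps: index documents, turn the
# query into a bag of words, score by overlap, and return the best match.
def tokenize(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

documents = [
    "LangChain connects LLMs to external data sources.",
    "Python is a popular programming language.",
    "Vector stores hold document embeddings for similarity search.",
]

# 1. Index creation: precompute a token set per document.
index = [(doc, tokenize(doc)) for doc in documents]

def retrieve(query, k=1):
    # 2. Query processing: tokenize the question.
    q = tokenize(query)
    # 3. Similarity search: word overlap stands in for embedding similarity.
    ranked = sorted(index, key=lambda item: len(q & item[1]), reverse=True)
    # 4. Return results: the top-k most relevant documents.
    return [doc for doc, _ in ranked[:k]]

print(retrieve("Which store holds embeddings for similarity search?"))
```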
<hr />
<h1 id="heading-getting-started-installation-and-setup">Getting Started: Installation and Setup</h1>
<h2 id="heading-installing-langchain">Installing LangChain</h2>
<pre><code class="lang-bash">pip install langchain
pip install openai  <span class="hljs-comment"># or your preferred LLM provider</span>
</code></pre>
<h2 id="heading-basic-setup">Basic Setup</h2>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> os
<span class="hljs-keyword">from</span> langchain.llms <span class="hljs-keyword">import</span> OpenAI

<span class="hljs-comment"># Set your API key</span>
os.environ[<span class="hljs-string">"OPENAI_API_KEY"</span>] = <span class="hljs-string">"your-api-key-here"</span>

<span class="hljs-comment"># Initialize the language model</span>
llm = OpenAI(temperature=<span class="hljs-number">0.7</span>)
</code></pre>
<h2 id="heading-your-first-langchain-application">Your First LangChain Application</h2>
<p>Let's build a simple question-answering application:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> langchain.llms <span class="hljs-keyword">import</span> OpenAI
<span class="hljs-keyword">from</span> langchain.prompts <span class="hljs-keyword">import</span> PromptTemplate
<span class="hljs-keyword">from</span> langchain.chains <span class="hljs-keyword">import</span> LLMChain

<span class="hljs-comment"># Initialize the language model</span>
llm = OpenAI(temperature=<span class="hljs-number">0.7</span>)

<span class="hljs-comment"># Create a prompt template</span>
template = <span class="hljs-string">"""
You are a helpful AI assistant. Please answer the following question clearly and concisely:

Question: {question}
Answer:
"""</span>

prompt = PromptTemplate(
    input_variables=[<span class="hljs-string">"question"</span>],
    template=template
)

<span class="hljs-comment"># Create a chain</span>
chain = LLMChain(llm=llm, prompt=prompt)

<span class="hljs-comment"># Use the chain</span>
response = chain.run(<span class="hljs-string">"What is artificial intelligence?"</span>)
print(response)
</code></pre>
<hr />
<h1 id="heading-common-langchain-components">Common LangChain Components</h1>
<h2 id="heading-document-loaders">Document Loaders</h2>
<p>Load data from various sources:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> langchain.document_loaders <span class="hljs-keyword">import</span> TextLoader, PyPDFLoader, WebBaseLoader

<span class="hljs-comment"># Load a text file</span>
loader = TextLoader(<span class="hljs-string">"my_document.txt"</span>)
documents = loader.load()

<span class="hljs-comment"># Load a PDF (PyPDFLoader requires the pypdf package)</span>
pdf_loader = PyPDFLoader(<span class="hljs-string">"my_document.pdf"</span>)
pdf_docs = pdf_loader.load()
</code></pre>
<h2 id="heading-text-splitters">Text Splitters</h2>
<p>Break large documents into manageable chunks:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> langchain.text_splitter <span class="hljs-keyword">import</span> CharacterTextSplitter

text_splitter = CharacterTextSplitter(
    chunk_size=<span class="hljs-number">1000</span>,
    chunk_overlap=<span class="hljs-number">200</span>
)
texts = text_splitter.split_documents(documents)
</code></pre>
<h2 id="heading-vector-stores">Vector Stores</h2>
<p>Store and search documents using embeddings:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> langchain.embeddings <span class="hljs-keyword">import</span> OpenAIEmbeddings
<span class="hljs-keyword">from</span> langchain.vectorstores <span class="hljs-keyword">import</span> Chroma

embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(texts, embeddings)
</code></pre>
<h2 id="heading-retrieval-qa">Retrieval QA</h2>
<p>Create a question-answering system over your documents:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> langchain.chains <span class="hljs-keyword">import</span> RetrievalQA

qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type=<span class="hljs-string">"stuff"</span>,
    retriever=vectorstore.as_retriever()
)

response = qa.run(<span class="hljs-string">"What does the document say about AI?"</span>)
</code></pre>
<hr />
<h1 id="heading-building-a-complete-rag-application">Building a Complete RAG Application</h1>
<p>RAG (Retrieval-Augmented Generation) is one of the most popular LangChain use cases. Here's a complete example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> langchain.llms <span class="hljs-keyword">import</span> OpenAI
<span class="hljs-keyword">from</span> langchain.document_loaders <span class="hljs-keyword">import</span> TextLoader
<span class="hljs-keyword">from</span> langchain.text_splitter <span class="hljs-keyword">import</span> CharacterTextSplitter
<span class="hljs-keyword">from</span> langchain.embeddings <span class="hljs-keyword">import</span> OpenAIEmbeddings
<span class="hljs-keyword">from</span> langchain.vectorstores <span class="hljs-keyword">import</span> Chroma
<span class="hljs-keyword">from</span> langchain.chains <span class="hljs-keyword">import</span> RetrievalQA

<span class="hljs-comment"># 1. Load your documents</span>
loader = TextLoader(<span class="hljs-string">"knowledge_base.txt"</span>)
documents = loader.load()

<span class="hljs-comment"># 2. Split documents into chunks</span>
text_splitter = CharacterTextSplitter(chunk_size=<span class="hljs-number">1000</span>, chunk_overlap=<span class="hljs-number">0</span>)
texts = text_splitter.split_documents(documents)

<span class="hljs-comment"># 3. Create embeddings and vector store</span>
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(texts, embeddings)

<span class="hljs-comment"># 4. Create the QA chain</span>
llm = OpenAI(temperature=<span class="hljs-number">0</span>)
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type=<span class="hljs-string">"stuff"</span>,
    retriever=vectorstore.as_retriever()
)

<span class="hljs-comment"># 5. Ask questions</span>
question = <span class="hljs-string">"What are the main topics covered in the document?"</span>
answer = qa_chain.run(question)
print(answer)
</code></pre>
<hr />
<h1 id="heading-memory-in-langchain">Memory in LangChain</h1>
<p>Add conversation memory to maintain context:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> langchain.memory <span class="hljs-keyword">import</span> ConversationBufferMemory
<span class="hljs-keyword">from</span> langchain.chains <span class="hljs-keyword">import</span> ConversationChain

<span class="hljs-comment"># Create memory</span>
memory = ConversationBufferMemory()

<span class="hljs-comment"># Create conversation chain with memory</span>
conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=<span class="hljs-literal">True</span>
)

<span class="hljs-comment"># Have a conversation</span>
response1 = conversation.predict(input=<span class="hljs-string">"Hi, my name is John"</span>)
response2 = conversation.predict(input=<span class="hljs-string">"What's my name?"</span>)
</code></pre>
<hr />
<h1 id="heading-error-handling-and-debugging">Error Handling and Debugging</h1>
<h3 id="heading-common-issues-and-solutions">Common Issues and Solutions</h3>
<ul>
<li><p>API Key Errors: Ensure your API keys are properly set in environment variables.</p>
</li>
<li><p>Token Limits: Monitor input/output token usage and implement chunking strategies.</p>
</li>
<li><p>Slow Performance: Consider using smaller models or implementing caching.</p>
</li>
<li><p>Memory Issues: Clear memory periodically in long conversations.</p>
</li>
</ul>
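The "Token Limits" point above suggests chunking long inputs. A minimal sketch of the idea, mirroring the `chunk_size` / `chunk_overlap` parameters of `CharacterTextSplitter` (this is an illustrative character-window splitter, not the library's implementation):

```python
# Split long text into fixed-size windows with overlap, so each chunk
# fits the model's token budget while keeping some shared context.
def chunk_text(text, chunk_size=1000, chunk_overlap=200):
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap  # how far each window advances
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = chunk_text("abcdefghij" * 50, chunk_size=100, chunk_overlap=20)
print(len(chunks), len(chunks[0]))
```

Each chunk starts 80 characters after the previous one here, so the last 20 characters of one chunk reappear at the start of the next, which helps preserve context across chunk boundaries.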
<h3 id="heading-debugging-tips">Debugging Tips</h3>
<pre><code class="lang-python"><span class="hljs-comment"># Enable verbose mode to see what's happening</span>
chain = LLMChain(llm=llm, prompt=prompt, verbose=<span class="hljs-literal">True</span>)

<span class="hljs-comment"># Add error handling</span>
<span class="hljs-keyword">try</span>:
    response = chain.run(question)
<span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
    print(<span class="hljs-string">f"Error: <span class="hljs-subst">{e}</span>"</span>)
</code></pre>
<hr />
<p>LangChain simplifies the process of building sophisticated AI applications by providing a comprehensive framework for working with language models. Whether you're building a simple chatbot or a complex document analysis system, LangChain's modular approach and extensive library of components make it easier to create powerful, production-ready applications.</p>
<p>Start with the basics, experiment with different components, and gradually build more complex applications as you become comfortable with the framework. The key to success with LangChain is understanding how to chain together different components to create workflows that solve real-world problems.</p>
<p>Remember that LangChain is rapidly evolving, so stay updated with the latest documentation and community resources to make the most of this powerful framework.</p>
]]></content:encoded></item><item><title><![CDATA[React Compiler in Sinhala]]></title><description><![CDATA[React is a JavaScript library that keeps evolving, with new features added all the time. At React Conf 2024, an important, game-changing feature was announced for developers: the React Compiler.
In this article, we'll look at:

React Compiler...]]></description><link>https://blogs.ushan.me/react-compiler</link><guid isPermaLink="true">https://blogs.ushan.me/react-compiler</guid><category><![CDATA[React-compiler]]></category><category><![CDATA[React]]></category><dc:creator><![CDATA[Ushan Chamod]]></dc:creator><pubDate>Sat, 16 Aug 2025 17:27:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755365042409/4aad54c7-bb20-4fa3-9ff8-c58a588dc7fa.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>React is a JavaScript library that keeps evolving, with new features added all the time. At React Conf 2024, an important, game-changing feature was announced for developers: the <strong>React Compiler</strong>.</p>
<p>In this article, we'll look at:</p>
<ul>
<li><p>What is the React Compiler?</p>
</li>
<li><p>Why should you use the React Compiler?</p>
</li>
<li><p>How do you use the React Compiler?</p>
</li>
<li><p>What benefits does the React Compiler provide?</p>
</li>
</ul>
<hr />
<h1 id="heading-react-compiler">What is the React Compiler?</h1>
<p>Before diving into the React Compiler, let's first look at what <strong>“compile”</strong> means.</p>
<p>A <strong>compiler</strong> is a special program that translates the <strong>source code</strong> we write (for example, JavaScript, TypeScript, or C++ code) into <strong>machine code</strong> or <strong>optimized instructions</strong> the computer can understand.</p>
<ul>
<li><p>TypeScript compiler → translates TypeScript code into JavaScript.</p>
</li>
<li><p>Babel → translates modern JavaScript code into backward-compatible JavaScript.</p>
</li>
<li><p>C++ compiler → translates C++ code into machine code the CPU understands.</p>
</li>
</ul>
<p><strong>In simple terms:</strong><br />Compiling is <em>“the process of translating code that's easy for us to read into a form the computer understands.”</em></p>
<hr />
<p>Now let's see what the React Compiler actually is.</p>
<p>In React, we can use <strong>memo, useMemo, and useCallback</strong> to prevent <strong>unnecessary re-rendering</strong>. However, they often make the code more complex, and there's a real risk of using them incorrectly.</p>
<p>The React Compiler offers a solution to these problems. It is a <strong>static compiler</strong> that <strong>automatically optimizes</strong> your components at <strong>build time</strong>. That means you no longer need to scatter performance hacks (memo, useMemo, useCallback) throughout your code. The React Compiler analyzes the code itself and applies the necessary optimizations.</p>
<hr />
<h1 id="heading-react-compiler-1">Why Should You Use the React Compiler?</h1>
<p>Before discussing the advantages of the React Compiler, look at the difference between the two sample code snippets below.</p>
<h3 id="heading-manual-optimization-compiler">🔴 Manual Optimization (Before the Compiler)</h3>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> React, { useState, useCallback, memo } <span class="hljs-keyword">from</span> <span class="hljs-string">"react"</span>;

<span class="hljs-keyword">const</span> Button = memo(<span class="hljs-function">(<span class="hljs-params">{ onClick }</span>) =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Button rendered"</span>);
  <span class="hljs-keyword">return</span> <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">button</span> <span class="hljs-attr">onClick</span>=<span class="hljs-string">{onClick}</span>&gt;</span>Increment<span class="hljs-tag">&lt;/<span class="hljs-name">button</span>&gt;</span></span>;
});

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">App</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">const</span> [count, setCount] = useState(<span class="hljs-number">0</span>);

  <span class="hljs-keyword">const</span> handleClick = useCallback(<span class="hljs-function">() =&gt;</span> {
    setCount(<span class="hljs-function">(<span class="hljs-params">c</span>) =&gt;</span> c + <span class="hljs-number">1</span>);
  }, []);

  <span class="hljs-keyword">return</span> (
    <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">div</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">h1</span>&gt;</span>Count: {count}<span class="hljs-tag">&lt;/<span class="hljs-name">h1</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">Button</span> <span class="hljs-attr">onClick</span>=<span class="hljs-string">{handleClick}</span> /&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span></span>
  );
}
</code></pre>
<p>Here, <strong>memo</strong> and <strong>useCallback</strong> have to be used manually to prevent unnecessary re-renders.</p>
<hr />
<h3 id="heading-automatic-optimization-react-compiler">🟢 Automatic Optimization (With the React Compiler)</h3>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> React, { useState } <span class="hljs-keyword">from</span> <span class="hljs-string">"react"</span>;

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">Button</span>(<span class="hljs-params">{ onClick }</span>) </span>{
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Button rendered"</span>);
  <span class="hljs-keyword">return</span> <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">button</span> <span class="hljs-attr">onClick</span>=<span class="hljs-string">{onClick}</span>&gt;</span>Increment<span class="hljs-tag">&lt;/<span class="hljs-name">button</span>&gt;</span></span>;
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">App</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">const</span> [count, setCount] = useState(<span class="hljs-number">0</span>);

  <span class="hljs-keyword">const</span> handleClick = <span class="hljs-function">() =&gt;</span> {
    setCount(<span class="hljs-function">(<span class="hljs-params">c</span>) =&gt;</span> c + <span class="hljs-number">1</span>);
  };

  <span class="hljs-keyword">return</span> (
    <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">div</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">h1</span>&gt;</span>Count: {count}<span class="hljs-tag">&lt;/<span class="hljs-name">h1</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">Button</span> <span class="hljs-attr">onClick</span>=<span class="hljs-string">{handleClick}</span> /&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span></span>
  );
}
</code></pre>
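For the plain version above to be auto-memoized, the compiler has to be enabled in your build. A minimal setup sketch, assuming a Vite project with @vitejs/plugin-react (the plugin name is real, but exact versions and options may differ in your setup):

```javascript
// vite.config.js — enable the React Compiler via its Babel plugin.
// Install first: npm install -D babel-plugin-react-compiler
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [
    react({
      babel: {
        // The compiler runs as a Babel plugin at build time.
        plugins: [["babel-plugin-react-compiler", {}]],
      },
    }),
  ],
});
```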
<hr />
<h3 id="heading-react-compiler-advantages"><strong>Advantages of the React Compiler</strong></h3>
<ul>
<li><p><strong>Automatic Memoization:</strong></p>
<p>  No need to manually add hacks like memo or useCallback. The React Compiler analyzes the code and handles memoization where it's needed.</p>
</li>
<li><p><strong>Cleaner Code:</strong></p>
<p>  With fewer useCallback and useMemo wrappers, the code stays simpler and is easier to maintain.</p>
</li>
<li><p><strong>Optimized Performance:</strong></p>
<p>  Because the compiler applies the necessary optimizations at build time, unnecessary re-renders are avoided and the application runs at a good performance level.</p>
</li>
<li><p><strong>Better Developer Experience:</strong><br />  Developers can stay focused on building features instead of worrying about performance hacks.</p>
</li>
</ul>
<hr />
<h1 id="heading-limitations">Limitations</h1>
<ul>
<li><p>The compiler doesn't optimize automatically in every situation.</p>
</li>
<li><p>In cases such as complex objects, refs, and context-heavy usage, developer intervention may still be needed.</p>
</li>
</ul>
<hr />
]]></content:encoded></item></channel></rss>