
Commit 80824c7: insert "hero" image
1 parent: 5a69601

File tree: 10 files changed, +6 -100 lines

content/posts/machine learning/deep learning/NLP/Gemma2+RAG/index.md
Lines changed: 1 addition & 48 deletions

@@ -8,7 +8,7 @@ menu:
 identifier: gemma2_rag
 parent: nlp
 weight: 9
-hero: mermaid-diagram.svg
+hero: mermaid-diagram-hd.png
 tags: ["Deep Learning", "NLP", "Machine Learning"]
 categories: ["NLP"]
 ---
@@ -88,11 +88,6 @@ from llama_index.vector_stores.faiss import FaissVectorStore
 import faiss
 ```
-/opt/conda/lib/python3.10/site-packages/pydantic/_internal/_fields.py:161: UserWarning: Field "model_id" has conflict with protected namespace "model_".
-You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
-warnings.warn(

 ## Data Loading
 * Use *SimpleDirectoryReader* from llama_index.
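The pydantic warning deleted in the hunk above suggests its own fix (`model_config['protected_namespaces'] = ()`); when editing the model class is not an option, the warning can also be filtered at runtime. A minimal standard-library sketch, using a stand-in function that raises the same `UserWarning` (the real one fires inside pydantic at import time):

```python
import warnings

def emit_pydantic_style_warning():
    # Stand-in for the import-time UserWarning shown in the deleted output.
    warnings.warn(
        'Field "model_id" has conflict with protected namespace "model_".',
        UserWarning,
    )

# Without a filter, the warning is recorded.
with warnings.catch_warnings(record=True) as before:
    warnings.simplefilter("always")
    emit_pydantic_style_warning()

# An "ignore" filter keyed on the message prefix suppresses it.
with warnings.catch_warnings(record=True) as after:
    warnings.simplefilter("always")
    warnings.filterwarnings("ignore", message='Field "model_id" has conflict')
    emit_pydantic_style_warning()

print(len(before), len(after))  # prints "1 0"
```

The `message` argument is a regex matched against the start of the warning text, so the prefix alone is enough to target this one warning without muting others.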
@@ -114,48 +109,6 @@ embed_model = HuggingFaceEmbedding(model_name="sentence-transformers/all-MiniLM-
 ```

-modules.json: 0%| | 0.00/349 [00:00<?, ?B/s]
-config_sentence_transformers.json: 0%| | 0.00/116 [00:00<?, ?B/s]
-README.md: 0%| | 0.00/10.7k [00:00<?, ?B/s]
-sentence_bert_config.json: 0%| | 0.00/53.0 [00:00<?, ?B/s]
-config.json: 0%| | 0.00/612 [00:00<?, ?B/s]
-model.safetensors: 0%| | 0.00/90.9M [00:00<?, ?B/s]
-tokenizer_config.json: 0%| | 0.00/350 [00:00<?, ?B/s]
-vocab.txt: 0%| | 0.00/232k [00:00<?, ?B/s]
-tokenizer.json: 0%| | 0.00/466k [00:00<?, ?B/s]
-special_tokens_map.json: 0%| | 0.00/112 [00:00<?, ?B/s]
-1_Pooling/config.json: 0%| | 0.00/190 [00:00<?, ?B/s]

 ## 4. Language Model Setup and Loading
 * It uses the "google/gemma-2-9b-it" model, a powerful instruction-tuned language model.
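The embedding model and FAISS store touched by this diff work together as a nearest-neighbour search. The core idea can be sketched without the heavy dependencies; this is a brute-force cosine-similarity search over invented toy vectors and document IDs, standing in for what the FAISS index does at scale:

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def top_k(query, index, k=2):
    # index: list of (doc_id, embedding) pairs; rank all docs by similarity.
    ranked = sorted(index, key=lambda pair: cosine(query, pair[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

index = [("doc_a", [1.0, 0.0]), ("doc_b", [0.0, 1.0]), ("doc_c", [0.7, 0.7])]
print(top_k([1.0, 0.1], index))  # ['doc_a', 'doc_c']
```

In the post itself, the query and document vectors come from the all-MiniLM-L6-v2 embedding model, and FAISS replaces the linear scan with an efficient index.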

public/index.json

Lines changed: 1 addition & 1 deletion
Large diffs are not rendered by default.

public/posts/index.html
Lines changed: 1 addition & 1 deletion

@@ -480,7 +480,7 @@
 <div class="card">
 <div class="card-head">
 <a href="/posts/machine-learning/deep-learning/nlp/gemma2&#43;rag/" class="post-card-link">
-<img class="card-img-top" src='/posts/machine-learning/deep-learning/nlp/gemma2&#43;rag/mermaid-diagram.svg' alt="Hero Image">
+<img class="card-img-top" src='/posts/machine-learning/deep-learning/nlp/gemma2&#43;rag/mermaid-diagram-hd.png' alt="Hero Image">
 </a>
 </div>
 <div class="card-body">

public/posts/machine-learning/deep-learning/nlp/gemma2+rag/index.html
Lines changed: 3 additions & 50 deletions
@@ -491,7 +491,7 @@
 <div class="content">
 <div class="container p-0 read-area">

-<div class="hero-area col-sm-12" id="hero-area" style='background-image: url(/posts/machine-learning/deep-learning/nlp/gemma2&#43;rag/mermaid-diagram.svg);'>
+<div class="hero-area col-sm-12" id="hero-area" style='background-image: url(/posts/machine-learning/deep-learning/nlp/gemma2&#43;rag/mermaid-diagram-hd.png);'>
 </div>

@@ -575,12 +575,7 @@ <h2 id="2-setup-and-import">2. Setup and Import</h2>
 </span></span><span style="display:flex;"><span>
 </span></span><span style="display:flex;"><span><span style="color:#f92672">from</span> llama_index.vector_stores.faiss <span style="color:#f92672">import</span> FaissVectorStore
 </span></span><span style="display:flex;"><span><span style="color:#f92672">import</span> faiss
-</span></span></code></pre></div><pre><code>/opt/conda/lib/python3.10/site-packages/pydantic/_internal/_fields.py:161: UserWarning: Field &quot;model_id&quot; has conflict with protected namespace &quot;model_&quot;.
-You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
-warnings.warn(
-</code></pre>
-<h2 id="data-loading">Data Loading</h2>
+</span></span></code></pre></div><h2 id="data-loading">Data Loading</h2>
 <ul>
 <li>Use <em>SimpleDirectoryReader</em> from llama_index.</li>
 </ul>
@@ -593,49 +588,7 @@ <h2 id="data-loading">Data Loading</h2>
 </ul>
 <div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#75715e"># Load embedding model</span>
 </span></span><span style="display:flex;"><span>embed_model <span style="color:#f92672">=</span> HuggingFaceEmbedding(model_name<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;sentence-transformers/all-MiniLM-L6-v2&#34;</span>)
-</span></span></code></pre></div><pre><code>modules.json: 0%| | 0.00/349 [00:00&lt;?, ?B/s]
-config_sentence_transformers.json: 0%| | 0.00/116 [00:00&lt;?, ?B/s]
-README.md: 0%| | 0.00/10.7k [00:00&lt;?, ?B/s]
-sentence_bert_config.json: 0%| | 0.00/53.0 [00:00&lt;?, ?B/s]
-config.json: 0%| | 0.00/612 [00:00&lt;?, ?B/s]
-model.safetensors: 0%| | 0.00/90.9M [00:00&lt;?, ?B/s]
-tokenizer_config.json: 0%| | 0.00/350 [00:00&lt;?, ?B/s]
-vocab.txt: 0%| | 0.00/232k [00:00&lt;?, ?B/s]
-tokenizer.json: 0%| | 0.00/466k [00:00&lt;?, ?B/s]
-special_tokens_map.json: 0%| | 0.00/112 [00:00&lt;?, ?B/s]
-1_Pooling/config.json: 0%| | 0.00/190 [00:00&lt;?, ?B/s]
-</code></pre>
-<h2 id="4-language-model-setup-and-loading">4. Language Model Setup and Loading</h2>
+</span></span></code></pre></div><h2 id="4-language-model-setup-and-loading">4. Language Model Setup and Loading</h2>
 <ul>
 <li>It uses the &ldquo;google/gemma-2-9b-it&rdquo; model, a powerful instruction-tuned language model.</li>
 <li>It configures 8-bit quantization to reduce memory usage</li>
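The 8-bit quantization mentioned in the bullet above is handled by the transformers/bitsandbytes stack in the post itself; the idea it relies on can be sketched in plain Python. This is a toy absmax scheme over a short invented weight list, not the library's actual implementation:

```python
def quantize_int8(weights):
    # Absmax quantization: map floats onto the signed 8-bit range [-127, 127].
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize_int8(quantized, scale):
    # Approximate reconstruction; error is at most half a quantization step.
    return [q * scale for q in quantized]

weights = [0.12, -0.5, 0.33, 1.0]
quantized, scale = quantize_int8(weights)
restored = dequantize_int8(quantized, scale)

assert all(-127 <= q <= 127 for q in quantized)  # each value fits in one byte
assert all(abs(r - w) <= scale / 2 + 1e-9        # bounded reconstruction error
           for r, w in zip(restored, weights))
```

Storing each weight in one byte instead of four is where the memory saving comes from; bitsandbytes refines this with per-block scales and higher-precision handling of outliers.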
