
Commit 96b71ff

Revert "Update 01-llms.md (#28)" (#45)
This reverts commit c8a60f3.
1 parent c8a60f3 commit 96b71ff


2 files changed: +8 -8 lines changed


sessions/01-llms.md

+3 -3
@@ -20,13 +20,13 @@ Hey everyone, in this session, we'll review the essential things that you need t

First, what's an LLM? LLM stands for Large Language Model. It's a deep neural network that's trained on a huge amount of data that's able to perform various tasks using natural language. Its main capability is to recognize and generate text.

-Next, how do you create an LLM? First, you take as many data as you can from various sources, from the web, a website like Wikipedia, from social networks, from books, from open source repositories, basically everything that you can find, and you feed it to the model. This is a very expensive task that costs a lot of money, and this is how you get what we call the fundamental model. Next, you can fine-tune the model using specialized domain-specific data, and you give it more weight. It's a cheaper task because you use less amount of data, but that's higher quality. You end up with a fine-tuned LLM that may be more useful for a specific use case. To improve the quality of the results, you reinforce the training by having humans ask questions and evaluate the results. This is a long and complicated process, but ultimately, this is what allows you to end up with an LLM that's better suited to follow instructions and basically gets the results that you want.
+Next, how do you create an LLM? First, you take as many data as you can from various sources, from the web, a website like Wikipedia, from social networks, from books, from open source repositories, basically everything that you can find, and you feed it to the model. This is a very expensive task that costs a lot of money, and this is how you get what we call the fundamental model. Next, you can fine-tune the model using specialized domain-specific data, and you give it more weight. It's a cheaper task because you use less amount of data, but that's higher quality. You end up with a fine-tuned LLM that may be more useful for a specific use case. To improve the quality of the results, you reinforce the training by having humans ask questions and evaluate the results. This is a long and complicated process, but ultimately, this is what allows you to end up with an LLM that's better suited to follow instruction and basically gets the results that you want.

Now, from a practical standpoint, there are two model types. And just to set things straight, LLMs do not think. At their core, what they do is just text completion. So we send some text as the input for the model, and what you'll get as a result is the completion of the text, just like in a regular IDE when you try to get auto-completion. Let's see a little demo. So here, I'll be using GitHub Copilot as my LLM to test the text completion features. So let's say, for example, I'm setting a command in there that says "print hello to the console". The completion that I will get is what I expect, "console.log hello world".

Now, if I want to try something different and change the input to say, "print hello to a DOM element with the ID root". Now, the completion that I will get from the model is also what I expect, "document.element by ID ...". So yeah, just by changing the input text, I get a different completion.
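For illustration, here is a minimal sketch of that comment-driven completion flow in JavaScript: the comment is the input we give the model, and the line below it is the kind of completion Copilot might suggest (the exact output can vary).

```js
// Input we type (the prompt the completion model sees):
// print hello to a DOM element with the ID root

// A completion the model might suggest:
document.getElementById("root").textContent = "Hello";
```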

-Next, we also have chat models that are tuned to follow instructions. This is most likely what you've been more familiar with. So if I go back to VS Code, we can have an example with the Copilot chat window in there. This time, I can try to create something a bit more complicated. Let's say, for example, I want to create a function that prints hello to a DOM element with the ID root. And here, I get the function that I need as a result, along with some explanation. So I gave an instruction, and the chat model generated the answer for me.
+Next, we also have chat models that are tuned to follow instructions. This is most likely what you've been more familiar with. So if I go back to VS Code, we can have an example with the Copilot chat window in there. This time, I can try to create something a bit more complicated. Let's say, for example, I want to create a function that prints hello to a DOM element with the ID root. And here, I get the function that I need as a result, along with some explanation. So I give an instruction, and the chat model generated for me the answer.

Chat models integrate special tokens to mark the specific parts of the prompt, as you can see in there with the im_start and im_end tokens. But what's a token?
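As a rough sketch, this is how such a chat prompt can be serialized with those special tokens; these are ChatML-style markers, and the exact token names and layout vary by model and provider.

```js
// ChatML-style serialization of a chat prompt: each message is wrapped
// in <|im_start|> / <|im_end|> markers before being sent to the model.
const chatPrompt = [
  "<|im_start|>system",
  "You are a helpful coding assistant.",
  "<|im_end|>",
  "<|im_start|>user",
  "Create a function that prints hello to a DOM element with the ID root.",
  "<|im_end|>",
  "<|im_start|>assistant",
].join("\n");
```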

@@ -36,7 +36,7 @@ Tokens are in fact just numbers, and that's great because AI models can only wor

Now, let's talk a bit about the limits. LLMs currently only have a limited amount of tokens that you can use to define the context of your prompts. This is basically how much text that you can use as input. AI models commonly use 2,000 to 4,000 tokens as their context window, which fits about 3,000 words or said otherwise, it's about six pages of documents. We also have newer models that can use more than 100,000 tokens, which means you can fit entire books in there. But you have to keep in mind that the less context that you use, the more attention that you will get, which means you will get more accuracy with your results.

-Another limit that you have to keep in mind is while these AI models are powerful to achieve many tasks, just like humans, they have biases. Statistical biases, to be precise. Because the models have been trained on content created by humans, they can sometimes exhibit the same biases as for the training content. And that's also why you should not rely on LLMs for any critical judgment or decision without taking some mitigation measures.
+Another limit that you have to keep in mind is while these AI models are powerful to achieve many tasks, just like humans, they have biases. Statistical biases, to be precise. Because the models have been trained on content created by humans, it can sometimes exhibit the same biases as for the training contents. And that's also why you should not rely on LLMs for any critical judgment or decision without taking some mitigation measures.

It's also why at Microsoft, we've been deeply committed to responsible AI. Responsible AI is a framework for building AI systems according to six principles. Fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. For Microsoft, these principles are the cornerstone of a responsible and trustworthy approach to AI, especially as intelligent technology becomes more and more present in products and services that people use every day. And this is not only for us. It's something that you also have to keep in mind when using AI models to build your own applications.

sessions/02-prompt-engineering.md

+5 -5
@@ -18,17 +18,17 @@ All slides and code samples: https://aka.ms/genai-js/content

In this session, we'll talk about practical prompt engineering techniques that you can use in your apps to get more effective prompts.

-Let's start by defining what exactly prompt engineering is. Prompt engineering is the process of designing and optimizing prompts to get better results from AI models. You'll often hear some fancy names when talking about prompt engineering techniques, but behind the fancy names, there are simple concepts that you can easily understand and apply.
+Let's start by defining what exactly is prompt engineering. Prompt engineering is the process of designing and optimizing prompts to get better results from AI models. You'll often hear some fancy names when talking about prompt engineering techniques, but behind the fancy names, there are simple concepts that you can easily understand and apply.

-Let's start with zero-shot. It simply means that you can generate results without providing any specific examples, just using the general training data of the model. Here I'm trying to translate text from English to French. "Hello world" in French is "bonjour le monde." Because there were enough texts that were provided in the training material of the model, both in French and English, I don't need anything more to get the result that I want.
+Let's start with zero-shot. It simply means that you can generate results without providing any specific example, just using the general training data of the model. Here I'm trying to translate text from English to French. "Hello world" in French is "bonjour le monde." Because there were enough texts that were provided in the training material of the model, both in French and English, I don't need anything more to get the result that I want.
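A minimal sketch of such a zero-shot prompt, with no examples at all, just the instruction:

```js
// Zero-shot: the model relies only on its training data, no examples needed.
const zeroShotPrompt = 'Translate the following text to French: "Hello world"';
// Expected completion: "Bonjour le monde"
```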

Next, we have few-shot learning. By adding examples in your prompt context, you can condition the output to be what you want it to be. For example, here I have a phrase in the French language followed by a colon and the language name in English. You provide three examples and then a new phrase followed by a colon as input. This will hint the model into giving the result that we want. Okay, this phrase we provided is in Swedish.
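A minimal sketch of a few-shot prompt built along those lines; the example phrases here are made up for illustration.

```js
// Few-shot: three labeled examples, then the new phrase followed by a colon.
// The pattern conditions the model to answer with just the language name.
const fewShotPrompt = [
  "Bonjour le monde: French",
  "Ciao mondo: Italian",
  "Hallo Welt: German",
  "Hej världen:",
].join("\n");
// Expected completion: "Swedish"
```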

As we've seen in the previous video, LLMs do not think: they just complete text. So when you're asking a question that requires some reasoning, very often they'll get it wrong. Like with this simple problem here: "When I was six years old, my sister was twice my age. Now I'm 30. How old is my sister?" And yeah, most of the time, if you're trying to get a straight answer, it will be wrong.

But we can use a technique called chain of thought to force the model to simulate human-like reasoning. If I add this simple phrase, "let's think step by step," in the prompt, this time it will make the model decompose the steps needed to get to the results. By doing that, it will augment the context used to get to the answer. And you're more likely to get the correct results.
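A minimal sketch of that chain-of-thought nudge applied to the riddle above:

```js
// Chain of thought: appending "Let's think step by step" makes the model
// spell out intermediate reasoning before giving the final answer.
const question =
  "When I was six years old, my sister was twice my age. Now I'm 30. How old is my sister?";
const chainOfThoughtPrompt = `${question}\nLet's think step by step.`;
// Expected reasoning: at age 6 the sister was 12, so she is 6 years older;
// now that I'm 30, she is 36 (not 60, the usual wrong answer).
```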

-Now let's cover a few tips that you can use to improve your prompts. The most important one is to always be as clear as possible when writing your prompts. For example, if you want to get a short product description, add as much detail as you can regarding what you want it to say, like saying here that the bottle is 100% recycled and that it comes with no dyes. The more precise your instructions are, the more accurate the result is.
+Now let's cover a few tips that you can use to improve your prompts. The most important one is to always be as clear as possible when writing your prompts. For example, if you want to get a short product description, add as much detail as you can regarding what you want it to say, like saying here that the bottle is 100% recycled and that it comes with no dyes. The more precise your instructions are, the more accurate is the result.
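As a small sketch, here is the difference between a vague prompt and one that spells out those details; the exact wording is just illustrative.

```js
// Vague: leaves the model to invent the selling points.
const vaguePrompt = "Write a short product description for a water bottle.";

// Specific: pins down exactly what the description should mention.
const specificPrompt =
  "Write a short product description for a water bottle. " +
  "Mention that it is made from 100% recycled materials and contains no dyes.";
```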

When designing a prompt, try to always think about the context to set it right. For example, if you want to extract the key points from an email, define what topics you're interested in so the model will know what to prioritize. For example, here I can say that I'm interested in AI webinars and submission deadlines. Sometimes you do not need complex descriptions to get what you want. Instead, you can use simple cues to lead the model in the right direction. Like here, adding the keyword SELECT will make the model generate a SQL query, even though we did not specify the query language anywhere.
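A minimal sketch of that last cue technique; the table name and columns are made up for illustration.

```js
// The trailing "SELECT" cue steers the model toward completing a SQL query,
// even though the query language is never named in the instructions.
const cuePrompt = [
  "Write a query that returns the ten most recent orders.",
  "-- Table: orders(id, customer_id, total, created_at)",
  "SELECT",
].join("\n");
```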

@@ -50,6 +50,6 @@ And you know what? Now we could even make it more personal, if we could include

But we would like to consume the results in a web UI. So it will be best to get the results as JSON. So what I can do is that I can hint at the format I want in there, without even the need to say that I want JSON. As you can see, I'm just putting the form, the shape of the answer that I want. Let's try it again. And yeah, perfect. I get both the product name and the answer to show in the UI. And now you've seen how you can tweak a prompt to get the results you need from AI models.
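A minimal sketch of that shape-hinting trick; the field names are just placeholders.

```js
// Hinting the output format by showing the JSON shape we expect;
// the model tends to fill in the structure instead of writing free text.
const formatHintedPrompt = [
  "Suggest a product name and a short answer to show the customer.",
  "Answer using this format:",
  '{ "productName": "", "answer": "" }',
].join("\n");
```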

-Let's quickly recap the tips we've just seen to get more effective prompts. First, be clear in the instructions that you provide. When you can, provide additional context to make sure that the AI model has everything it needs to better understand your request. Use cues to tip the model into the direction you want the answers to be. You can also define the output format explicitly if you have special needs. Sometimes using examples is the best way to get the result that you want. And if you have complex tasks, you can ask explicitly to break them down step by step. Note that you don't need to use all of these all the time. Only use what makes sense for your use case.
+Let's quickly recap the tips we've just seen to get more effective prompts. First, be clear in the instructions that you provide. When you can, provide additional context to make sure that the AI model has everything it needs to better understand your request. Use cues to tip the model into the direction you want the answers to be. You can also define the output format explicitly if you have special needs. Sometimes using examples is the best way to get the result that you want. And if you have complex tasks, you can ask explicitly to break it down step by step. Note that you don't need to use all of these all the time. Only use what makes sense for your use case.

-In the next session, we'll talk about RAG, a more complex prompt engineering technique to improve the AI'z accuracy and reliability.
+In the next session, we'll talk about RAG, a more complex prompt engineering technique to improve the AI accuracy and reliability.
