How to Train Artificial Intelligence:
A Sneak Peek into the Process 

Artificial intelligence can be a difficult concept for many of us to grasp. There are so many aspects to what might be possible with AI, and its potential applications are so far-reaching, that it can feel like an infinite number of options – which is hard for any brain to conceptualize! 

In this blog post, we want to get specific and detailed about some of what goes into training artificial intelligence. Specifically, we will break down some of the early stages of our own process for building and developing models. This is by no means a comprehensive look at everything that goes into the process, but we hope it gives a glimpse into one very important aspect of it. 

The first step in training AI is to collect data – and we mean lots of it! Here at Resua, one of the prominent ideas for the models we want to build is, “Give us 200 examples of a task, and we can train AI to do it.” 

Yes, 200 examples – we did say it was a lot of data. 

Now, what kind of data? 

Let’s get into an example – say we want to train artificial intelligence to read through an article and give us a summary of it so that we don’t need to read the entire thing. What we would need to feed in would be 200 examples of: 

  1. The article 
  2. The summary 

We would want a wide range of topics, types of articles, and writers. The more examples we can feed into the AI, the better the results will be because it will have plenty to learn from in order to complete the task on its own further down the road. 
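To make the shape of this data concrete, here is a minimal sketch of how such article–summary pairs might be stored. The field names and the JSON Lines file format are our own choices for illustration, not part of any particular training pipeline:

```python
import json

# A hypothetical slice of the dataset: each training example pairs
# the full article text with a human-written summary.
examples = [
    {"article": "Full text of article #1...", "summary": "Human summary of article #1."},
    {"article": "Full text of article #2...", "summary": "Human summary of article #2."},
    # ...and so on, up to 200+ examples
]

# JSON Lines is a common storage format for this kind of data:
# one example per line, easy to append to and stream from.
with open("summaries.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Reading it back gives us the same list of dicts.
with open("summaries.jsonl") as f:
    loaded = [json.loads(line) for line in f]
```

Keeping one example per line makes it painless to keep appending new pairs until the dataset reaches the 200-example mark.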

Now, you might be thinking, “No way I’d want to summarize 200 articles.” Yes, that does sound like an awful lot of work. 

Here is where things get fun! 

The Process

For us, this process involved using the pre-existing artificial intelligence model GPT-3. The GPT-3 transformer was built to generate large amounts of realistic, natural-sounding text based on small inputs. GPT-3 was developed by OpenAI and, with 175 billion parameters, was the largest neural network language model at its release – more than 100 times the size of its predecessor, GPT-2. 

The applications of this tool are far-reaching and are changing the game in many ways. GPT-3 has already been used to create better customer service chatbot experiences, to generate articles, to write stories, to build reports, to answer questions, to develop better speech recognition and translation models, and so much more. Not only can it generate realistic text in any language, it can even go beyond human language to generate things like programming code. 

One can access GPT-3 through OpenAI, or, as in our case, use AI21, whose comparable Jurassic-1 models power its Studio. The Studio allows you to play around with pre-existing examples, or to use them to build your own tools. 

As you can see – there are a number of examples listed to play around with, including customer service bots, Twitter marketing, ad copy, article titles, and so on. 

For our purposes, we would insert two examples into the model and then test whether GPT-3 can replicate what we want based on those examples. We decide the parameters and how to input the information, using some kind of key or stop code between the examples so the model knows where the breaks in the information are. For these examples, we will use two hash marks (##) as the stop code to make the pattern clear to the machine. 

This might look like:

Example #1
Input Headline: [Article #1 Headline]
Article: [Full Article #1 Text in one big paragraph]##
Output Summary: [Summary of Article #1, written by a human]##
Example #2
Input Headline: [Article #2 Headline]
Article: [Full Article #2 Text in one big paragraph]##
Output Summary: [Summary of Article #2 written by a human]##
Example #3
Input Headline: [Article #3 Headline]
Article: [Full Article #3 Text in one big paragraph]##

We would put two examples in and then input the headline and text of a third article without filling in the summary, in order to test what the AI would come up with – exactly the structure sketched above:

Obviously, we would put actual article text in there – for now, we just want to show the basic structure of what goes in: two examples of what we want (the article and the summary), then a third article, leaving it to the AI to fill in the summary. With everything filled in, we would set some parameters (which we will explain in more detail later) and hit “generate” to see what we get! 
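The few-shot prompt above is easy to assemble programmatically once the examples live in a dataset. Here is a sketch; the function name and tuple layout are our own, chosen to mirror the template:

```python
# Assemble the few-shot prompt from example (headline, article, summary)
# tuples, using "##" as the stop code between fields so the model can
# see where each piece of information ends.

def build_prompt(examples, new_headline, new_article):
    parts = []
    for i, (headline, article, summary) in enumerate(examples, start=1):
        parts.append(f"Example #{i}")
        parts.append(f"Input Headline: {headline}")
        parts.append(f"Article: {article}##")
        parts.append(f"Output Summary: {summary}##")
    # The final example deliberately omits the summary -- that is the
    # part we want the model to generate.
    parts.append(f"Example #{len(examples) + 1}")
    parts.append(f"Input Headline: {new_headline}")
    parts.append(f"Article: {new_article}##")
    parts.append("Output Summary:")
    return "\n".join(parts)

prompt = build_prompt(
    [("Headline 1", "Article one text.", "Summary one."),
     ("Headline 2", "Article two text.", "Summary two.")],
    "Headline 3", "Article three text.",
)
```

Ending the prompt right after "Output Summary:" is what cues the model to continue the pattern and write the missing summary itself.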

If things all work out well and GPT-3 produces a good summary of the article we give it, then we’ve created one more piece of data and we know that the structure works to teach the AI what we want.  

Even with this bare-bones structure, the AI can see the pattern and produce a result: when we hit “generate”, it filled in the rest. 

Example

Now let’s get into a real example to better demonstrate how this process works.  

We will change up the prompt for what we want the AI to do for us. Here, let’s have the AI generate some headlines based on an article.  

What we’ve done here is taken two articles and drafted them in a clear structure: 

As you can see in this screenshot, there are a number of parameters we can set differently and play with. 

  • Model: Options for what size AI we want to be using and how much information we want it to have access to. 
    • J1-large: 7.5 billion parameters 
    • J1-grande: 17 billion parameters 
    • J1-jumbo: 178 billion parameters 
  • Max completion length: How many tokens – chunks of text roughly the size of a short word – can be used across the prompt and the response combined. The maximum is 2048. If we wanted a long response, we would leave plenty of tokens available for it. 
  • Temperature: Controls the sampling randomness – the higher the temperature, the more creative the responses will be. 
  • Top P: Controls how much of the probability distribution the model samples from – a lower Top P restricts it to only the best and most likely options, creating more stable and repetitive results, while a higher Top P lets rarer options through. 
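The interplay of Temperature and Top P is easier to see with numbers. Below is a toy sketch of the sampling math – the candidate words and their scores are invented purely for illustration:

```python
import math

# Made-up model scores (logits) for four candidate next words.
logits = {"the": 4.0, "a": 3.0, "crocodile": 2.0, "xylophone": -1.0}

def softmax_with_temperature(logits, temperature):
    # Higher temperature flattens the distribution (more creative);
    # lower temperature sharpens it (more predictable).
    exps = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def top_p_filter(probs, top_p):
    # Keep the smallest set of words whose probabilities sum to at
    # least top_p; everything outside that set is never sampled.
    kept, running = {}, 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[tok] = p
        running += p
        if running >= top_p:
            break
    return kept

probs = softmax_with_temperature(logits, temperature=0.8)
candidates = top_p_filter(probs, top_p=0.98)
```

With these toy numbers, a Top P of 0.98 still drops the long-tail option ("xylophone"), while a much lower Top P would leave only the single most likely word – which is why low Top P feels stable and repetitive.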

For our example, we’ll use the largest model available. We will also set the max completion length – we could constrain it, since these are short headlines we are asking it to generate, or we can just set it to the maximum and see what gets generated. 

Playing with the Temperature and Top P is a bit more of an intuitive art, and something we may want to try a few different ways to see which settings generate the best results. For this situation, we set the Temperature to 0.8, giving it a bit of creative freedom, and we set the Top P to 0.98, trimming away only the least likely options so the headlines stay fairly coherent. It can be fun to give the AI way more creative freedom, but for the purposes of the example and making sure our prompt works, we will restrain it somewhat.
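Outside the Studio playground, the same settings map onto a plain API request. The endpoint path and field names below reflect AI21's Studio documentation at the time – treat them as assumptions and check the current docs before relying on them:

```python
import json
import urllib.request

API_KEY = "YOUR_AI21_API_KEY"  # placeholder, not a real key

# The same parameters we set in the playground, as a request payload.
payload = {
    "prompt": "...the few-shot prompt with the ## stop codes...",
    "maxTokens": 2048,          # max completion length
    "temperature": 0.8,         # a bit of creative freedom
    "topP": 0.98,               # trim only the least likely options
    "stopSequences": ["##"],    # our chosen stop code
}

def generate(payload):
    # POST the payload to the largest model's completion endpoint.
    req = urllib.request.Request(
        "https://api.ai21.com/studio/v1/j1-jumbo/complete",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# generate(payload) would return the model's completions; we don't call
# it here, since it requires a real API key.
```

Listing "##" as a stop sequence tells the model to halt as soon as it emits our stop code, so the completion ends cleanly after one summary or headline.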

For our example, I used these two articles as our samples:  

A motorcycle tire stuck around a crocodile’s neck for 6 years is finally removed 

Indian students block roads as row over hijab in schools mounts 

And the test article will be this fun one: 

Mariah Carey calls out ‘diva’ Meghan Markle: ‘Don’t act like’ you aren’t 

As you can see above, it was able to generate a few different options for some headlines. Some are a bit similar and repetitive, but it was able to do what we hoped! 

Doing this has enabled us to confirm that the process works and can be replicated. Now, going back to the beginning, we would want at least 200 examples ready before we even consider training the AI on this skill set. The more examples the better, but 200 is a good place to start. 

This can be a time-consuming task, but these examples show how we can use the existing technology to save ourselves a little bit of time – not needing a human to come up with 200 examples and using AI to make the process more efficient. We could use this process and tool to generate 200 examples much faster. Then, we would take all of those pieces of data and use them to train AI to repeat that task, hopefully with a lot of good variation and natural-sounding language.

Only the Beginning

We hope that this has been an interesting sneak peek into some of the very first steps that go into the process of training AI. As you can tell, this is a complex process that cannot be fully covered in one article, with many more technical steps ahead. That being said, we hope that breaking the process down in this way has helped to give you a better understanding and overview of what goes into this work and how a machine can be trained to do a certain task! 

Here at Resua, we believe that we can develop all sorts of tools with this technology that can ultimately save all of us humans a lot of time. We imagine a world where AI can take care of tasks that can be automated, leaving us more time to focus on the aspects of our work and our lives that will always require that human touch. It’s not about machines doing the work for us, but assisting us with the more monotonous, time-consuming tasks, so that we can live more freely and focus on what truly matters. 

If you have any questions about this process, please reach out and let us know! We’d love to hear from you. 
