
Neural networks: what they are and where they are used

13.03.2023


Have you seen the movie Terminator? Remember the imminent rise of the machines? If it happens, you'll blame the scientists who are developing neural networks today. But as long as your fridge isn't up to anything, read our article. Today we'll cover the role of neural networks in modern technology in simple terms.

What it is

A neural network (hereinafter referred to as an NN) is a mathematical model that learns on its own from the data it receives. It works like the human brain, but instead of biological neurons it uses hardware and software.

The human brain contains around 86 billion working neurons. If we think of a neuron as an electrical element, it has many inputs and a single output, and the value of that output is determined by the combination of input signals.
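To make the idea more tangible, here is a minimal sketch of a single artificial neuron in Python. The specific inputs, weights, bias and sigmoid activation below are illustrative assumptions, not something taken from the article; the point is simply that many input signals are combined into one output value.

```python
import math

def neuron_output(inputs, weights, bias):
    """A single artificial neuron: many inputs, one output.

    The inputs are combined into a weighted sum, and a sigmoid
    "activation" squashes that sum into the range (0, 1).
    """
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-weighted_sum))

# Example: three input signals with illustrative weights and bias
print(neuron_output([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], bias=0.1))
```

A real neural network stacks thousands or millions of such units in layers, and "learning" means gradually adjusting the weights so that the outputs better match the desired results.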


Historical background

The idea of neural networks emerged as scientists gained knowledge about how the brain works and as neurophysiology developed. The concept of a neural network was first described in 1943. At that time, scientists were considering implementing NN equivalents using vacuum tubes. By the late 1940s, the first learning algorithm had already been proposed.

In the 1950s, the famous mathematician Alan Turing suggested that within 50 years machines would learn to think and would acquire enough computing power to deceive people with the plausibility of their speech. He was right, but more on that later :)

By the 1980s, many scientists had developed a serious interest in trainable networks, but before the advent of the Internet, working with huge databases had to be done largely by hand.

In 2007, researchers at the University of Toronto created and implemented deep learning algorithms that are now used in many modern systems. For example, it's thanks to this development that our smartphone cameras instantly find and focus on faces, search engines offer us the most relevant results, and websites show us targeted advertisements.

NNs today

NNs differ from conventional software in their ability to learn. Learning does not happen only when the network is first set up; it continues throughout its operation, so the results are often based on data acquired long after deployment.

A prime example is Google Translate. When translating, users can pick the most appropriate word from the suggested options or propose their own. The software remembers these choices and later uses the data when showing translations to other users.

Other networks may use online libraries and databases as sources of information.

Turing was right: on many websites today you can chat with chatbots that will answer your questions. These are usually rather primitive bots whose sole task is to advise potential customers.

More serious designs are showcased in Turing Test competitions. A few years ago, for example, a chatbot convinced 47% of the contest judges that it was a real fourteen-year-old girl.

In 2016, a programme created by Japanese developers wrote a book called The Day a Computer Writes a Novel. It received positive reviews and made it to the final round of a literary competition alongside books written by humans.

And, of course, it's impossible not to mention OpenAI's much-discussed development, ChatGPT: an artificial intelligence (AI) chatbot that can interact with the user in dialogue mode. Launched in November 2022, the service caused a furore with its extensive capabilities, which include creating texts in a variety of formats (from social media posts and email newsletters to website articles and coursework).

But the chatbot has its shortcomings, too. First, it generates information based on Internet content only up to 2021, so it cannot access more recent facts. Second, it cannot distinguish reliable sources from unreliable ones, so it can misinform the person it's talking to.

Despite the programme's imperfections, in just two months the worldwide audience of active ChatGPT users grew to a record 100 million people. Until February 18, 2023, the service was unavailable to residents of Ukraine, but that restriction has now been lifted. So if you haven't yet had a chance to chat with the bot, nothing prevents you from doing so. Keep in mind, though, that it's better to phrase ChatGPT requests in English, because the AI's Ukrainian vocabulary is still rather limited.

In addition to generating text content, NNs can also generate and visualise images. NVIDIA, one of the giants in graphics processor development, created the intelligent application GauGAN. It can turn primitive sketches into almost lifelike paintings.


Another popular artificial intelligence programme that allows pictures to be generated from natural language descriptions is called Midjourney. For example, below is an image of the computers of the future that it has generated.


Today, neural networks are widely used in various industries to perform tasks such as:

  • social media filtering and behavioural data analysis to develop personalised recommendations in targeted marketing;
  • financial forecasting;
  • diagnostics through the classification of medical images in healthcare institutions;
  • forecasting of power grid loads and energy requirements;
  • identification of chemical compounds, etc.

The future of NNs

The development of this industry is aimed at creating fully autonomous artificial intelligence. Swiss scientists and IBM are working on a complete working model of the human brain, which was originally scheduled to be presented back in 2022.

You will agree that self-service checkouts in supermarkets no longer surprise anyone. Although they are not even neural networks yet, installing them automates the work of people doing monotonous jobs and thus reduces the number of such jobs. Could it be that the future has quietly arrived, one in which robots can, at least partially, replace humans? And what will happen then...?

And there really is a lot to look forward to. That's the conclusion we can draw from the key trends in the development of neural networks:

  • Increasing complexity. As the development of more powerful hardware and software for training neural networks continues, it will become possible to build more complex models. This will make it possible to solve more complex problems in high-precision fields such as computer vision, robotics, etc.
  • More effective learning. Training neural networks is a computationally intensive task that requires significant resources. And researchers are not resting on their laurels, but continue to work on improving the efficiency of learning algorithms and developing new techniques to accelerate the process.
  • Better interpretability. One of the problems with neural networks is that they can be difficult to interpret. Researchers are working to develop methods to make NNs more transparent. This will allow users to better understand how neural networks make their decisions.
  • Integration with other technologies. Neural networks are just one part of the AI technology ecosystem. Over time, closer integration of neural networks with other technologies (natural language processing, speech recognition, machine vision) is expected.
  • Wider adoption. Humans already use neural networks in a wide range of applications, from human speech and image recognition to autonomous vehicles. And as the technology continues to improve and become more widely available, we can expect even wider adoption in a variety of industries.

Summary

In conclusion, I would like to add that neural networks should not be feared or treated with hostility. The winners will be those who are able to make friends with advanced technologies and apply them for the benefit of their work and life in general. That is why it’s better not to put off getting to know neural networks, but to start mastering the tools that are new to you right now.
