
Artificial Intelligence Explained - what it is and what it does

Content
– Background, History and Definition of AI
– Goals of Artificial Intelligence - Weak AI versus Strong AI
– The Four Types of AI
– Subsets of Artificial Intelligence
– Examples of AI
– AI Programming Languages and Programming Frameworks
– AI in Nanotechnology
Few terms are as poorly understood as artificial intelligence, which has given rise to so many questions and debates that no singular definition of the field is universally accepted. Arriving at such a definition is further complicated, if not made impossible, by the fact that there isn't even a unanimously agreed definition of (human) intelligence or what constitutes 'thinking'.

Background, History and Definition of AI

Research into the science of Artificial Intelligence (AI) started in the 1950s, stimulated by the invention of computers. In 1950, Alan Turing published his paper Computing Machinery and Intelligence in which he proposed to consider the question “Can machines think?” and then went on to discuss how to build intelligent machines and how to test their intelligence.
He proposed an imitation game, which later became known as the Turing Test, in which a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test results do not depend on the machine's ability to give correct answers to questions, only how closely its answers resemble those a human would give.
Developed from 1964 to 1966 by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory, ELIZA was a natural language processing program that achieved cult status as one of the first chatbots and one of the first programs capable of attempting the Turing test.
The very 1960s ELIZA interface.

Where it all began

Six years after Turing's paper, in 1956, the Dartmouth Summer Research Project on Artificial Intelligence took place, an event widely considered to be the founding moment of artificial intelligence as a field. The event was organized by John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon. While preparing the workshop, McCarthy coined the term Artificial Intelligence.
At that time, the scientists believed that human intelligence could be described so precisely that a machine could be made to simulate it. From the proposal for the workshop: “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”
Since then, many different definitions of AI have been proposed.

Early Days

For instance, Marvin Minsky (co-founder of the Massachusetts Institute of Technology's AI laboratory) in his book The Society of Mind defined it as “The field of research concerned with making machines do things that people consider to require intelligence.” He went on to state that there is no clear boundary between psychology and AI because the brain itself is a kind of machine.
In psychology, human intelligence is usually characterized by the combination of many diverse behaviors, not just one trait. This includes abilities to learn, form concepts, understand, apply logic and reason, including the capacities to recognize patterns, plan, innovate, solve problems, make decisions, retain information, and use language to communicate.
Research in AI has focused mainly on the same abilities; or, as Patrick Henry Winston, who succeeded Minsky as director of the MIT Artificial Intelligence Laboratory (now the MIT Computer Science and Artificial Intelligence Laboratory), put it in his book Artificial Intelligence: “AI is the study of the computations that make it possible to perceive, reason and act.”
According to this definition, AI leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.

Knowledge-based Systems

Rather than tackling each problem from scratch, another branch of AI research has sought to embody knowledge in machines: the most efficient way to solve a problem is to already know how to solve it. But this simple assumption raises several problems of its own: discovering how to acquire the knowledge that is needed, learning how to represent it, and developing processes that can exploit the knowledge effectively.
This research has led to many practical knowledge-based problem-solving systems. Some of them are called expert systems because they are based on imitating the methods of particular human experts.
In 1995, Stuart J. Russell and Peter Norvig published Artificial Intelligence: A Modern Approach, which has become one of the leading textbooks on AI. In it, they delve into four potential goals or definitions of AI, differentiating computer systems on the basis of rationality and of thinking vs. acting:
A human approach:
  • Systems that think like humans
  • Systems that act like humans
An ideal (rational) approach:
  • Systems that think rationally
  • Systems that act rationally
Alan Turing's definition would have fallen under the category of “systems that act like humans.”
More recently, Francois Chollet, an AI researcher at Google and creator of the machine-learning software library Keras, argues that intelligence is tied to a system's ability to adapt and improvise in a new environment, to generalize its knowledge and apply it to unfamiliar scenarios: “Intelligence is the efficiency with which you acquire new skills at tasks that you did not previously know about, that you did not prepare for… so intelligence is not a skill itself, it's not what you know, it's not what you can do, it's how well and how efficiently you can learn.” (Watch the full interview with him on YouTube.)
    And yet another definition from IBM postulates that “AI is the simulation of human intelligence processes by computers. These processes include learning from constantly changing data, reasoning to make sense of data and self-correction mechanisms to make decisions”.

    Methods of AI Research

    Early AI research developed into two distinct, and to some extent competing, methods – the top-down (or symbolic) approach, and the bottom-up (or connectionist) approach.
    The top-down approach seeks to replicate intelligence by analyzing cognition independent of the biological structure of the brain, in terms of the processing of symbols. In contrast, the bottom-up approach seeks to create artificial neural networks in imitation of the brain’s structure.
Both approaches face difficulties: symbolic techniques work in simplified laboratory environments but typically break down when confronted with the complexities of the real world; meanwhile, bottom-up researchers have been unable to replicate the nervous systems of even the simplest living things. Caenorhabditis elegans, a much-studied worm, has 302 neurons whose pattern of interconnections is completely mapped. Yet connectionist models have failed to mimic even this worm (source).

    Goals of artificial intelligence – Weak AI versus Strong AI

    Artificial Narrow Intelligence

Weak AI, or more fittingly Artificial Narrow Intelligence (ANI), operates within a limited context and simulates human intelligence within that narrow domain. Narrow AI is often focused on performing a single task extremely well, and while these machines may seem intelligent, they operate under far more constraints and limitations than even the most basic human intelligence.
ANI systems are already widely used commercially, for instance in personal assistants such as Siri and Alexa, expert medical diagnosis systems, stock-trading systems, Google search, image recognition software, self-driving cars, and IBM's Watson.

    Artificial General Intelligence

Strong AI, or Artificial General Intelligence (AGI), is the kind of artificial intelligence we see in the movies, like the robots from Westworld, Ex_Machina, or I, Robot. The ultimate ambition of strong AI is to produce a machine whose overall intellectual ability is indistinguishable from that of a human being. Much like a human being, an AGI system would have a self-aware consciousness with the ability to solve problems, learn, and plan for the future.

    Superintelligence

    And then there is the concept of Artificial Super Intelligence (ASI), or superintelligence, that would surpass the intelligence and ability of the human brain. Superintelligence is still entirely theoretical with no practical examples in use today.

    Cognitive Computing

A term you might hear in the context of AI is Cognitive Computing, and quite often the two terms are used interchangeably. Both refer to technologies that can perform or augment tasks, help better inform decisions, and create interactions that have traditionally required human intelligence, such as planning, reasoning from partial or uncertain information, and learning. But there are differences.
    Whereas the goal of AI systems is to solve a problem through the best possible algorithm (and not necessarily as humans would do it), cognitive systems mimic human behavior and reasoning to solve complex problems.

    The Four Types of AI

    Type 1 AI: Reactive machines

    The most basic types of AI systems are purely reactive and have the ability neither to form memories nor to use past experiences to inform current decisions. Deep Blue, IBM’s chess-playing supercomputer, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine.
Deep Blue can identify the pieces on a chess board and knows how each moves. It can make predictions about what moves might be next for it and its opponent. And it can choose the best moves from among the possibilities.
    But it doesn’t have any concept of the past, nor any memory of what has happened before. Apart from a rarely used chess-specific rule against repeating the same move three times, Deep Blue ignores everything before the present moment. All it does is look at the pieces on the chess board as it stands right now, and choose from possible next moves.
    This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn’t rely on an internal concept of the world. In a seminal paper, AI researcher Rodney Brooks argued that we should only build machines like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what is called in AI scholarship a “representation” of the world.
The current intelligent machines we marvel at either have no such concept of the world, or have a very limited and specialized one for their particular duties. The innovation in Deep Blue's design was not to broaden the range of possible moves the computer considered. Rather, the developers found a way to narrow its view, to stop pursuing some potential future moves, based on how it rated their outcomes. Without this ability, Deep Blue would have needed to be an even more powerful computer to actually beat Kasparov.
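Deep Blue's actual search code has never been published, but the classic technique behind this kind of selective look-ahead, alpha-beta pruning on top of minimax search, can be sketched in a few lines of Python (an illustrative toy; the game-specific functions moves, apply_move and evaluate are assumed to be supplied by the caller):

# Minimax search with alpha-beta pruning: stop pursuing moves that the
# opponent would never allow, based on how the outcomes are rated.
def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    if depth == 0 or not moves(state):
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for m in moves(state):
            child = apply_move(state, m)
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False,
                                         moves, apply_move, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:   # prune: this branch cannot improve the result
                break
        return value
    value = float("inf")
    for m in moves(state):
        child = apply_move(state, m)
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True,
                                     moves, apply_move, evaluate))
        beta = min(beta, value)
        if beta <= alpha:
            break
    return value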
    Similarly, Google’s AlphaGo, which has beaten top human Go experts, can’t evaluate all potential future moves either. Its analysis method is more sophisticated than Deep Blue’s, using a neural network to evaluate game developments.
    These methods do improve the ability of AI systems to play specific games better, but they can’t be easily changed or applied to other situations. These computerized imaginations have no concept of the wider world – meaning they can’t function beyond the specific tasks they’re assigned and are easily fooled.
    They can’t interactively participate in the world, the way we imagine AI systems one day might. Instead, these machines will behave exactly the same way every time they encounter the same situation. This can be very good for ensuring an AI system is trustworthy: You want your autonomous car to be a reliable driver. But it’s bad if we want machines to truly engage with, and respond to, the world. These simplest AI systems won’t ever be bored, or interested, or sad.

    Type 2 AI: Limited memory

This Type 2 class contains machines that can look into the past. Self-driving cars do some of this already. For example, they observe other cars' speed and direction. That can't be done in just one moment; it requires identifying specific objects and monitoring them over time.
    These observations are added to the self-driving cars’ preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road. They’re included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car.
    But these simple pieces of information about the past are only transient. They aren’t saved as part of the car’s library of experience it can learn from, the way human drivers compile experience over years behind the wheel.
So how can we build AI systems that build full representations, remember their experiences and learn how to handle new situations? Brooks was right in that it is very difficult to do this. Research into methods inspired by Darwinian evolution may start to make up for these shortcomings by letting machines build their own representations.

    Type 3 AI: Theory of mind

We might stop here, and call this point the important divide between the machines we have and the machines we will build in the future. However, it is better to be more specific and discuss the types of representations machines need to form, and what they need to be about.
    Machines in the next, more advanced, class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called “theory of mind” – the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior.
This capacity was crucial to how humans formed societies, because it allowed us to have social interactions. Without understanding each other's motives and intentions, and without taking into account what somebody else knows either about me or the environment, working together is at best difficult, at worst impossible.
    If AI systems are indeed ever to walk among us, they’ll have to be able to understand that each of us has thoughts and feelings and expectations for how we’ll be treated. And they’ll have to adjust their behavior accordingly.

    Type 4 AI: Self-awareness

The final step of AI development is to build systems that can form representations about themselves. Ultimately, AI researchers will have to not only understand consciousness, but build machines that have it.
    This is, in a sense, an extension of the “theory of mind” possessed by Type III artificial intelligences. Consciousness is also called “self-awareness” for a reason. (“I want that item” is a very different statement from “I know I want that item.”) Conscious beings are aware of themselves, know about their internal states, and are able to predict feelings of others. We assume someone honking behind us in traffic is angry or impatient, because that’s how we feel when we honk at others. Without a theory of mind, we could not make those sorts of inferences.
    While we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning and the ability to base decisions on past experiences. This is an important step to understand human intelligence on its own. And it is crucial if we want to design or evolve machines that are more than exceptional at classifying what they see in front of them.

    Subsets of Artificial Intelligence

    The cognitive technologies that contribute to various AI applications can broadly be summarized in several categories you might have heard about. Let’s explain them one by one:

    What are Expert Systems?

    These systems gain knowledge about a specific subject and can solve problems as accurately as a human expert on this subject. Expert systems were among the first truly successful forms of AI software. An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities.
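To make the division between knowledge base and inference engine concrete, here is a toy forward-chaining engine in Python (a minimal sketch with made-up medical facts and rules, not a real expert system):

# Knowledge base: facts plus if-then rules; the inference engine repeatedly
# fires rules whose premises are satisfied until no new facts are deduced.
facts = {"has_fever", "has_cough"}

rules = [
    ({"has_fever", "has_cough"}, "suspect_flu"),   # (premises, conclusion)
    ({"suspect_flu"}, "recommend_rest"),
]

def forward_chain(facts, rules):
    derived = True
    while derived:
        derived = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)    # deduce a new fact
                derived = True
    return facts

print(forward_chain(set(facts), rules))
# {'has_fever', 'has_cough', 'suspect_flu', 'recommend_rest'}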

    What is Fuzzy Logic?

    Fuzzy logic is an approach to computing based on ‘degrees of truth’ rather than the usual ‘true or false’ (1 or 0) Boolean logic on which modern computers are based. This standard logic with only two truth values, true and false, makes vague attributes or situations difficult to characterize. Human experts often use rules that contain vague expressions, and so it is useful for an expert system’s inference engine to employ fuzzy logic. AI systems therefore use fuzzy logic to imitate human reasoning and cognition. Rather than strictly binary cases of truth, fuzzy logic includes 0 and 1 as extreme cases of truth but with various intermediate degrees of truth. IBM's Watson supercomputer is one of the most prominent examples of how variations of fuzzy logic and fuzzy semantics are used.
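A minimal Python sketch of a degree of truth, using a made-up triangular membership function for the vague attribute 'warm' (the breakpoints are invented for illustration):

# Map a temperature to a truth value in [0, 1] for "warm" instead of a
# strict true/false answer.
def warm_membership(temp_c):
    if temp_c <= 15 or temp_c >= 35:
        return 0.0
    if temp_c <= 25:
        return (temp_c - 15) / 10   # rising edge: 15C -> 0.0, 25C -> 1.0
    return (35 - temp_c) / 10       # falling edge: 25C -> 1.0, 35C -> 0.0

for t in (10, 18, 25, 30):
    print(t, "->", round(warm_membership(t), 2))
# 10 -> 0.0, 18 -> 0.3, 25 -> 1.0, 30 -> 0.5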
An example of fuzzy logic. (Source: Tutorialspoint)

    What is Machine Learning?

The ability of statistical models to develop capabilities and improve their performance over time without following explicitly programmed instructions. Machine learning enables computers to learn from data and apply that learning without human intervention. This is particularly helpful when a solution is hidden in a huge data set (for instance in weather forecasting or climate modelling), an area described by the term Big Data.
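As a minimal sketch of learning from examples rather than explicit rules, assuming the scikit-learn library is installed (the data here are made up):

# The classifier infers the mapping from inputs to labels from data;
# nobody writes the decision rule by hand.
from sklearn.tree import DecisionTreeClassifier

# Toy training set: [hours_studied, hours_slept] -> passed exam (1) or not (0)
X = [[1, 4], [2, 8], [6, 7], [8, 5], [3, 3], [9, 8]]
y = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier().fit(X, y)   # the model learns the rule itself
print(model.predict([[7, 6]]))               # e.g. [1]: predicted to pass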

    What are Deep Learning Neural Networks?

A complex form of machine learning involving neural networks with many layers of abstract variables. Neural networks are a subset of machine learning and are at the heart of deep learning algorithms. Their name and structure are inspired by the human brain, mimicking the way biological neurons signal to one another in order to recognize relationships in vast amounts of data. In financial services, for instance, they are used in applications ranging from forecasting and marketing research to fraud detection and risk assessment.
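Those 'layers of abstract variables' can be seen in miniature in the forward pass of a tiny two-layer network, sketched here in plain NumPy with random, untrained weights (a hypothetical toy, not a real model):

# Each layer is a matrix multiplication followed by a nonlinearity.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)            # input: 4 features
W1 = rng.random((8, 4))      # layer 1: 4 inputs -> 8 hidden units
W2 = rng.random((2, 8))      # layer 2: 8 hidden units -> 2 outputs

hidden = np.maximum(0, W1 @ x)   # ReLU nonlinearity
output = W2 @ hidden
print(output)   # training would adjust W1 and W2 to reduce prediction error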

    What is Robotic Process Automation (RPA)?

This is software technology that automates repetitive, rules-based processes usually performed by people sitting in front of computers. RPA makes it easy to build, deploy, and manage software robots that emulate human actions when interacting with digital systems and software. Like people, software robots can understand what's on a screen, complete the right keystrokes, navigate systems, and identify and extract data: they can open email attachments, complete e-forms, record and re-key data, and perform other defined tasks that mimic human action. But software robots can do it faster and more consistently than people.

    What is Natural Language Processing (NLP)?

    Natural language processing allows machines to understand and respond to text or voice data in much the same way humans do. This enables conversational interaction between humans and computers. This requires the ability to extract or generate meaning and intent from text in a readable, stylistically natural, and grammatically correct form. NLP drives computer programs that translate text from one language to another, respond to spoken commands, and summarize large volumes of text rapidly - even in real time. In daily life you are already interacting with NLP in the form of voice-operated GPS systems, digital assistants, speech-to-text dictation software, or customer service chatbots.
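As a small example of the first step in most NLP pipelines, tokenization, assuming the NLTK library and its 'punkt' tokenizer data are installed:

# Split raw text into sentences and words before any deeper analysis.
import nltk
nltk.download("punkt", quiet=True)   # fetch tokenizer models on first run

text = "AI is everywhere. Your phone's assistant uses NLP to parse requests."
print(nltk.sent_tokenize(text))       # ['AI is everywhere.', "Your phone's ..."]
print(nltk.word_tokenize(text)[:6])   # ['AI', 'is', 'everywhere', '.', 'Your', ...]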

    What is Speech Recognition?

    The ability to automatically and accurately recognize and transcribe human speech. It uses natural language processing (NLP) to process human speech into a written format. Many mobile devices incorporate speech recognition into their systems to conduct voice search – e.g., Siri or Alexa – or provide more accessibility around texting.

    What is Computer Vision?

    The ability to extract meaning and intent out of visual elements, whether characters (in the case of document digitization), or the categorization of content in images such as faces, objects, scenes, and activities. You see this working in Google’s image search service. Powered by convolutional neural networks, computer vision has applications within photo tagging in social media, radiology imaging in healthcare, and self-driving cars’ ability to recognize obstacles.
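The core operation of those convolutional neural networks can be sketched in plain NumPy: sliding a small filter over an image to detect a local pattern (here, a vertical edge in a made-up 4x4 image):

# One convolution: the filter responds strongly where dark meets bright.
import numpy as np

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)

kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)

h, w = kernel.shape
out = np.zeros((image.shape[0] - h + 1, image.shape[1] - w + 1))
for i in range(out.shape[0]):
    for j in range(out.shape[1]):
        out[i, j] = np.sum(image[i:i+h, j:j+w] * kernel)
print(out)   # the response peaks along the column where the edge lies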

    Examples of Artificial Intelligence

    Artificial intelligence has made its way into a wide variety of markets. Here are just a few of the more prominent examples:

    Artificial Intelligence in the Transportation Industry

AI systems play a fundamental role in operating an autonomous vehicle. Self-driving cars are equipped with a large number of sensors that gather information from the vehicle's surroundings in real time and in any environment. This data informs the on-board AI system about the presence of objects, distances, road conditions or whether the car is about to hit something. The data from these sensors is used by the on-board AI system to make instantaneous decisions about whether there are any dangerous conditions, whether the vehicle needs to shift lanes, or whether it should slow down or stop completely.
AI technologies are used in other areas of the transportation industry to manage traffic, predict flight delays, and make ocean shipping safer and more efficient.

    Artificial Intelligence in Healthcare

    Artificial intelligence in healthcare mimics human cognition in the analysis, presentation, and comprehension of complex medical and health care data. Specifically, AI is the ability of computer algorithms to approximate conclusions based solely on input data. The primary aim of health-related AI applications is to analyze relationships between prevention or treatment techniques and patient outcomes. AI programs are applied to practices such as diagnosis processes, treatment protocol development, drug development, personalized medicine, and patient monitoring and care.
For instance, IBM's Watson understands natural language and can respond to questions asked of it. The system mines patient data and other available data sources to form a hypothesis, which it then presents with a confidence scoring schema. Scientists in Japan reportedly saved a woman's life by applying Watson to help them diagnose a rare form of cancer. Faced with a 60-year-old woman whose cancer diagnosis was unresponsive to treatment, they supplied Watson with 20 million clinical oncology studies, and within ten minutes it diagnosed the rare leukemia that had stumped the clinicians.
    Other AI applications include using online virtual health assistants and chatbots to help patients and healthcare customers find medical information, schedule appointments, understand the billing process and complete other administrative processes. An array of AI technologies is also being used to predict, fight and understand pandemics such as COVID-19.

    Artificial Intelligence in Finance and Banking

    AI in finance encompasses everything from chatbot assistants to credit card fraud detection and task automation. In corporate finance, AI helps to better predict and assess loan risks. And of course, AI systems are and have been developed for investors’ automated trading on stock exchanges.
    Consumer financial services are employing chatbots to make their customers aware of services and offerings and to handle transactions that don't require human intervention. AI virtual assistants are being used to improve and cut the costs of compliance with banking regulations. Banking organizations are also using AI to improve their decision-making for loans, and to set credit limits and identify investment opportunities.

    Artificial Intelligence in Manufacturing

    Manufacturing has been at the forefront of incorporating smart technologies into the workflow. The applications of AI in the field of manufacturing are widespread and range from failure prediction and predictive maintenance to quality assessment, inventory management and pricing decisions.

    Artificial Intelligence in Law and Legal Services

Contract review and analytics as well as legal research require sifting through large volumes of documents, which is often overwhelming for humans. Using AI to help automate the legal industry's labor-intensive processes is saving time and improving client service. Law firms are using machine learning to describe data and predict outcomes, computer vision to classify and extract information from documents, and natural language processing to interpret requests for information.

    Artificial Intelligence in the Military

    Autonomous weapon systems – commonly known as killer robots – are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development. The U.S. alone budgeted US$18 billion for autonomous weapons between 2016 and 2020.

    Artificial Intelligence in Cybersecurity

AI and machine learning have become essential to information security, as these technologies are capable of swiftly analyzing millions of data sets and tracking down a wide variety of cyber threats – from malware to shady behavior that might result in a phishing attack. These technologies continually learn and improve, drawing on past and present data to detect anomalies, identify suspicious activities that indicate threats, and pinpoint new varieties of attacks that can occur today or tomorrow. By analyzing data and using logic to identify similarities to known malicious code, AI can provide alerts to new and emerging attacks much sooner than human employees and previous technology iterations.

    Artificial Intelligence in Nanotechnology

Examples of the use and application of artificial intelligence tools in nanotechnology research include the following. In scanning probe microscopy, researchers have developed an approach called functional recognition imaging (FR-SPM), which seeks direct recognition of local behaviors from measured spectroscopic responses using neural networks trained on examples provided by an expert. This technique combines artificial neural networks (ANNs) with principal component analysis, which simplifies the input data to the neural network by whitening and decorrelating the data, reducing the number of independent variables.
The characterization of the structural properties of nanomaterials has also been tackled with ANNs. For example, these algorithms have been employed to determine the morphology of carbon nanotube turfs by quantifying structural properties such as alignment and curvature.
    ANNs have been used to explore the nonlinear relationship between input variables and output responses in the deposition process of transparent conductive oxide.

    AI Programming Languages

Today's AI uses conventional CMOS hardware and the same basic algorithmic functions that drive traditional software. Future generations of AI are expected to give rise to new brain-inspired circuits and architectures that can make data-driven decisions faster and more accurately than a human being can.
    AI programming differs quite a bit from standard software engineering approaches where programming usually starts from a detailed formal specification. In AI programming, the implementation effort is actually part of the problem specification process.
    The programming languages that are used to build AI and machine learning applications vary. Each application has its own constraints and requirements, and some languages are better than others in particular problem domains. Languages have also been created and have evolved based on the unique requirements of AI applications.
Eras of AI language evolution. (Source: IBM)
    Due to the fuzzy nature of many AI problems, AI programming benefits considerably if the programming language frees the programmer from the constraints of too many technical constructions (e.g., low-level construction of new data types, manual allocation of memory). Rather, a declarative programming style is more convenient using built-in high-level data structures (e.g., lists or trees) and operations (e.g., pattern matching) so that symbolic computation is supported on a much more abstract level than would be possible with standard imperative languages.
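To illustrate symbolic computation over built-in high-level data structures with pattern matching, here is a toy symbolic differentiator in Python (using the structural pattern matching of Python 3.10+; representing expressions as nested tuples is an assumption made for this sketch):

# Expressions are trees of tuples, e.g. ('+', ('*', 'x', 'x'), 'x') for x*x + x.
def diff(expr, var="x"):
    match expr:
        case int() | float():
            return 0                      # constants differentiate to 0
        case str() if expr == var:
            return 1                      # d(var)/d(var) = 1
        case str():
            return 0                      # other symbols treated as constants
        case ("+", a, b):                 # sum rule
            return ("+", diff(a, var), diff(b, var))
        case ("*", a, b):                 # product rule
            return ("+", ("*", diff(a, var), b), ("*", a, diff(b, var)))
        case _:
            raise ValueError(f"unknown expression: {expr!r}")

print(diff(("+", ("*", "x", "x"), "x")))
# ('+', ('+', ('*', 1, 'x'), ('*', 'x', 1)), 1)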
    From the requirements of symbolic computation and AI programming, two basic programming paradigms emerged initially as alternatives to the imperative style: the functional and the logical programming style. Both are based on mathematical formalisms, namely recursive function theory and formal logic.

    Functional Programming Style

The first practical, and still most widely used, AI programming language is the functional language Lisp (short for List Processor), developed by John McCarthy in the late 1950s. Lisp is based on mathematical function theory and the lambda calculus.
Besides Lisp, a number of alternative functional programming languages have been developed, in particular ML (which stands for Meta-Language) and Haskell.
    Programming in a functional language consists of building function definitions and using the computer to evaluate expressions, i.e., function application with concrete arguments. The major programming task is then to construct a function for a specific problem by combining previously defined functions according to mathematical principles. The main task of the computer is to evaluate function calls and to print the resulting function values.
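A minimal Python sketch of this style: building new functions by combining previously defined ones, and computing by evaluating expressions rather than issuing step-by-step commands:

# Functions are values; programs are built by composing them.
from functools import reduce

def compose(f, g):
    return lambda x: f(g(x))   # the function x -> f(g(x))

square = lambda x: x * x
increment = lambda x: x + 1

square_then_increment = compose(increment, square)
print(square_then_increment(4))   # increment(square(4)) = 17

# Folding a list with a binary function instead of writing an explicit loop:
print(reduce(lambda a, b: a + b, map(square, [1, 2, 3, 4])))   # 1+4+9+16 = 30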

    Logical Programming Style

During the early 1970s, a new programming paradigm appeared, namely logic programming on the basis of predicate calculus. The first and still most important logic programming language is Prolog (an acronym for Programming in Logic), developed by Alain Colmerauer, Robert Kowalski and Philippe Roussel. Problems in Prolog are stated as facts, axioms and logical rules for deducing new facts.
    Programming in Prolog consists of the specification of facts about objects and their relationships, and rules specifying their logical relationships. Prolog programs are declarative collections of statements about a problem because they do not specify how a result is to be computed but rather define what the logical structure of a result should be. This is quite different from imperative and even functional programming, in which the focus is on defining how a result is to be computed. Using Prolog, programming can be done at a very abstract level quite close to the formal specification of a problem.
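To convey the flavor, here is a classic Prolog example (facts, a rule, and a query, shown in comments) next to a Python sketch that spells out the 'how' a Prolog system would work out for itself:

# In Prolog one would write:
#   parent(tom, bob).            % facts
#   parent(bob, ann).
#   grandparent(X, Z) :- parent(X, Y), parent(Y, Z).   % rule
#   ?- grandparent(tom, Who).    % query: Who = ann
parents = {("tom", "bob"), ("bob", "ann")}   # the same facts as Python tuples

def grandparents(facts):
    # Derive grandparent(X, Z) from parent(X, Y) and parent(Y, Z).
    return {(x, z) for (x, y1) in facts for (y2, z) in facts if y1 == y2}

print(grandparents(parents))   # {('tom', 'ann')}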

Object-oriented Languages

Object-oriented languages belong to another well-known programming paradigm. In such languages the primary means for specifying problems is to define abstract data structures, also called objects or classes. A class consists of a data structure together with its main operations, often called methods. An important characteristic is that classes can be arranged in a hierarchy of classes and subclasses. Popular object-oriented languages are Eiffel, C++ and Java.
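A minimal Python sketch of these ideas (class, subclass, and an overridden method; the names are invented for illustration):

# A base class and a subclass that inherits its data and overrides a method.
class Agent:
    def __init__(self, name):
        self.name = name
    def act(self):
        return f"{self.name} does nothing"

class ChessAgent(Agent):          # subclass in the class hierarchy
    def act(self):                # overridden behavior
        return f"{self.name} searches the game tree for a move"

for agent in (Agent("generic"), ChessAgent("DeepToy")):
    print(agent.act())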
As new languages are being developed and applied to AI problems, the workhorses of AI (Lisp and Prolog) continue to be used extensively. For instance, the IBM team that programmed Watson used Prolog to parse natural-language questions into new facts that could be used by Watson.
    More recent programming languages that have found applications in AI:

    Python

Python is a general-purpose interpreted language that includes features from many languages (such as object-oriented features and functional features inspired by Lisp). What makes Python useful in the development of intelligent applications is the wealth of modules available outside the core language. These modules cover machine learning (scikit-learn, NumPy), natural language and text processing (NLTK), and many neural network libraries that cover a broad range of topologies.

    Java

Java was developed in the early 1990s. It is easy to code and highly scalable, making it desirable for AI projects. It is also portable and can easily be deployed on different platforms since it uses virtual machine technology.

    C++

The C language has been around for a long time and continues to be relevant. In 1996, IBM introduced Deep Blue, then the fastest chess-playing computer in the world. Deep Blue was written in C and was capable of evaluating 200 million positions per second. In 1997, it became the first computer to defeat a reigning world chess champion. C++ is an extension of C and adds object-oriented, generic, and functional features as well as facilities for low-level memory manipulation.

    R

    The R language (and the software environment in which to use it) follows the Python model. R is an open-source environment for statistical programming and data mining, developed in the C language. Because a considerable amount of modern machine learning is statistical in nature, R is a useful language that has grown in popularity since its stable release in 2000. R includes a large set of libraries that cover various techniques; it also includes the ability to extend the language with new features.

    Julia

Julia is one of the newer languages on the list and was created with a focus on high-performance computing in scientific and technical fields. Julia includes several features that directly apply to AI programming. While it is a general-purpose language and can be used to write any application, many of its features are well suited for numerical analysis and computational science.

    Scala

Scala is a general-purpose language that blends object-oriented and functional programming styles into a concise language that has become popular for machine learning. Designed to be concise, many of Scala's design decisions aim to address criticisms of Java.

    AI Frameworks and Libraries

There are numerous tools out there – AI frameworks and AI libraries – that make the creation of AI applications such as deep learning, neural networks and natural language processing easier and faster.
    These are open source collections of AI components, machine learning algorithms and solutions that perform specific, well-defined operations for common use cases to allow rapid prototyping.

Caffe

Caffe (which stands for Convolutional Architecture for Fast Feature Embedding) is a deep learning framework made with expression, speed, and modularity in mind. It is open source, written in C++, with a Python interface.

    Scikit-learn

    Scikit-learn, initially started in 2007 as a Google Summer of Code project, is a free software machine learning library for the Python programming language.

    Google Cloud AutoML

AutoML enables developers with limited machine learning expertise to train high-quality models specific to their business needs. AutoML offers a free trial and otherwise uses pay-as-you-go pricing.

    Amazon Machine Learning

    Amazon offers a large set of AI and machine learning services and tools.

    TensorFlow

TensorFlow is a free and open-source software library developed by the Google Brain team for machine learning and artificial intelligence. It is used for both research and production at Google.
TensorFlow is a symbolic math library based on dataflow and differentiable programming. It can be used across a range of tasks but has a particular focus on training and inference of deep neural networks.
    It offers a highly capable framework for executing the numerical computations needed for machine learning (including deep learning). On top of that, the framework provides APIs for most major languages, including Python, C, C++, Java and Rust.
    TensorFlow manipulates and connects data sets using multidimensional arrays (called tensors) and converts data flow graphs into mathematical operations (referred to as nodes). Programmers can rely on an object-oriented language like Python to treat those tensors and nodes as objects, coupling them to build the foundations for machine learning operations.
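A minimal sketch of this tensors-and-operations model, assuming TensorFlow 2.x with eager execution:

# Tensors flow through mathematical operations; gradients are recorded
# automatically for learning.
import tensorflow as tf

a = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])       # a 2x2 tensor
b = tf.constant([[1.0], [1.0]])     # a 2x1 tensor
print(tf.matmul(a, b))              # matrix product: [[3.], [7.]]

x = tf.Variable(3.0)
with tf.GradientTape() as tape:     # record operations on x
    y = x * x
print(tape.gradient(y, x))          # dy/dx = 2x = 6.0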

    Keras

    Keras is an open-source software library that provides a Python interface for artificial neural networks. Keras acts as an interface for the TensorFlow library.
    Keras features a plug-and-play framework that programmers can use to build deep learning neural network models, even if they are unfamiliar with the specific tensor algebra and numerical techniques.
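A minimal sketch of that plug-and-play workflow (a toy architecture trained on random data, purely to show the API):

# Define, compile, and fit a small network in a few lines.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),      # hidden layer
    keras.layers.Dense(1, activation="sigmoid"),   # output layer
])
model.compile(optimizer="adam", loss="binary_crossentropy")

X = np.random.rand(32, 4)                  # random toy inputs
y = np.random.randint(0, 2, size=(32, 1))  # random toy labels
model.fit(X, y, epochs=2, verbose=0)
print(model.predict(X[:2], verbose=0))     # two probabilities in [0, 1]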

    PyTorch

PyTorch is an open-source machine learning library based on the Torch library (a scientific computing framework for creating deep learning algorithms), used for applications such as computer vision and natural language processing, and primarily developed by Facebook's AI Research lab. It is free and open-source software; besides its primary Python interface, it also provides a C++ interface.
    PyTorch is a direct competitor to TensorFlow. In particular, writing optimized code in PyTorch is somewhat easier than in TensorFlow, mainly due to its comprehensive documentation, dynamic graph computations and support for parallel processing.
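A minimal sketch of those dynamic graph computations: the graph is built as operations execute, and autograd differentiates through it:

# Operations on tensors build the computation graph on the fly.
import torch

x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x          # graph constructed during execution
y.backward()                # backpropagate
print(x.grad)               # dy/dx = 2x + 2 = 8.0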

    Apache MXNet

Apache MXNet is an open-source deep learning software framework used to train and deploy deep neural networks. It offers features similar to TensorFlow and PyTorch but goes further by providing distributed training for deep learning models across multiple machines.
    It is scalable, allowing for fast model training, and supports a flexible programming model and multiple programming languages (including C++, Python, Java, Julia, Matlab, JavaScript, Go, R, Scala, Perl, and Wolfram Language.)
    Check out our SmartWorlder section to read more about smart technologies.