
Artificial Intelligence and Machine Learning

Forging a New World for Scholarly Communication and the Advancement of Science

by Bonnie Lawlor

Artificial Intelligence (AI) and Machine Learning (ML) are the subjects of presentations at most conferences today, across diverse professional disciplines. Why? The following quote says it all:

Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don’t think AI (Artificial Intelligence) will transform in the next several years. [1]

In May 2019, I agreed to cover a conference entitled “Artificial Intelligence: Finding Its Place in Research, Discovery, and Scholarly Publishing.” [2] Fortunately for me, this was not a conference for the AI computer-savvy; rather, it was for non-techies who wanted to learn how ML and AI are being used within the scholarly communication community: by researchers, educators, funders, publishers, technology vendors, and others.

Listening to the speakers motivated me to learn more about AI and ML, simply because I needed to understand the jargon and how it is used. In some presentations, the terms seemed to be used interchangeably. I was curious to learn whether and how they differ, and the history behind them, which I will briefly share before moving on to some of the interesting highlights of the conference.

The concept of AI was captured in a 1950 paper by Alan Turing, “Computing Machinery and Intelligence,”*1 in which he considered the question “Can machines think?” It is well written, understandable by a non-techie like me, and worth a read. In 1956, a proof-of-concept program, Logic Theorist,*3 was demonstrated at the Dartmouth Summer Research Project on Artificial Intelligence,*4 where the term “Artificial Intelligence” was coined and the study of the discipline was formally launched (for more on the history and future of AI, see “The History of Artificial Intelligence”*5).

Defined concisely, “Artificial Intelligence is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence.”*6 The same article defines Machine Learning as “the study of computer algorithms that allow computer programs to automatically improve through experience.”*7 ML is one of the tools with which AI can be achieved, and the article explains why the two terms are so often incorrectly interchanged: ML and AI are not the same; ML, along with Deep Learning, is a subset of the overarching concept of AI. The article also defines much of the ML jargon and is worth a read if you plan to delve deeper into the subject. So on to some of the more interesting conference highlights.
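For readers curious what “improving through experience” looks like in practice, here is a minimal, hypothetical sketch (not from the conference or the cited article): a classic perceptron that adjusts its weights each time it sees a labeled example, gradually learning the logical AND function. The function names and data are illustrative only.

```python
# A minimal illustration of "learning from experience": a perceptron
# that improves as it processes labeled examples of the AND function.
# All names and data here are hypothetical, chosen only for illustration.

def train_perceptron(examples, epochs=10, lr=0.1):
    """Adjust weights after each labeled example (the 'experience')."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # how wrong was this prediction?
            w[0] += lr * err * x1       # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The logical AND function, expressed as labeled training data.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

Nothing in the program “knows” what AND means; the rule emerges solely from repeated exposure to examples, which is the essence of the ML definition quoted above.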

Where are we in AI Development?

Rosina O. Weber, Associate Professor, Department of Information Science, College of Computing and Informatics, Drexel University, said that from her perspective there are three waves of AI: 1) describe, 2) categorize, and 3) explain. The first wave, describe, which has already occurred, allowed us to represent knowledge, with examples such as intelligent help desks, ontologies, and content-based recommenders. The second wave, categorize, is based on statistical learning and, according to Weber, is where AI and ML are today; examples include image and speech recognition, sentiment analysis, and neural networks. This second wave has limitations, the major problem being that humans do not want a machine to make decisions for them. However, she noted that as a society we have accumulated a great deal of data, and that data changes quickly. The original methods of converting data into information and letting humans make decisions no longer work effectively; the data deluge we are increasingly experiencing can only be managed with the help of automated decision-making agents. Hopefully, the third wave, explain, will come to the rescue. This is where software agents and humans become true partners, with humans serving as “managers” of the work performed by multiple decision-making agents. It is in this wave that AI systems will be capable of interacting with humans, explaining themselves, and adapting to different contexts.

According to Weber, the future of AI is the third wave, eXplainable AI (XAI). Humans will supervise their partners, the AI agents, who, in turn, can explain their decisions, adapt to specific contexts, learn from experience, adopt ethical principles, and comply with regulations. The challenges to realizing the third wave are: the change in the nature of work (are humans ready to be the “boss” of AI agents?); the need for humans to be able to trust the agents; and collaboration among publishers, who control the richest source of data: the proprietary information held in scientific publications. Weber then described the Science and Technology Cycle and how AI can improve it, from fundraising through research, publication, and education. It is a thought-provoking read [3].


* See reference within [2], i.e., https://doi.org/10.3233/ISU-190068; the number directly following the asterisk corresponds to the reference listed within that paper.

Cite: Lawlor, B. (2021). Artificial Intelligence and Machine Learning. Chemistry International, 43(1), 8-13. https://doi.org/10.1515/ci-2021-0103
