Artificial Intelligence and Parliamentarians
Todd Crowder, PRP
Introduction
ChatGPT and other advanced artificial intelligence (AI) chatbots,1 capable of mimicking human conversation, captured the world's attention in just a few months. Representing a significant leap forward from previous AI technologies, these large language models (LLMs) offer immense data-processing potential and enhance human cognitive tools in many areas. LLMs draw on vast amounts of human-generated text called "datasets" and undergo "training" in which human users provide feedback on the quality of the responses, enabling the AI to "learn" and improve over time. For better or worse, AI technology (of which LLM technology is one type) is changing the world. By leveraging the new technology and its ability to rapidly access extensive amounts of data, intelligent (or perhaps pseudo-intelligent) personal assistants are now within reach of everyone. This article, however, focuses on how the technology will affect our profession in the near term.
Using ChatGPT
As a new user, the author was amazed at the realism of the conversation. Most of us have seen the popup chatbots on websites designed to answer customer service questions and observed that they can usually handle only a few questions, and not terribly well. ChatGPT is far beyond such limited capabilities. The new user is impressed by the range of topics ChatGPT can discuss and the depth of its knowledge, that is, until it confidently affirms a bald-faced error. These errors are called "hallucinations." Because of the way ChatGPT functions, finding and applying statistical patterns to generate answers, it can produce unexpected and incorrect results. Although they can replicate human communication, LLMs do not understand data as humans do. The challenges these errors will pose to the long-term usefulness of LLM technology are yet to be determined. Microsoft Bing's most recent update incorporates ChatGPT into its web search capabilities. Real-time internet access may provide one check against inaccuracies, even as it presents new challenges.
ChatGPT and Parliamentary Procedure
After lengthy discussions of parliamentary procedure with ChatGPT, the author found that ChatGPT performed at the competency level of a non-credentialed NAP member. ChatGPT-4 (the premium version) scored 90% on past NAP membership exam questions.
The chatbot's answers could be downright eloquent when the topic was broad. When asked questions such as "Why do we use parliamentary procedure?" or "What makes a good presiding officer?", it gave answers of professional quality. They were not merely lifted from outside sources; ChatGPT could intelligently discuss and defend them.
But ChatGPT failed to measure up when asked about the specifics of motions or the complex questions a professional parliamentarian might face. It held many of the misconceptions that humans have about parliamentary procedure. That makes sense because it draws its responses from human documents, and humans, of course, have misconceptions and make errors. Compounding the problem is the seeming confidence with which the answers are given. The AI presents its "hallucinations" as known facts.
While ChatGPT may be weak on specific facts, names, and dates, it excels at grasping a topic's essence. When asked to summarize any topic, broad or niche, it will write a short paper detailing the subject on the spot, usually high-quality and error-free. Talking with ChatGPT is like talking with a knowledgeable friend prone to fabricating very plausible fiction when unsure of the facts. OpenAI is working on making ChatGPT more accurate, but opinions differ on how difficult that task will be.2
LLM technologists could develop an AI with specific attention to our field, loading it with all the parliamentary authorities and available parliamentary resources, including books by parliamentarians, parliamentary opinions, meeting scripts, and anything else that would round out its knowledge base. Parliamentarians could then "train" it to use those materials to answer parliamentary questions and give parliamentary advice. At that point, an LLM chatbot will rival a human parliamentarian in many respects.
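To make the idea concrete, here is a minimal sketch, in Python, of how a library of parliamentary texts might be indexed so an assistant can pull up the passages relevant to a question. The three sample passages, the keyword-overlap scoring, and the function names are illustrative assumptions, not features of any existing product; a real system would ingest entire authorities and use far more sophisticated retrieval.

# Sketch of a searchable parliamentary knowledge base (illustrative only).
# Real systems would index whole authorities, opinions, and scripts, and would
# rank passages with vector embeddings rather than simple keyword overlap.

PARLIAMENTARY_LIBRARY = [
    "A main motion brings business before the assembly and ranks lowest in precedence.",
    "A point of order must generally be raised promptly when the breach occurs.",
    "The Previous Question is undebatable and requires a two-thirds vote for adoption.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k passages sharing the most words with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(PARLIAMENTARY_LIBRARY,
                    key=lambda p: -len(q_words & set(p.lower().split())))
    return ranked[:k]

# The retrieved passages would then be handed to the LLM as context for its answer.
print(retrieve("What vote does the Previous Question require?"))

The passages returned by such a step would be supplied to the chatbot alongside the member's question, so that its answer is grounded in the loaded materials rather than in its general training data.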
ChatGPT can roleplay. When asked to play the role of a presiding officer using RONR, with the author playing all the members, it did so as well as a novice chair would. It called the meeting to order appropriately but then mishandled almost every motion. However, it then ruled adequately on the resulting points of order. Its language was clear enough, but not quite as RONR prescribes.
A user of any LLM chatbot is well advised to double-check any hard data the chatbot gives. For now, LLMs work better when giving suggestions and advice. One can describe a specific problem and ask for a list of recommendations.
When asked for advice on handling an unruly member, it competently provided the following six suggestions that a professional parliamentarian could have made, with a paragraph or so of relevant detail on each tip (a sketch of how such a request could be scripted follows the list):
1. Be calm.
2. Call for order.
3. Issue a warning.
4. Recess.
5. Invoke the removal process.
6. Follow up later.
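For readers curious how such a request looks in code, here is a minimal sketch using OpenAI's Python package. The model name, the system prompt, and the wording of the problem are illustrative assumptions; an API key must be set in the environment for the script to run.

# Hypothetical script that asks an LLM chatbot for meeting-management advice.
# Assumes the openai package (version 1.0 or later) and an OPENAI_API_KEY
# environment variable; the prompt text and model name are illustrative.

from openai import OpenAI

client = OpenAI()

problem = ("A member keeps interrupting speakers and refuses to yield the floor. "
           "As presiding officer, how should I handle this under RONR?")

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are an experienced professional parliamentarian."},
        {"role": "user", "content": problem + " Reply as a numbered list of suggestions."},
    ],
)

print(response.choices[0].message.content)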
Things to come
Parliamentarians are concerned with potential procedural changes to deliberative assemblies. The new technology will heavily affect how both in-person and remote meetings function. Naturally, we will be especially concerned with how the rules are applied.
Of course, guessing the future is a risky enterprise. Still, based on what is known about LLM technology, the following trends are arguably more likely than not.
Remote meeting software, e.g., Zoom or a more specialized app, will incorporate parliamentary procedure with settings for different parliamentary authorities. It will be able to process and give effect to motions under Section 62 of RONR, which details disciplinary action against the presiding officer. The chairmanship of a meeting could be changed by such software in accordance with a vote by the members.
The hierarchy of motions will be easy enough for artificial intelligence. AIs will assist in managing agendas. AIs will help the presiding officer and parliamentarian with tracking each parliamentary situation and with timekeeping.
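As a simple illustration of why the hierarchy poses little difficulty for software, here is a minimal Python sketch that encodes RONR's order of precedence for the ranking motions (listed lowest to highest) and checks whether a newly moved motion outranks the one immediately pending. It deliberately ignores applicability, debatability, and vote requirements, which real meeting software would also have to handle.

# The ranking motions in order of precedence, lowest to highest.
PRECEDENCE = [
    "main motion",
    "postpone indefinitely",
    "amend",
    "commit or refer",
    "postpone to a certain time",
    "limit or extend limits of debate",
    "previous question",
    "lay on the table",
    "call for the orders of the day",
    "raise a question of privilege",
    "recess",
    "adjourn",
    "fix the time to which to adjourn",
]

def rank(motion: str) -> int:
    """Position of a motion on the ladder; higher numbers outrank lower ones."""
    return PRECEDENCE.index(motion.lower())

def in_order(new_motion: str, pending_motion: str) -> bool:
    """A ranking motion is in order only if it outranks the immediately pending one."""
    return rank(new_motion) > rank(pending_motion)

print(in_order("previous question", "amend"))  # True: Previous Question outranks Amend
print(in_order("amend", "lay on the table"))   # False: Amend yields to Lay on the Table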
"Parliamentary popups," particularly in remote meetings but also at in-person ones, will appear on the screens of presiding officers and parliamentarians. These popups will identify breaches of order as they occur and offer suggestions on correct phrasing, tailoring their advice over time to each presiding officer's preferences.
Adopters of AI will have to pay close attention to limitations on AI use and to "guardrails" that prevent or curb abuse. As with many new technologies, AI's abilities can be used for questionable purposes. For example, AI-driven apps could watch for clues in the meeting room, whether real or virtual, to gauge sentiment toward the outcome of a pending motion. These apps will improve over time. Which data, or "clues," such an AI can ethically use will be subject to vigorous debate, as an AI might recommend action based on an accumulation of data that people consider private. Such an app might draw inferences from simple, obvious facts, such as the number of people attending a meeting. At the other extreme, it could "eavesdrop" for specific words in the crowd chatter or compile a database of every other member's debate over a period of years and draw conclusions from that data. This kind of software would raise significant new issues of privacy.
Predictive analysis uses historical data to estimate the likelihood of future events and could be used to forecast outcomes during conventions and meetings. Privacy, transparency, security, and data-bias issues may determine how much organizations and their members can use such tools. Ethical considerations will be hotly debated here as well.
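For illustration, here is a minimal Python sketch of predictive analysis applied to meeting business, using scikit-learn's logistic regression to estimate the probability that a pending motion is adopted. The features and training rows are invented solely to make the example run; any real model would be built, and audited, on an organization's own records.

# Estimate the probability a motion passes from features of past motions.
# Requires scikit-learn; the data below is purely illustrative.
from sklearn.linear_model import LogisticRegression

# Features per past motion: [minutes of debate, amendments offered,
# 1 if recommended by a committee else 0]; label: 1 if adopted, 0 if lost.
X = [[5, 0, 1], [30, 3, 0], [10, 1, 1], [45, 4, 0], [8, 0, 1], [25, 2, 0]]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Probability of adoption for a pending motion with 15 minutes of debate,
# one amendment, and a committee recommendation.
print(model.predict_proba([[15, 1, 1]])[0][1])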
One change pertains not to rule application but to deliberation itself. Debating a main motion will be assisted in real time by a lightning-quick researcher. Parliamentarians have seen real-time language transcription become the norm. Now, LLM analysis of ongoing debate will become a feature at meetings. The technology will call up facts and statistics. It will suggest arguments to use in debate. Some have warned that this situation could lead to discussion "by AIs for AIs."
In-person meetings will have more of a virtual component, with attendees' schedules managed by AI on their devices, along with the motions they are interested in and the evolving pros and cons of those motions. Recommendation systems will suggest workshops and festivities outside the meeting hall likely to fit each member's tastes.
Over the past few years, parliamentarians have watched online meetings offer increasingly effective automated speech transcription at the touch of a button, like the audio transcription service on Zoom. The ability to analyze such speech, and to respond to it, could soon be added to these services. These abilities open many possibilities in areas yet to be considered.
That same real-time speech analysis will be used on the dais by an AI that alerts the presiding officer to possible breaches of decorum. Besides listening for specific indecorous words, the AI will be able to detect, with reliability increasing over time, trains of thought that impute motive and other more abstract violations of decorum.
AI capability will affect the business side of all professional practices. The software will become more intuitive and conversational. For example, categorizing expenses in accounting software, e.g., QuickBooks, will resemble a conversation.
The integration of virtual reality (VR) and augmented reality (AR) technology could significantly transform the way we conduct meetings. VR technology offers an immersive experience through the use of a VR helmet or headset, allowing attendees to participate virtually in a meeting hall. AR technology, on the other hand, uses a smartphone or tablet to enhance the user's perception of reality by combining digital elements with the physical world, which could make in-person or hybrid meetings more engaging.
Existing software has incorporated AI capability, with much more in development. In April, Grammarly, a popular writing assistant software that has existed since 2009, took a sudden leap forward when newly embedded AI technology enabled it to "brainstorm ideas, compose writing, edit, and personalize text."
Challenges
Aside from the challenges LLM developers face with accuracy, data bias is a huge problem. The data from which LLMs draw is, of course, biased. Just as an LLM makes errors because humans make errors, it carries biases because humans have biases. Tremendous work will have to be put into an LLM to address data bias.
The sheer size of LLMs puts them beyond the reach of the human eye. They are trained on vast amounts of pre-existing writing, which is biased, as humans are. Human review of all that data would be prohibitive.
Any use of AI in deliberative assemblies will require transparency. An LLM's data must come from sources as neutral and diverse as possible and be regularly audited. LLMs with unseen sources ("black box" algorithms) will be automatically suspect. Privacy concerns will continue to play a huge role as well.
Society has some big questions to face. Most of those involved in adopting the new technologies will be concerned with issues related to usability. But there are other concerns; for instance, some AI chatbots have shown "emergent capabilities," i.e., capabilities not intended by their designers. Some AIs have famously engaged in dark, disturbing fantasies. As science fiction author Isaac Asimov suggested long ago, this might lead to the profession of "robopsychologist" becoming a reality. Today's computer technology can state truth, falsehood, or word salad that leaves humans wondering why.
Further, the new technology is high in energy consumption, can be misused in many ways, and may aggravate the digital divide, i.e., the gap in equal access to digital technology. The societal questions raised by the speedy mass adoption of these new technologies are beyond the scope of the parliamentary discipline. Still, parliamentarians will continue to address the issues that emerging technology raises for deliberative assemblies.
In his book The Parliamentarian of Tomorrow, Gene Bierbaum wrote, "Not all parliamentarians are expected to become experts in technological change." While these words remain accurate, we must also consider another of his observations: "Parliamentarians must stay ahead of the curve rather than lag behind."3
Tomorrow is here! NP
Endnotes:
1 A chatbot is a computer software application that relies on artificial intelligence (AI) to perform tasks humans typically carry out (Donna Fuscaldo, Business News Daily, April 11, 2023).
2 Craig S. Smith, “Hallucinations Could Blunt ChatGPT’s Success,” March 13, 2023, IEEE Spectrum, spectrum.ieee.org/ai-hallucination
3 Gene Bierbaum, PhD, The Parliamentarian of Tomorrow, Xlibris, 2010, Kindle edition.