ChatGPT has become an overnight sensation, wowing those who have tried it with its ability to churn out polished prose and answer complex questions. This generative AI platform has even passed an MBA exam at the University of Pennsylvania's Wharton School and several other graduate-level tests. On one level, we have to admire humankind's astonishing ability to invent and perfect such a tool. But the deeper social and economic implications of ChatGPT and of other AI systems under rapid development are only beginning to be understood, including their very real impacts on white-collar workers in education, law, criminal justice, and politics.

The use of AI systems in the political sphere raises some serious red flags. Jake Auchincloss, a Massachusetts Democrat in the U.S. House of Representatives, wasted no time using this untested and still poorly understood technology to deliver a speech on a bill supporting the creation of a new artificial intelligence center. While points for cleverness are in order, the brief speech Auchincloss read on the floor of the U.S. House was actually written by ChatGPT. According to his staff, it was the first time an AI-generated speech had been delivered in Congress. Okay, we can look the other way on this one because Auchincloss was doing a little grandstanding and trying to prove a point. But what about Rep. Ted Lieu (D-Calif.), who used AI to write a bill to regulate AI and who now says he wants Congress to pass it?

Not to go too deep into the sociological or philosophical weeds, but our current political nightmare is playing out in the midst of a postmodern epistemological crisis. We've gone from the rise of an Information Age to a somewhat darker place: a misinformation age in which a high degree of political polarization encourages us to reflexively question the veracity and accuracy of events and ideas on "the other side." We increasingly argue less about ideas themselves than about who said them and in what context. It's well known that the worst kind of argument is one in which the two parties can't even agree on the basic facts of a situation, and that is where we are today in our political theater.

Donald Trump introduced the notion of fake news, his "gift" to the electorate. We now question anything and everything that happens, with deep distrust of the mainstream media also contributing heavily to this habit of mind. This sets the stage for a new kind of political turmoil in which polarization threatens to gridlock and erode democracy even further. In this context, Hannah Arendt, an important theorist of how democracies become less democratic, observed: "The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction and the distinction between true and false no longer exist."

David Bromwich, writing recently in The Nation, noted that Arendt believed there was "a totalitarian germ in the Western liberal political order." Arendt was warning us, and we should pay attention. The confusion and gridlock we experience today may give way to something even worse if we're not vigilant, because the human mind seeks clarity and can tolerate only so much ambiguity. Authoritarian approaches to government that seek end runs around democratic norms offer a specious solution to that discomfort.

Generative AI and Democracy

Into this heady mix of confusion, delusion, and bitter argument in U.S. politics now comes a sophisticated AI system capable of churning out massive amounts of content. That content could take the form of text, images, photos, videos, documentaries, speeches, or just about anything else that might cross our computer screens.

Let's consider what this means. An organization could conceivably use ChatGPT or Google DeepMind as a core informational interface to the Internet and the various platforms available on it. A political organization, for example, could use AI to churn out tweets, press releases, speeches, position papers, clever slogans, and all manner of other content. Worse, once this technology becomes an actual product available to corporations and government entities (such as political campaigns), organizations that can afford the price will be able to purchase versions intended for private use that are far more powerful than the free model now available. (As with other services in the Internet model, the free offering is just there to get us hooked.)

Imagine a world where large amounts of what you see and hear are shaped by these systems. Imagine AI systems starting to compete with one another in their ability to entice and manipulate public opinion. And let's keep in mind that it was Elon Musk who helped found and initially bankrolled OpenAI, the company that built ChatGPT. This, of course, is the same Elon Musk who owns Neuralink, a company chartered to explore how we can hook ourselves into computers via brain implants. Lest you think that idea is intended only for special medical purposes, it has now become "a thing." At this year's Davos gathering in January, an event that draws some of the most powerful people on the planet, Klaus Schwab was caught on video gushing about how wonderful it will be when we all have brain implants.

Congress Must Act, But Will It?

What can be done about these possible additional threats to our already faltering democracy? Will our dysfunctional Congress "get it" and take action? I had some experiences years ago that woke me up to the lack of technological expertise in Congress while serving as a consultant to the congressional Office of Technology Assessment, attending White House events, and meeting with the chair of the House Telecommunications Subcommittee. Although this was several decades ago, I have no reason to believe that much has changed. Last year's Facebook hearings, with Mark Zuckerberg on the hot seat, offered further evidence that many in Congress don't fully understand today's technology, how it is monetized, or how it affects us culturally and politically.

Technology and politics are now conjoined and are moving under the radar of the media and many legislators. Democracy is morphing as more technocratic systems of governance move forward without full oversight or a clear understanding of their social and political impacts. Newer and still poorly understood hyper-technologies are also giving powerful corporations yet another way to creep into and influence the political landscape. The worst-case scenario, of course, is full-on technocracy, in which we hand over key operations of government decision-making to these untried and unproven systems.

This has already happened to a limited extent in criminal justice, evoking the dystopian movie Minority Report. A 2019 article in MIT Technology Review pointed out that the use of AI and automated tools by police departments has in some cases resulted in erroneous convictions and even imprisonment. Perhaps greater public awareness of AI systems and the threat they pose to democracy will precipitate a long overdue reckoning and a reconsideration of these issues by our elected officials. Let's hope so.
