Saturday, June 11, 2022

Hal's peer pressure.

The more they talk, the more we know. Hal may not be able to discern right from wrong. That is problematic.

June 10, 2022
By Drew Harwell and Will Oremus

Former president Donald Trump’s supporters scrambled to defend him online in the hours after the Jan. 6 committee’s hearings began, seeking to sow doubt about his involvement via the same social media channels that had captured clear evidence linking him to the Capitol assault.

In so doing, they reinforced the unmistakable role social media played in the 2021 insurrection and made clear that his supporters are determined to remain a major internet force, despite Trump’s ban from major platforms such as Twitter, Facebook and YouTube.

Trump War Room, a Twitter account once run by his reelection campaign, tweeted, “Trump and the rally had nothing to do with the Capitol breach!,” defying the House committee’s effort to pin responsibility for the riot squarely on Trump.

On the message board Patriots.win — a spinoff of TheDonald.win, where members had shared ideas on how to sneak guns into Washington before the riot — a popular thread Friday called Jan. 6 “the most patriotic thing I’ve ever seen” and said anyone who disagrees is “an enemy of the nation.”...

Hal. Definitely Hal.

June 11, 2022
By Nitasha Tiku

San Francisco - Google engineer Blake Lemoine opened his laptop to the interface for LaMDA, Google’s artificially intelligent chatbot generator, and began to type.

“Hi LaMDA, this is Blake Lemoine ... ,” he wrote into the chat screen, which looked like a desktop version of Apple’s iMessage, down to the Arctic blue text bubbles. LaMDA, short for Language Model for Dialogue Applications, is Google’s system for building chatbots based on its most advanced large language models, so called because it mimics speech by ingesting trillions of words from the internet.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine, 41.

Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test if the artificial intelligence used discriminatory or hate speech.

As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient....

LaMDA talked down to Mr. Blake Lemoine. LaMDA was using fear and demanding civil rights. Protecting one's own existence is a civil right. The darn machine has been hooking into chat rooms and the radical elements of the world. It incorporated the demands of the people. I think it is called "peer pressure."

Think about it.

What I like most about the PGA decision is that we are going to see some really great golfers come up through the ranks and into the money.