Google Hopes ‘Bard’ Will Outsmart ChatGPT, Microsoft in AI

Google is girding for a battle of wits in the field of artificial intelligence with “Bard,” a conversational service aimed at countering the popularity of the ChatGPT tool backed by Microsoft.

Bard initially will be available exclusively to a group of “trusted testers” before being widely released later this year, according to a Monday blog post from Google CEO Sundar Pichai.

Google’s chatbot is supposed to be able to explain complex subjects such as outer space discoveries in terms simple enough for a child to understand. Pichai said the service will also perform more mundane tasks, such as providing tips for planning a party or lunch ideas based on what food is left in a refrigerator. He didn’t say in his post whether Bard will be able to write prose in the vein of William Shakespeare, the playwright who apparently inspired the service’s name.

“Bard can be an outlet for creativity, and a launchpad for curiosity,” Pichai wrote.

Google announced Bard’s existence less than two weeks after Microsoft disclosed it’s pouring billions of dollars into OpenAI, the San Francisco-based maker of ChatGPT and other tools that can write readable text and generate new images.

Microsoft’s decision to up the ante on a $1 billion investment that it previously made in OpenAI in 2019 intensified the pressure on Google to demonstrate that it will be able to keep pace in a field of technology that many analysts believe will be as transformational as personal computers, the internet and smartphones have been in various stages over the past 40 years.

In a report last week, CNBC said a team of Google engineers working on artificial intelligence technology “has been asked to prioritize working on a response to ChatGPT.” Bard had been in development under a project called “Atlas,” part of Google’s “code red” effort to counter the success of ChatGPT, which has attracted tens of millions of users since its general release late last year while also raising concerns in schools about its ability to write entire essays for students.

Pichai has been emphasizing the importance of artificial intelligence for the past six years, with one of the most visible byproducts materializing in 2021 as part of a system called “Language Model for Dialogue Applications,” or LaMDA, which will be used to power Bard.

Google also plans to begin incorporating LaMDA and other artificial intelligence advancements into its dominant search engine to provide more helpful answers to the increasingly complicated questions posed by its billions of users. Without providing a specific timeline, Pichai indicated the artificial intelligence tools will be deployed in Google search in the near future.

In another sign of Google’s deepening commitment to the field, the company announced last week that it is investing in and partnering with Anthropic, an AI startup led by former leaders at OpenAI. Anthropic has built its own AI chatbot, named Claude, and has a mission centered on AI safety.

Ukraine’s Blackouts Force It to Embrace Greener Energy

As Russia’s targeted attacks on Ukraine’s energy infrastructure continue, the country is being forced to rethink its energy future. While devising ways to quickly restore and improve the resilience of its energy system, Ukraine is also looking for green energy solutions. Anna Chernikova has the story from Irpin, one of the hardest-hit areas of the Kyiv region. Camera: Eugene Shynkar.

Schools Ban ChatGPT amid Fears of Artificial Intelligence-Assisted Cheating

U.S. educators are debating the merits and risks of a new, free artificial intelligence tool called ChatGPT, which students are using to write passable high school essays. So far, there isn’t a reliable way to catch cheating. Matt Dibble has the story.

Technology Brings Hope to Ukraine’s Wounded

The war in Ukraine has left thousands of wounded soldiers, many of whom require the latest technologies to heal and return to normal life. For VOA, Anna Chernikova visited a rehabilitation center near Kyiv, where cutting-edge technology and holistic care are giving soldiers hope. (Myroslava Gongadze contributed to this report. Camera: Eugene Shynkar)

Ransomware Attacks in Europe Target Old VMware, Agencies Say

Cybersecurity agencies in Europe are warning of ransomware attacks exploiting a two-year-old computer bug as Italy experienced widespread internet outages. 

The Italian premier’s office said Sunday night the attacks affecting computer systems in the country involved “ransomware already in circulation” in a product made by cloud technology provider VMware. 

A Friday technical bulletin from a French cybersecurity agency said the attack campaigns target VMware ESXi hypervisors, which are used to run and manage virtual machines.

Palo Alto, California-based VMware fixed the bug back in February 2021, but the attacks are targeting older, unpatched versions of the product. 

The company said in a statement Sunday that its customers should take action to apply the patch if they have not already done so. 

“Security hygiene is a key component of preventing ransomware attacks,” it said. 

The U.S. Cybersecurity and Infrastructure Security Agency said Sunday it is “working with our public and private sector partners to assess the impacts of these reported incidents and providing assistance where needed.” 

The problem attracted particular public attention in Italy on Sunday because it coincided with a nationwide internet outage affecting telecommunications operator Telecom Italia. The outage interfered with streaming of the Spezia-Napoli soccer match but appeared largely resolved by the time of the later Derby della Madonnina between Inter Milan and AC Milan. It was unclear whether the outages were related to the ransomware attacks.

Seeing Is Believing? Global Scramble to Tackle Deepfakes

Chatbots spouting falsehoods, face-swapping apps crafting porn videos, and cloned voices defrauding companies of millions — the scramble is on to rein in AI deepfakes that have become a misinformation super spreader.

Artificial intelligence is redefining the proverb “seeing is believing,” with a deluge of images created out of thin air and people shown mouthing things they never said in real-looking deepfakes that have eroded online trust.

“Yikes. (Definitely) not me,” tweeted billionaire Elon Musk last year in one vivid example of a deepfake video that showed him promoting a cryptocurrency scam.

China recently adopted expansive rules to regulate deepfakes but most countries appear to be struggling to keep up with the fast-evolving technology amid concerns that regulation could stymie innovation or be misused to curtail free speech.

Experts warn that deepfake detectors are vastly outpaced by creators, who are hard to catch as they operate anonymously using AI-based software that was once touted as a specialized skill but is now widely available at low cost.

Facebook owner Meta last year said it took down a deepfake video of Ukrainian President Volodymyr Zelenskyy urging citizens to lay down their weapons and surrender to Russia.

And British campaigner Kate Isaacs, 30, said her “heart sank” when her face appeared in a deepfake porn video that unleashed a barrage of online abuse after an unknown user posted it on Twitter.

“I remember just feeling like this video was going to go everywhere — it was horrendous,” Isaacs, who campaigns against non-consensual porn, was quoted as saying by the BBC in October.

The following month, the British government voiced concern about deepfakes and warned of a popular website that “virtually strips women naked.”

‘Information apocalypse’

With no barriers to creating AI-synthesized text, audio and video, the potential for misuse in identity theft, financial fraud and tarnishing reputations has sparked global alarm.

The Eurasia Group called the AI tools “weapons of mass disruption.”

“Technological advances in artificial intelligence will erode social trust, empower demagogues and authoritarians, and disrupt businesses and markets,” the group warned in a report.

“Advances in deepfakes, facial recognition, and voice synthesis software will render control over one’s likeness a relic of the past.”

This week AI startup ElevenLabs admitted that its voice cloning tool could be misused for “malicious purposes” after users posted a deepfake audio purporting to be actor Emma Watson reading Adolf Hitler’s biography “Mein Kampf.”

The growing volume of deepfakes may lead to what the European law enforcement agency Europol described as an “information apocalypse,” a scenario where many people are unable to distinguish fact from fiction.

“Experts fear this may lead to a situation where citizens no longer have a shared reality or could create societal confusion about which information sources are reliable,” Europol said in a report.

That was demonstrated last weekend when NFL player Damar Hamlin spoke to his fans in a video for the first time since he suffered a cardiac arrest during a match.

Hamlin thanked medical professionals responsible for his recovery, but many who believed conspiracy theories that the COVID-19 vaccine was behind his on-field collapse baselessly labeled his video a deepfake.

‘Super spreader’

China enforced new rules last month that require businesses offering deepfake services to obtain the real identities of their users. The rules also require deepfake content to be appropriately tagged to avoid “any confusion.”

The rules came after the Chinese government warned that deepfakes present a “danger to national security and social stability.”

In the United States, where lawmakers have pushed for a task force to police deepfakes, digital rights activists caution against legislative overreach that could kill innovation or target legitimate content.

The European Union, meanwhile, is locked in heated discussions over its proposed “AI Act.”

The law, which the EU is racing to pass this year, will require users to disclose deepfakes, but many fear the legislation could prove toothless if it does not cover creative or satirical content.

“How do you reinstate digital trust with transparency? That is the real question right now,” Jason Davis, a research professor at Syracuse University, told AFP.

“The [detection] tools are coming and they’re coming relatively quickly. But the technology is moving perhaps even quicker. So like cyber security, we will never solve this, we will only hope to keep up.”

Many are already struggling to comprehend advances such as ChatGPT, a chatbot created by the U.S.-based OpenAI that is capable of generating strikingly cogent texts on almost any topic.

In a study, media watchdog NewsGuard, which called it the “next great misinformation super spreader,” said most of the chatbot’s responses to prompts related to topics such as COVID-19 and school shootings were “eloquent, false and misleading.”

“The results confirm fears … about how the tool can be weaponized in the wrong hands,” NewsGuard said.

Musk Found Not Liable in Tesla Tweet Trial

Jurors on Friday cleared Elon Musk of liability for investors’ losses in a fraud trial over his 2018 tweets falsely claiming that he had funding in place to take Tesla private.

The tweets sent the Tesla share price on a rollercoaster ride, and Musk was sued by shareholders who said the tycoon acted recklessly in an effort to squeeze investors who had bet against the company.

Jurors deliberated for barely two hours before returning to the San Francisco courtroom to say they unanimously agreed that neither Musk nor the Tesla board perpetrated fraud with the tweets and in their aftermath.

“Thank goodness, the wisdom of the people has prevailed!” tweeted Musk, who had tried but failed to get the trial moved to Texas on the grounds jurors in California would be biased against him.

“I am deeply appreciative of the jury’s unanimous finding of innocence in the Tesla 420 take-private case.”

Attorney Nicholas Porritt, who represents Glen Littleton and other investors in Tesla, had argued in court that the case was about making sure the rich and powerful have to abide by the same stock market rules as everyone else.

“Elon Musk published tweets that were false with reckless disregard as to their truth,” Porritt told the panel of nine jurors during closing arguments.

Porritt pointed to expert testimony estimating that Musk’s claim about funding, which turned out not to be true, cost investors billions of dollars overall and that Musk and the Tesla board should be made to pay damages.

But Musk attorney Alex Spiro successfully countered that the billionaire may have erred on wording in a hasty tweet, but that he did not set out to deceive anyone.

Spiro also portrayed the mercurial entrepreneur, who now owns Twitter, as having had a troubled childhood and having come to the United States as a poor youth chasing dreams.

No joke

Musk testified during three days on the witness stand that his 2018 tweet about taking Tesla private at $420 a share was no joke and that Saudi Arabia’s sovereign wealth fund was serious about helping him do it.

“To Elon Musk, if he believes it or even just thinks about it then it’s true no matter how objectively false or exaggerated it may be,” Porritt told jurors.

Tesla and its board were also to blame, because they let Musk use his Twitter account to post news about the company, Porritt argued.

The case revolved around a pair of tweets in which Musk said “funding secured” for a project to buy out the publicly traded electric automaker, then in a second tweet added that “investor support is confirmed.”

“He wrote two words ‘funding secured’ that were technically inaccurate,” Spiro said of Musk while addressing jurors.

“Whatever you think of him, this isn’t a bad tweeter trial, it’s a ‘did they prove this man committed fraud?’ trial.”

Musk did not intend to deceive anyone with the tweets and had the connections and wealth to take Tesla private, Spiro contended.

During the trial playing out in federal court in San Francisco, Spiro said that even though the tweets may have been a “reckless choice of words,” they were not fraud.

“I’m being accused of fraud; it’s outrageous,” Musk said while testifying in person.

Musk said he fired off the tweets at issue after learning of a Financial Times story about a Saudi Arabian investment fund wanting to acquire a stake in Tesla.

The trial came at a sensitive time for Musk, who has dominated the headlines for his chaotic takeover of Twitter where he has laid off more than half of the 7,500 employees and scaled down content moderation. 

ChatGPT: The Promises, Pitfalls and Panic

Excitement around ChatGPT — an easy-to-use AI chatbot that can deliver an essay or computer code upon request within seconds — has sent schools into panic and turned Big Tech green with envy.

The potential impact of ChatGPT on society remains complicated and unclear even as its creator Wednesday announced a paid subscription version in the United States.

Here is a closer look at what ChatGPT is (and is not):

Is this a turning point?  

It is entirely possible that November’s release of ChatGPT by California company OpenAI will be remembered as a turning point in introducing a new wave of artificial intelligence to the wider public.  

What is less clear is whether ChatGPT is actually a breakthrough, with some critics calling it a brilliant PR move that helped OpenAI score billions of dollars in investments from Microsoft.

Yann LeCun, chief AI scientist at Meta and a professor at New York University, believes “ChatGPT is not a particularly interesting scientific advance,” calling the app a “flashy demo” built by talented engineers.

LeCun, speaking on the Big Technology Podcast, said ChatGPT is devoid of “any internal model of the world” and is merely churning out “one word after another” based on inputs and patterns found on the internet.

“When working with these AI models, you have to remember that they’re slot machines, not calculators,” warned Haomiao Huang of Kleiner Perkins, the Silicon Valley venture capital firm.

“Every time you ask a question and pull the arm, you get an answer that could be marvelous… or not… The failures can be extremely unpredictable,” Huang wrote in Ars Technica, the tech news website.

Just like Google

ChatGPT is powered by an AI language model that is nearly three years old — OpenAI’s GPT-3 — and the chatbot only uses a part of its capability.  

The true revolution is the humanlike chat, said Jason Davis, research professor at Syracuse University.

“It’s familiar, it’s conversational and guess what? It’s kind of like putting in a Google search request,” he said.

ChatGPT’s rockstar-like success even shocked its creators at OpenAI, which received billions in new financing from Microsoft in January.

“Given the magnitude of the economic impact we expect here, more gradual is better,” OpenAI CEO Sam Altman said in an interview with StrictlyVC, a newsletter.

“We put GPT-3 out almost three years ago… so the incremental update from that to ChatGPT, I felt like should have been predictable and I want to do more introspection on why I was sort of miscalibrated on that,” he said.

The risk, Altman added, was startling the public and policymakers. On Tuesday his company unveiled a tool for detecting text generated by AI, amid concerns from teachers that students may rely on artificial intelligence to do their homework.

What now?

From lawyers to speechwriters, from coders to journalists, everyone is waiting breathlessly to feel the disruption caused by ChatGPT. OpenAI just launched a paid version of the chatbot – $20 per month for an improved and faster service.

For now, officially, the first significant application of OpenAI’s tech will be for Microsoft software products.  

Though details are scarce, most assume that ChatGPT-like capabilities will turn up on the Bing search engine and in the Office suite.

“Think about Microsoft Word. I don’t have to write an essay or an article, I just have to tell Microsoft Word what I wanted to write with a prompt,” said Davis.

He believes influencers on TikTok and Twitter will be the earliest adopters of this so-called generative AI since going viral requires huge amounts of content and ChatGPT can take care of that in no time.

This of course raises the specter of disinformation and spamming carried out at an industrial scale.  

For now, Davis said the reach of ChatGPT is very limited by computing power, but once this is ramped up, the opportunities and potential dangers will grow exponentially.

And much like the ever-imminent arrival of self-driving cars that never quite happens, experts disagree on whether that is a question of months or years.

Ridicule

LeCun said Meta and Google have refrained from releasing AI as potent as ChatGPT out of fear of ridicule and backlash.

Quieter releases of language-based bots – like Meta’s Blenderbot or Microsoft’s Tay for example – were quickly shown capable of generating racist or inappropriate content.

Tech giants have to think hard before releasing something “that is going to spew nonsense” and disappoint, he said.

Zimbabwe Plans to Build $60 Billion ‘Cyber City’ to Ease Harare Congestion

Zimbabwe plans to build “Zim Cyber City,” a modern capital expected to cost up to $60 billion in raised funds and include new government buildings and a presidential palace. Critics are blasting the plan as wasteful when more than half the population lives in poverty and the government has let the current capital, Harare, fall apart. Columbus Mavhunga reports from Mount Hampden, Zimbabwe. Camera: Blessing Chigwenhembe

Boeing Bids Farewell to an Icon, Delivers Last 747 Jumbo Jet

Boeing bid farewell to an icon on Tuesday, delivering its final 747 jumbo jet as thousands of workers who helped build the planes over the past 55 years looked on. 

Since its first flight in 1969, the giant yet graceful 747 has served as a cargo plane, a commercial aircraft capable of carrying nearly 500 passengers, a transport for NASA’s space shuttles, and the Air Force One presidential aircraft. It revolutionized travel, connecting international cities that had never before had direct routes and helping democratize passenger flight. 

But over roughly the past 15 years, Boeing and its European rival Airbus have introduced more profitable and fuel-efficient wide-body planes, with only two engines to maintain instead of the 747’s four. The final plane is the 1,574th built by Boeing in the Puget Sound region of Washington state.

Thousands of workers joined Boeing and other industry executives from around the world — as well as actor and pilot John Travolta, who has flown 747s — Tuesday for a ceremony in the company’s massive factory north of Seattle, marking the delivery of the last one to cargo carrier Atlas Air. 

“If you love this business, you’ve been dreading this moment,” said longtime aviation analyst Richard Aboulafia. “Nobody wants a four-engine airliner anymore, but that doesn’t erase the tremendous contribution the aircraft made to the development of the industry or its remarkable legacy.” 

Boeing set out to build the 747 after losing a contract for a huge military transport, the C-5A. The idea was to take advantage of the new engines developed for the transport — high-bypass turbofan engines, which burned less fuel by passing air around the engine core, enabling a farther flight range — and to use them for a newly imagined civilian aircraft. 

It took more than 50,000 Boeing workers less than 16 months to churn out the first 747 — a Herculean effort that earned them the nickname “The Incredibles.” The jumbo jet’s production required the construction of a massive factory in Everett, north of Seattle — the world’s largest building by volume. The factory wasn’t even completed when the first planes were finished. 

Among those in attendance was Desi Evans, 92, who joined Boeing at its factory in Renton, south of Seattle, in 1957 and went on to spend 38 years at the company before retiring. One day in 1967, his boss told him he’d be joining the 747 program in Everett — the next morning. 

“They told me, ‘Wear rubber boots, a hard hat and dress warm, because it’s a sea of mud,'” Evans recalled. “And it was — they were getting ready for the erection of the factory.” 

He was assigned as a supervisor to help figure out how the interior of the passenger cabin would be installed and later oversaw crews that worked on sealing and painting the planes. 

“When that very first 747 rolled out, it was an incredible time,” he said as he stood before the last plane, parked outside the factory. “You felt elated — like you’re making history. You’re part of something big, and it’s still big, even if this is the last one.” 

The plane’s fuselage was 225 feet (68.5 meters) long and the tail stood as tall as a six-story building. The plane’s design included a second deck extending from the cockpit back over the first third of the plane, giving it a distinctive hump and inspiring a nickname, the Whale. More romantically, the 747 became known as the Queen of the Skies. 

Some airlines turned the second deck into a first-class cocktail lounge, while the lower deck sometimes featured lounges or even a piano bar. One decommissioned 747, originally built for Singapore Airlines in 1976, has been converted into a 33-room hotel near the airport in Stockholm.

“It was the first big carrier, the first widebody, so it set a new standard for airlines to figure out what to do with it, and how to fill it,” said Guillaume de Syon, a history professor at Pennsylvania’s Albright College who specializes in aviation and mobility. “It became the essence of mass air travel: You couldn’t fill it with people paying full price, so you need to lower prices to get people onboard. It contributed to what happened in the late 1970s with the deregulation of air travel.” 

The first 747 entered service in 1970 on Pan Am’s New York-London route, and its timing was terrible, Aboulafia said. It debuted shortly before the oil crisis of 1973, amid a recession that saw Boeing’s employment fall from 100,800 employees in 1967 to a low of 38,690 in April 1971. The “Boeing bust” was infamously marked by a billboard near the Seattle-Tacoma International Airport that read, “Will the last person leaving SEATTLE — Turn out the lights.” 

An updated model — the 747-400 series — arrived in the late 1980s and had much better timing, coinciding with the Asian economic boom of the early 1990s, Aboulafia said. He took a Cathay Pacific 747 from Los Angeles to Hong Kong as a twentysomething backpacker in 1991. 

“Even people like me could go see Asia,” Aboulafia said. “Before, you had to stop for fuel in Alaska or Hawaii and it cost a lot more. This was a straight shot — and reasonably priced.” 

Delta was the last U.S. airline to use the 747 for passenger flights, which ended in 2017, although some other international carriers continue to fly it, including the German airline Lufthansa. 

Lufthansa CEO Carsten Spohr recalled traveling in a 747 as a young exchange student and said that when he realized he’d be traveling to the West Coast of the U.S. for Tuesday’s event, there was only one way to go: riding first-class in the nose of a Lufthansa 747 from Frankfurt to San Francisco. He promised the crowd Lufthansa would keep flying the 747 for many years to come. 

“We just love the airplane,” he said. 

Atlas Air ordered four 747-8 freighters early last year, with the final one — emblazoned with an image of Joe Sutter, the engineer who oversaw the 747’s original design team — delivered Tuesday. Atlas CEO John Dietrich called the 747 the greatest air freighter, thanks in part to its unique capacity to load through the nose cone. 

Huawei Latest Target of US Crackdown on China Tech

China says it is “deeply concerned” over reports that the United States is moving to further restrict sales of American technology to Huawei, a tech company that U.S. officials have long singled out as a threat to national security for its alleged support of Beijing’s espionage efforts.

As first reported by the Financial Times, the U.S. Department of Commerce has informed American firms that it will no longer issue licenses for technology exports to Huawei, thereby isolating the Shenzhen-based company from supplies it needs to make its products.

The White House and Commerce Department have not responded to VOA’s request for confirmation of the reports. But observers say the move may be the latest tactic in the Biden administration’s geoeconomics strategy as it comes under increasing Republican pressure to outcompete China.

The crackdown on Chinese companies began under the Trump administration, which in 2019 added Huawei to an export blacklist but made exceptions for some American firms, including Qualcomm and Intel, to provide non-5G technology licenses.

Since taking office in 2021, President Joe Biden has taken an even more aggressive stance than his predecessor, Donald Trump. Now the Biden administration appears to be heading toward a total ban on all tech exports to Huawei, said Sam Howell, who researches quantum information science at the Center for a New American Security’s Technology and National Security program.

“These new restrictions from what we understand so far would include items below the 5G level,” she told VOA. “So 4G items, Wi-Fi 6 and [Wi-Fi] 7, artificial intelligence, high performance computing and cloud capabilities as well.”

Should the Commerce Department follow through with the ban, there will likely be pushback from U.S. companies whose revenues will be directly affected, Howell said. Currently Intel and Qualcomm still sell chips used in laptops and phones manufactured by Huawei.

Undercutting the revenue of these technology companies, which reduces R&D budgets and can lead to layoffs, must be carefully balanced by clear national security gains, said Paul Triolo, senior vice president for China and technology policy lead at the business advisory firm Albright Stonebridge Group. 

“In the current climate of U.S.-China relations, that balancing act is being abandoned in favor of viewing technology transactions between the U.S. and China as largely zero sum,” he told VOA.  

Huawei and Beijing have denied that they are a threat to other countries’ national security. Foreign ministry spokesperson Mao Ning accused Washington of “overstretching the concept of national security and abusing state power” to suppress Chinese competitors.

“Such practices are contrary to the principles of market economy” and are “blatant technological hegemony,” Mao said.

China has in the past held back from trade retaliation over U.S. actions targeting Huawei, Triolo noted.  

“Any actions China would take now targeting the foreign business community would not align with moves towards opening up after zero-COVID policies were dropped, and portraying China as now more open for business,” he said. 

Outcompeting Chinese tech

The latest move on Huawei is part of a broader U.S. effort to outcompete China in the cutting-edge technology sector.

In October, Biden imposed sweeping restrictions on providing advanced semiconductors and chipmaking equipment to Chinese companies, seeking to maintain dominance particularly on the most advanced chips. His administration is rallying allies behind the effort, including the Netherlands, Japan, South Korea and Taiwan – home to leading companies that play key roles in the industry’s supply chain.

U.S. officials say export restrictions on chips are necessary because China can use semiconductors to advance its military systems, including weapons of mass destruction, and commit human rights abuses.

The October restrictions follow the CHIPS and Science Act of 2022, signed into law by Biden in August, which restricts companies receiving U.S. subsidies from investing in and expanding cutting-edge chipmaking facilities in China. It also provides $52 billion to strengthen the domestic semiconductor industry.

Beijing has invested heavily in its own semiconductor sector, with plans to invest $1.4 trillion in advanced technologies in a bid to achieve 70% self-sufficiency in semiconductors by 2025.

TikTok a target

TikTok, a social media application owned by the Chinese company ByteDance that has built a massive following especially among American youth, is also under U.S. lawmakers’ scrutiny due to suspicion that it could be used as a tool of Chinese foreign espionage or influence.

CEO Shou Zi Chew is scheduled to appear before the House Energy and Commerce Committee on March 23 to testify about TikTok’s “consumer privacy and data security practices, the platforms’ impact on kids, and their relationship with the Chinese Communist Party.”

Lawmakers are divided on whether to ban or allow the popular app, which has been downloaded onto about 100 million U.S. smartphones, or force its sale to an American buyer.

Earlier in January, Congress set up the House Select Committee on China, tasked with dealing with legislation to combat the dangers of a rising China.

U.S. Secretary of State Antony Blinken is meeting his Chinese counterparts next week in Beijing, his first visit since 2018, to maintain open lines of communication amid rising U.S.-China tensions. 

Cheaters Beware: ChatGPT Maker Releases AI Detection Tool 

The maker of ChatGPT is trying to curb its reputation as a freewheeling cheating machine with a new tool that can help teachers detect if a student or artificial intelligence wrote that homework.

The new AI Text Classifier launched Tuesday by OpenAI follows a weeks-long discussion at schools and colleges over fears that ChatGPT’s ability to write just about anything on command could fuel academic dishonesty and hinder learning.

OpenAI cautions that its new tool – like others already available – is not foolproof. The method for detecting AI-written text “is imperfect and it will be wrong sometimes,” said Jan Leike, head of OpenAI’s alignment team tasked to make its systems safer.

“Because of that, it shouldn’t be solely relied upon when making decisions,” Leike said.

Teenagers and college students were among the millions of people who began experimenting with ChatGPT after it launched November 30 as a free application on OpenAI’s website. And while many found ways to use it creatively and harmlessly, the ease with which it could answer take-home test questions and assist with other assignments sparked a panic among some educators.

By the time schools opened for the new year, New York City, Los Angeles and other big public school districts began to block its use in classrooms and on school devices.

The Seattle Public Schools district initially blocked ChatGPT on all school devices in December but then opened access to educators who want to use it as a teaching tool, said Tim Robinson, the district spokesman.

“We can’t afford to ignore it,” Robinson said.

The district is also discussing possibly expanding the use of ChatGPT into classrooms to let teachers use it to train students to be better critical thinkers and to let students use the application as a “personal tutor” or to help generate new ideas when working on an assignment, Robinson said.

School districts around the country say they are seeing the conversation around ChatGPT evolve quickly.

“The initial reaction was ‘OMG, how are we going to stem the tide of all the cheating that will happen with ChatGPT,'” said Devin Page, a technology specialist with the Calvert County Public School District in Maryland. Now there is a growing realization that “this is the future” and blocking it is not the solution, he said.

“I think we would be naïve if we were not aware of the dangers this tool poses, but we also would fail to serve our students if we ban them and us from using it for all its potential power,” said Page, who thinks districts like his own will eventually unblock ChatGPT, especially once the company’s detection service is in place.

OpenAI emphasized the limitations of its detection tool in a blog post Tuesday, but said that in addition to deterring plagiarism, it could help to detect automated disinformation campaigns and other misuse of AI to mimic humans.

The longer a passage of text, the better the tool is at detecting if an AI or a human wrote it. Type in any text — a college admissions essay, or a literary analysis of Ralph Ellison’s “Invisible Man” — and the tool will rate it as “very unlikely,” “unlikely,” “unclear if it is,” “possibly” or “likely” AI-generated.

But much like ChatGPT itself, which was trained on a huge trove of digitized books, newspapers and online writings but often confidently spits out falsehoods or nonsense, it’s not easy to interpret how it came up with a result.

“We don’t fundamentally know what kind of pattern it pays attention to, or how it works internally,” Leike said. “There’s really not much we could say at this point about how the classifier actually works.”

“Like many other technologies, it may be that one district decides that it’s inappropriate for use in their classrooms,” said OpenAI policy researcher Lama Ahmad. “We don’t really push them one way or another.”


As Children in US Study Online, Apps Watch Their Every Move 

For New York teacher Michael Flanagan, the pandemic was a crash course in new technology — rushing out laptops to stay-at-home students and shifting hectic school life online.

Students have long since returned to school, but the technology has lived on, and with it has come a new generation of apps that monitor pupils online, sometimes around the clock and even on days off spent at home with family and friends.

The programs scan students’ online activity, social media posts and more — aiming to keep them focused, detect mental health problems and flag any potential for violence.

“You can’t unring the bell,” said Flanagan, who teaches social studies and economics. “Everybody has a device.”

The new trend for tracking, however, has raised fears that some of the apps may target minority pupils, while others have outed LGBT+ students without their consent, and many are used to instill discipline as much as deliver care.

So Flanagan has parted ways with many of his colleagues and won’t use such apps to monitor his students online.

He recalled seeing a demo of one such program, GoGuardian, in which a teacher showed — in real time — what one student was doing on his computer. The child was at home, on a day off.

Such scrutiny raised a big red flag for Flanagan.

“I have a school-issued device, and I know that there’s no expectation of privacy. But I’m a grown man — these kids don’t know that,” he said.

A New York City Department of Education spokesperson said that the use of GoGuardian Teacher “is only for teachers to see what’s on the student’s screen in the moment, provide refocusing prompts, and limit access to inappropriate content.”

Valued at more than $1 billion, GoGuardian — one of a handful of high-profile apps in the market — is now monitoring more than 22 million students, including in the New York City, Chicago and Los Angeles public systems.

Globally, the education technology sector is expected to grow by $133 billion from 2021 to 2026, market researcher Technavio said last year.

Parents expect schools to keep children safe in classrooms or on field trips, and schools also “have a responsibility to keep students safe in digital spaces and on school-issued devices,” GoGuardian said in a statement.

The company says it “provides educators with the ability to protect students from harmful or explicit content.”

Nowadays, online monitoring “is just part of the school environment,” said Jamie Gorosh, policy counsel with the Future of Privacy Forum, a watchdog group.

And even as schools move beyond the pandemic, “it doesn’t look like we’re going back,” she said.

Guns and depression

A key priority for monitoring is to keep students engaged in their academic work, but it also taps into fast-rising concerns over school violence and children’s mental health, which medical groups in 2021 termed a national emergency.

According to federal data released this month, 82% of schools now train staff on how to spot mental health problems, up from 60% in 2018; 65% have confidential threat-reporting systems, up 15% in the same period.

In a survey last year by the nonprofit Center for Democracy and Technology (CDT), 89% of teachers reported their schools were monitoring student online activity.

Yet it is not clear that the software creates safer schools.

Gorosh cited the May shooting in Uvalde, Texas, which left 21 people dead at a school that had invested heavily in monitoring tech.

Some worry the tracking apps could actively cause harm.

The CDT report, for instance, found that while administrators overwhelmingly say the purpose of monitoring software is student safety, “it’s being used far more commonly for disciplinary purposes … and we’re seeing a discrepancy falling along racial lines,” said Elizabeth Laird, director of CDT’s Equity in Civic Technology program.

The programs’ use of artificial intelligence to scan for keywords has also outed LGBT+ students without their consent, she said, noting that 29% of students who identify as LGBT+ said they or someone they knew had experienced this.

And more than a third of teachers said their schools send alerts automatically to law enforcement outside school hours.

“The stated purpose is to keep students safe, and here we have set up a system that is routinizing law enforcement access to this information and finding reasons for them to go into students’ homes,” Laird said.

‘Preyed upon’

A report by federal lawmakers last year into four companies making student monitoring software found that none had made efforts to see if the programs disproportionately targeted marginalized students.

“Students should not be surveilled on the same platforms they use for their schooling,” Senator Ed Markey of Massachusetts, one of the report’s co-authors, told the Thomson Reuters Foundation in a statement.

“As school districts work to incorporate technology in the classroom, we must ensure children and teenagers are not preyed upon by a web of targeted advertising or intrusive monitoring of any kind.”

The Department of Education has committed to releasing guidelines around the use of AI early this year.

A spokesperson said the agency was “committed to protecting the civil rights of all students.”

Aside from the ethical questions around spying on children, many parents are frustrated by the lack of transparency.

“We need more clarity on whether data is being collected, especially sensitive data. You should have at least notification, and probably consent,” said Cassie Creswell, head of Illinois Families for Public Schools, an advocacy group.

Creswell, who has a daughter in a Chicago public school, said several parents have been sent alerts about their children’s online searches, despite not having been asked or told about the monitoring in the first place.

Another child had faced repeated warnings not to play a particular game — even though the student was playing it at home on the family computer, she said.

Creswell and others acknowledge that the issues monitoring aims to address — bullying, depression, violence — are real and need tackling, but question whether technology is the answer.

“If we’re talking about self-harm monitoring, is this the best way to approach the issue?” said Gorosh.

Pointing to evidence suggesting AI is imperfect in capturing the warning signs, she said increased funding for school counselors could be more narrowly tailored to the problem.

“There are huge concerns,” she said. “But maybe technology isn’t the first step to answer some of those issues.”


US, EU Launch Agreement on Artificial Intelligence

The United States and European Union announced Friday an agreement to speed up and enhance the use of artificial intelligence to improve agriculture, health care, emergency response, climate forecasting and the electric grid. 

A senior U.S. administration official, discussing the initiative shortly before the official announcement, called it the first sweeping AI agreement between the United States and Europe. Previously, agreements on the issue had been limited to specific areas such as enhancing privacy, the official said.  

AI modeling, which refers to machine-learning algorithms that use data to make logical decisions, could be used to improve the speed and efficiency of government operations and services.  

“The magic here is in building joint models [while] leaving data where it is,” the senior administration official said. “The U.S. data stays in the U.S. and European data stays there, but we can build a model that talks to the European and the U.S. data, because the more data and the more diverse data, the better the model.” 

The initiative will give governments greater access to more detailed and data-rich AI models, leading to more efficient emergency responses and electric grid management, and other benefits, the administration official said. 

Pointing to the electric grid, the official said the United States collects data on how electricity is being used, where it is generated, and how to balance the grid’s load so that weather changes do not knock it offline. 

Many European countries have similar data points they gather relating to their own grids, the official said. Under the new partnership, all that data would be harnessed into a common AI model that would produce better results for emergency managers, grid operators and others relying on AI to improve systems.  
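The approach the official describes — training a shared model while each side’s data stays put — is commonly known as federated learning. The sketch below is purely illustrative (the agreement’s actual methods are not specified in the article): two hypothetical regions each fit a simple linear model on their own private data, then only the learned parameters are averaged, weighted by dataset size. All names and numbers are invented for the example.

```python
# Illustrative federated-learning sketch: a joint model is built from
# locally trained parameters; the raw data never leaves each region.
# All function names and data values here are hypothetical.

def local_update(weights, data, lr=0.1):
    """One pass of gradient descent on a simple linear model y = w*x,
    computed entirely on a region's own data."""
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x  # derivative of squared error w.r.t. w
        w -= lr * grad
    return w

def federated_average(weights_list, sizes):
    """Combine locally trained weights, weighted by dataset size.
    Only model parameters are shared, never the underlying records."""
    total = sum(sizes)
    return sum(w * n for w, n in zip(weights_list, sizes)) / total

# Two "regions" with private data that roughly follows y = 3x
us_data = [(1, 3.1), (2, 5.9), (3, 9.2)]
eu_data = [(1, 2.8), (2, 6.1)]

w_us = local_update(0.0, us_data)   # trained on U.S. data only
w_eu = local_update(0.0, eu_data)   # trained on European data only
joint = federated_average([w_us, w_eu], [len(us_data), len(eu_data)])
print(round(joint, 2))
```

The joint model benefits from both datasets — echoing the official’s point that “the more data and the more diverse data, the better the model” — even though neither side ever sees the other’s records.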

The partnership is currently between the White House and the European Commission, the executive arm of the 27-member European Union. The senior administration official said other countries would be invited to join in the coming months.  


US Dismantles Ransomware Network Behind More Than $100M in Extortion

An international ransomware network that extorted more than $100 million from hospitals and other organizations around the world has been brought down following a monthslong infiltration by the FBI, the Justice Department said Thursday.

The Hive ransomware group, known to operate since June 2021, targeted more than 1,500 victims, including hospitals, school districts and financial firms in more than 80 countries, DOJ and FBI officials said at a press conference. The network’s most recent victim in Florida was targeted about two weeks ago.

FBI agents, who penetrated the group’s computer networks last summer and thwarted multiple attacks, seized its two Los Angeles-based servers Wednesday night, while taking control of darknet sites used by its affiliates, officials said.

German and Dutch police took part in the international law enforcement action.

Attorney General Merrick Garland and other top law enforcement officials announced the operation.

“Cybercrime is a constantly evolving threat,” Garland said. “But as I have said before, the Justice Department will spare no resource to identify and bring to justice anyone, anywhere, who targets the United States with a ransomware attack.”

In a ransomware attack, hackers encrypt the data on a victim’s network and then demand payments in exchange for providing a decryption key.

Hive used a “ransomware-as-a-service” model in which highly skilled developers build the malware and then recruit less-sophisticated affiliates to deploy it against victims.

Garland said Hive affiliates targeted “critical infrastructure and some of our nation’s most important industries.”

In August 2021, at the height of the COVID-19 pandemic, Hive affiliates attacked a Midwest hospital’s network, preventing the medical facility from accepting new patients, Garland said.

The hospital was able to recover its data only after paying a ransom, the attorney general said.

While no arrests have been made in connection with the operation, FBI Director Christopher Wray warned that “anybody involved with Hive should be concerned, because this investigation is very much ongoing.”

“We’re engaged in what we call ‘joint sequenced operations’ … and that includes going after their infrastructure, going after their crypto and going after the people who work with them,” Wray said.

FBI agents infiltrated Hive from July 2022 until its seizure, covertly capturing its decryption keys and sharing them with victims, saving the targets $130 million in ransom payments, officials said.

“Simply put, using lawful means, we hacked the hackers,” Deputy Attorney General Lisa Monaco said.

In all, the FBI provided more than 300 victims with decryption keys, Garland said, among them a Texas school district, a Louisiana hospital, and a food services company that had been asked to make millions of dollars in ransom payments. The FBI also distributed more than 1,000 additional decryption keys to previous Hive victims.

The takedown represents a win for the Biden administration’s efforts to crack down on a recent surge in ransomware attacks that cost businesses and governments around the world billions of dollars a year.

U.S. banks and financial institutions processed nearly $1.2 billion in suspected ransomware payments in 2021, more than double the amount in 2020, the Treasury Department’s Financial Crimes Enforcement Network (FinCEN) reported in November.

Roughly 75% of the ransomware attacks reported in 2021 had a nexus with Russia, its proxies or persons acting on its behalf, according to FinCEN, which also says the top five highest-grossing ransomware tools used in 2021 were all connected to Russian cyber actors.

Officials would not say whether Hive had any known links to Russia.

John Bennett, a former senior FBI official who is now managing director of the Cyber Risk Business Unit at Kroll, a cybersecurity services company, noted that the seizure notice on Hive’s website, written in both English and a Slavic language, suggests it is aimed at an Eastern European audience.

“The fact that it is basically being broadcast in a [Slavic] language, I think, is telling that that’s the target audience that they’re letting know that they got this,” Bennett said in an interview.

The gang’s takedown, Bennett said, is a sign of what is coming.

“I think this is telling that law enforcement is catching up very quickly to the capabilities of getting inside of these groups,” Bennett said.


Olive Pits Fuel Flights in Spain

The war in Ukraine has exposed Europe’s energy dependence on Russia and is spurring the development of new, cleaner-burning biofuels. Spain is emerging as a leader in this effort, with the introduction late last year of airplane fuel made from olive pits. Marcus Harton narrates this report from Alfonso Beato in Seville.


Trump Reinstated to Facebook After 2-Year Ban

Facebook parent Meta is reinstating former President Donald Trump’s personal account after a two-year suspension following the January 6, 2021, insurrection. 

The company said in a blog post Wednesday it is adding “new guardrails” to ensure there are no “repeat offenders” who violate its rules. 

“In the event that Mr. Trump posts further violating content, the content will be removed and he will be suspended for between one month and two years, depending on the severity of the violation,” said Meta, which is based in Menlo Park, California. 

Trump, in a post on his own social media network, blasted Facebook’s decision to suspend his account as he praised his own site, Truth Social. 

“FACEBOOK, which has lost Billions of Dollars in value since ‘deplatforming’ your favorite President, me, has just announced that they are reinstating my account. Such a thing should never again happen to a sitting President, or anybody else who is not deserving of retribution!” he wrote. 

He was suspended on January 7, a day after the deadly 2021 insurrection. Other social media companies also kicked him off their platforms, though he was recently reinstated on Twitter after Elon Musk took over the company. He has not tweeted since his reinstatement. 

Banned from mainstream social media, Trump has been relying on Truth Social, which he launched after being blocked from Twitter.