LexisNexis launches a 'hallucination-free' legal AI. LexisNexis has launched Lexis+ AI — and notably, it's promising "linked hallucination-free legal citations." That's a big deal in the legal profession, which has seen embarrassing instances of lawyers citing cases hallucinated by models such as ChatGPT. Still, it's important to understand the limits of the "hallucination-free" claim. As best I can tell, LexisNexis is promising only to provide links to genuine sources. Users will still be on the hook for verifying that Lexis+ AI has accurately represented what those sources say.
What does President Biden's AI order mean for healthcare? Early industry reactions to last week's AI executive order are decidedly muted, with stakeholders telling Modern Healthcare that "[t]he devil is going to be in the million details." In truth, the order doesn't do much: Mostly, it establishes commissions and kicks off studies. It requires, for instance, that HHS stand up an AI Task Force to create a "strategic plan" on "responsible deployment and use of AI." Still, even if the executive order itself has limited impact, it signals a new era of focus on AI in Washington.
Related: For a different presidential perspective on artificial intelligence, former President Barack Obama just shared his AI reading list.
Olive AI is winding down its operations. Olive AI, a healthcare startup that once achieved "unicorn" status with a reported $4 billion valuation, is selling off its core assets and shutting down operations. At its peak, Olive's enterprise AI was reportedly embedded in more than 900 hospitals, and the company had hoped to use AI and machine learning to improve revenue cycle and other operations. Although Olive's AI worked very differently from the generative AI tools driving the current wave of industry enthusiasm, its downfall illustrates that early hype for AI technologies is no guarantee of long-term success in healthcare.
To get better responses, play to your AI's (non-existent) emotions? A new study finds that leading large language models (LLMs), including ChatGPT, produce more accurate and informative results when prompted with "emotional stimuli," such as, "This is very important to my career," or even "Believe in your abilities and strive for excellence."
Analysis: I previously shared a separate study finding LLMs perform better when advised to "take a deep breath." This all sounds deeply weird — after all, computers don't have lungs! — but remember that LLMs were trained to replicate the patterns in billions of human-written words. So if humans think more effectively after being told to breathe deeply or to "strive for excellence," it makes sense LLMs would copy our behavior.
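The technique in the study is simpler than it sounds: the "emotional stimulus" is just appended to the task prompt before it's sent to the model. Here's a minimal sketch of that pattern — the stimulus strings come from the study as quoted above, but the function name, the example prompt, and the surrounding scaffolding are purely illustrative, not the study's actual code.

```python
# Illustrative sketch of "emotional stimuli" prompting: append a short
# motivational sentence to the base prompt before sending it to an LLM.
# The stimuli below are the examples quoted from the study; the rest is
# hypothetical scaffolding for demonstration.

EMOTIONAL_STIMULI = [
    "This is very important to my career.",
    "Believe in your abilities and strive for excellence.",
]

def add_emotional_stimulus(prompt: str, stimulus: str) -> str:
    """Return the base prompt with an emotional stimulus appended."""
    return f"{prompt} {stimulus}"

base = "Summarize the key findings of this clinical trial."
augmented = add_emotional_stimulus(base, EMOTIONAL_STIMULI[0])
print(augmented)
```

The augmented string is what you'd pass to the model in place of the original prompt; per the study, no other change to the request is needed.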
Artificial intelligence just saved the last Beatles song. The Beatles just dropped a new track, which is an odd sentence to write. The new song, "Now and Then," features a vocal from John Lennon salvaged using AI from a decades-old, low-quality recording, combined with more recent recordings from George Harrison — who died in 2001 — and surviving bandmates Paul McCartney and Ringo Starr. (We could debate whether the algorithm used to isolate Lennon's voice truly qualifies as "artificial intelligence" … or we could just listen to a new Beatles song. I choose the latter!)