Daily Briefing

AI Roundup: What the Hollywood writers' deal means for other knowledge workers


Learn what the resolution of the Hollywood writers' strike could imply for the future of AI in other knowledge industries, uncover ChatGPT's new abilities to "see," "hear," and "speak," and more, in this week's roundup of AI-related healthcare news from Advisory Board's Thomas Seay.

What the Hollywood writers' deal means for other knowledge workers. One core conflict in Hollywood’s recent writers’ strike was how, and whether, studios could use AI to write screenplays. The resolution is interesting in itself (AI can't be used in place of a writer; writers can use AI tools if they wish but can't be required to) — but this op-ed argues it also sets a precedent for future worker negotiations, as more and more white-collar workers, including those in health care, find their tasks exposed to AI automation.

ChatGPT can now see, hear, and speak. ChatGPT can now interpret and create images, as well as listen to and respond with audio. These new image-recognition capabilities are especially powerful when combined with ChatGPT's other skills. Two striking examples: A New York Times writer snapped a photo of a wordless set of furniture-assembly instructions, and ChatGPT converted it into step-by-step written directions. And an OpenAI employee, using a prelaunch version of these capabilities, sketched a webpage on a piece of paper; ChatGPT then coded a working website from the sketch.

Google's Bard now lets websites opt out of AI training. Following in the footsteps of OpenAI's ChatGPT, Google now allows website owners to block their content from being used to train Google's AI models. This "opt-out" approach seems to be emerging as an industry norm, but it's worth noting that blocking OpenAI or Google from scraping your website now won't have any effect on their already-trained, and already-released, AI models.

For the technically inclined: How to make the most of Claude's (extremely long) context window

OK, this next section is super-wonky, but if you power through it, you'll find a deep lesson in how to create more effective prompts for any large language model (LLM)—not just Claude.

All LLMs have a limited "context window," which is essentially a measure of how many words (or, technically, "tokens") they can consider as part of a single request. For ChatGPT, it's 8,000 tokens; for Anthropic's Claude, it's a whopping 100,000 tokens.
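
To make "tokens" concrete, you can inspect the conversion yourself with tiktoken, OpenAI's open-source tokenizer library. A minimal sketch (the sample sentence is just an illustration):

    import tiktoken  # OpenAI's open-source tokenizer library

    # cl100k_base is the encoding used by GPT-4-era OpenAI models
    enc = tiktoken.get_encoding("cl100k_base")

    text = "Call me Ishmael. Some years ago, never mind how long precisely..."
    tokens = enc.encode(text)

    print(f"{len(text.split())} words -> {len(tokens)} tokens")
    # Rule of thumb: a token is roughly three-quarters of an English word,
    # so an 8,000-token window holds on the order of 6,000 words.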

Models with short context windows can be troublesome because they can't fit long documents or requests in their memory. You might, for instance, ask such an LLM to summarize a long document, only to discover that it summarized only the brief section that fit into its context window.
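
The usual workaround is to chunk the document so each piece fits in the window, summarize each chunk, and then summarize the summaries. Here's a rough sketch using OpenAI's Python client; the model name and chunk size are arbitrary placeholders, and a production version would split on paragraph boundaries and count tokens rather than characters:

    from openai import OpenAI  # assumes the openai Python package, v1 or later

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def summarize(text: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4",  # placeholder; any chat model works
            messages=[{"role": "user",
                       "content": f"Summarize the following text:\n\n{text}"}],
        )
        return resp.choices[0].message.content

    def summarize_long_document(document: str, chunk_chars: int = 12_000) -> str:
        # Naive split by character count, purely for illustration
        chunks = [document[i:i + chunk_chars]
                  for i in range(0, len(document), chunk_chars)]
        partial = [summarize(chunk) for chunk in chunks]
        # Second pass: condense the per-chunk summaries into one
        return summarize("\n\n".join(partial))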

But models with long context windows pose their own challenges. If you feed Claude 100,000 tokens (say, if you're asking questions about a full-length book), it may struggle to identify the pieces of the text relevant to your request. Imagine asking a human to read the full text of Moby Dick and then, after they finish, asking, "So how many times did Melville use the word 'whale'?" Your human reader might, in some sense, remember the whole book, but they'll still struggle to answer such granular questions.

Claude's developer, Anthropic, just published a blog post with some handy tips. Their most intriguing idea is to ask the AI to use the first part of its response as a "scratchpad" into which it can copy particularly relevant portions of your input: the equivalent of a human taking notes each time they notice the word "whale" in the text. Doing so improves Claude's recall of relevant passages and helps it answer questions more accurately.
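
To show what that instruction might look like in practice, here's a sketch using Anthropic's Python SDK. The prompt wording, model name, and file path are my own placeholders, not Anthropic's verbatim recommendation:

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    book_text = open("moby_dick.txt").read()  # hypothetical local copy of the book

    prompt = f"""Here is the full text of a book:

    <book>
    {book_text}
    </book>

    Question: How many times, and in what contexts, does the narrator
    fixate on the whiteness of the whale?

    Before answering, copy every passage relevant to the question into a
    <scratchpad> section, quoting each one verbatim. Then, drawing only on
    the passages in your scratchpad, give your answer in an <answer> section."""

    response = client.messages.create(
        model="claude-2.1",  # any long-context Claude model
        max_tokens=2000,
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.content[0].text)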

Most of us don't write book-length AI prompts, so we aren't likely to encounter this specific problem. Still, the scratchpad is a nifty prompt engineering hack that takes advantage of LLMs' one-word-at-a-time thought process. By telling the AI to use a scratchpad, you're essentially asking it to use more words — and thus do more thinking — in its answer: to first establish context, then act on that context.

I've had similarly good results framing other requests to LLMs as multi-step processes: for instance, asking ChatGPT to first write a plan for solving a problem and then execute that plan, or asking it to draft a paragraph, then critique and rewrite its own creation. By breaking complex requests into easy-to-execute chunks, you can often nudge LLMs into producing much better results.
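
For what that looks like in code, here's a minimal plan-then-execute-then-critique sketch with OpenAI's Python client; the task text and prompt phrasing are placeholders:

    from openai import OpenAI  # assumes the openai Python package, v1 or later

    client = OpenAI()

    def ask(messages):
        resp = client.chat.completions.create(model="gpt-4", messages=messages)
        return resp.choices[0].message.content

    task = "Write a one-page explainer on prior authorization for new hires."

    # Step 1: ask for a plan only
    history = [{"role": "user",
                "content": f"Task: {task}\n\nFirst write a short, numbered plan "
                           "for how you'll approach this. Don't write the "
                           "explainer itself yet."}]
    plan = ask(history)

    # Step 2: feed the plan back and have the model execute it
    history += [{"role": "assistant", "content": plan},
                {"role": "user", "content": "Now carry out your plan, step by step."}]
    draft = ask(history)

    # Step 3: have the model critique and rewrite its own draft
    history += [{"role": "assistant", "content": draft},
                {"role": "user",
                 "content": "Critique the draft above, then rewrite it to fix "
                            "the weaknesses you found."}]
    print(ask(history))

Each call keeps the earlier exchanges in the message history, so the model's "thinking" from previous steps stays in view when it tackles the next one.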

 

