Alt + E S V

Monki Gras Recap, Day 2

Share via Twitter Share via Facebook Share via Linkedin Share via Reddit

We’re recapping the talks presented at the 2024 Monki Gras. Below is a summary of the talks from the final day of the conference. A summary of the talks from the first day is available here.


Stochastic Children by Dormain Drewitz and Rachel Stephens

Drewitz and Stephens (oh hai!) explored the overlap between effectively prompting as parents and prompting an LLM. Can we think like parents in how we approach our interactions with LLMs, and can LLMs show us anything about being better parents?

We explored the ways we get our children to do things and whether it translates to LLMs. We also highlighted the tradeoffs that come from spending the time (sometimes years!) to train the model/child with necessary context so your prompts can be brief and one-shot, as compared to providing in-depth instructions in the moment.

Also: the power of sticker charts!

Tweet screenshot (click link for full link to tweet)

A week’s work in 6 hours – how LLMs will revolutionise the study of humanities by Ian Miell

Miell used LLMs to rewrite one of his essays from university without referencing the original essay and without re-reading the texts. The prompt was ‘How was new Liberalism different from the old Liberalism?’

Miell spoke about the power and limitations of LLMs. While there were definitely limitations in the LLMs ability to do deeper thinking, it was an excellent tool reducer and learning accelerator. It can’t write a decent university-level history essay yet, but it shone at helping plan the essay writing.

He also included such insightful references to Douglas Adams and Garry Kasparov.

Tweet screenshot (click link for full link to tweet)

Tweet screenshot (click link for full link to tweet)

Miell’s slides are available here.

Cake: an open source tale by Julia Ferraioli

Ferraioli shared the primary tenants of open source:

  • can I learn about the code and examine its inputs and outputs?
  • can I inspect it and find bugs?
  • can I change it to suit my purposes?
  • can I give it to others?

People have expectations of what they can do when something is called ‘open source,’ and the LLM community is not always meeting these expectations with their current language.

The problem we sometimes face with LLMs is that 1. people use ‘open source’ inconsistently and 2. it’s not always clear how to apply some of the core principles of open source to key LLM concepts (for example, what does it mean to share how the input data was cleaned in an open source way?) This is a challenging problem, but one we need to pay attention to.

Tweet screenshot (click link for full link to tweet)

Lines in the sand by Jim Boulton

Boulton is a tech historian who tells stories about unsung tech heros using comics. Boulton shared his experience creating a comic book about Lynn Conway using LLM prompts to generate the images. Each image required hundreds of iterations to create a cohesive look and feel throughout the comic.

Tweet screenshot (click link for full link to tweet)

I love the above quote, but Boulton’s talk also had one of my favorite lines from the event: “I discovered AI’s not very good at fingers, but then I talked to an illustrator and they said ‘illustrators aren’t really good at fingers, either.’”

His comics on Lynn Conway and Alan Emtage are delightful and can be purchased here.

Tweet screenshot (click link for full link to tweet)

“I Didn’t Grow Up Speaking Code”: GitHub Copilot as a Programming-as-a-Second-Language Tool by Kristen Foster-Marks

Foster-Marks’ talk was a delightful dive into what it means to master coding as a language. Her talk drew parallels to what second language acquisition; it’s not enough to be able to code, but we also have to be able to “speak code” to describe what it’s doing. What are some examples of when we “speak code”?

  • interviews
  • PRs and code reviews
  • paired programming
  • live streaming
  • pseudo coding
  • explaining things to business stakeholders
  • rubber ducking

Foster-Marks then explored the consequences of poor “fluency” for both individuals and teams, and then discussed how AI code generation tools like GitHub Copilot can help enhance this skill.

Read more about Foster-Mark’s talk here

Tweet screenshot (click link for full link to tweet)

From Text to Flaws: vulnerabilities in applications with Generative AI and LLMs by Paul Molin

Molin spoke about prompt injections and how to ameliorate their harms. As the popularity of GenAI grows, we need to understand how it can be exploited and what options we have to protect against manipulation or data exfiltration.

Tweet screenshot (click link for full link to tweet)

Some potential steps to limit vulnerabilities include:

  1. Use LLM to analyze input/output
  2. Compare the input to a list of known injections
  3. Use 2 LLMs with different accesses
  4. Preflight prompts, detect injections before execution
  5. Human in the loop
  6. Canaries

Tweet screenshot (click link for full link to tweet)

(That said, my favorite part of the talk was the intro where Molin compared the sizes of cats and bananas, which appealed to one of my favorite things, measuring using uncommon metrics (like giraffes/hour)

Tweet screenshot (click link for full link to tweet)

Working with ChatGPT by Jessica West

West shared some behind the scenes stories about creating the visual assets used for this year’s Monki Gras. West collaborated with designer Jack James to create the logos and imagery for the event, but–given the theme of the conference–the team of course used GenAI as an element of the design process.

West walked through the design team’s iterative process of brainstorming and creating with ChatGPT, and detailed the challenges of trying to bring the GenAI concepts to fruition with James’ design work.

(Also, be sure to see her infographics with pictures of James Governor as ‘beer Jesus.’)

Tweet screenshot (click link for full link to tweet)

Why prompt engineering is BS by Kyle Roche

Griptape, co-founded by Roche, focuses on creating “off-prompt” AI solutions. Their product hypothesis centers around developer tooling preferences and enterprise data privacy concerns, and thus they created a framework and opinionated deployment environment for writing AI-powered Python applications. Roche’s talk discussed a vision when AI developers don’t have to care about prompt engineering at all.

Tweet screenshot (click link for full link to tweet)

Generative AI’s Hallucination Problem and How to Fix It. by Emil Eifrem

Eifrem’s talk did a fantastic job connecting related technology concepts, as talked about his experience with graph technology as a co-founder of Neo4j related to this next generation of AI tools. He explained knowledge graphs and how they relate to retrieval augmented generation (RAG).

Tweet screenshot (click link for full link to tweet)

Eifrem called AI a once in a generation shift. While RAG can add context to an LLM, GraphRAG can unlock additional context from the knowledge graph and help LLMs avoid hallucinations.

Tweet screenshot (click link for full link to tweet)

Tweet screenshot (click link for full link to tweet)

AI for Developer Productivity by Paige Bailey

Tweet screenshot (click link for full link to tweet)

Bailey gave us a behind the scenes tour of how Google is using language models and copilots to build internally. She discussed how GenAI can reduce the friction from having an idea to implementing it, with the goal of “giving developers more good days.”

Tweet screenshot (click link for full link to tweet)

In particular I liked her examples of advances are now happening in days and weeks, not decades. I also liked her context around what 1-10M tokens actually “feels” like.

Tweet screenshot (click link for full link to tweet)


Thanks again to all speakers, sponsors, and attendees. You made the return of Monki Gras special!

No Comments

Leave a Reply

Your email address will not be published. Required fields are marked *