As a part of my job search regimen, I am posting 2-3x per week on LinkedIn to boost engagement with my profile. How lame.
I need a job though, so no shame here.
And honestly, LinkedIn is so boring, why not publish opinionated and borderline-unhinged thoughts on building software products with your friends?
I’m publishing this here to broadcast these thoughts outside of LinkedIn, and for anyone who thinks building software and “personal knowledge management systems” might be interesting.
We're calling it Context 👨‍🎨
I am working on an AI education startup idea with my friend Dror Margalit.
We're working on a prototype for an AI-driven chatbot that helps users learn how to code.
We are curious to see if "hard-prompting" ChatGPT with a framework that allows users to select their preferred learning style is an improvement over using ChatGPT off the shelf.
By "hard-prompting", I mean leveraging the OpenAI API and its ability to include context in every message sent, via the system prompt. In this case, the context is the user's preferred learning style.
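To make this concrete, here's a minimal sketch of what I mean. The learning-style descriptions and the `build_messages` helper are illustrative stand-ins, not our actual prompts:

```python
# Illustrative "hard-prompting" sketch: the style descriptions below are
# made up for this example, not our real prompt framework.

LEARNING_STYLES = {
    "technical": (
        "Explain concepts precisely, with runnable code snippets and "
        "references to documentation."
    ),
    "expressionistic": (
        "Explain concepts through stories and analogies before showing "
        "any code."
    ),
}

def build_messages(style: str, user_question: str) -> list[dict]:
    """Prepend the user's chosen learning style as a system message,
    so it travels with every single request."""
    return [
        {
            "role": "system",
            "content": f"You are a coding tutor. {LEARNING_STYLES[style]}",
        },
        {"role": "user", "content": user_question},
    ]

# Sending this through the OpenAI API would look roughly like:
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(
#       model="gpt-4",
#       messages=build_messages("technical", "What is a closure?"),
#   )
```

The point is that the style choice isn't a one-off instruction the user has to keep repeating; it rides along with every message.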
I first came across this idea when looking for ways to prompt the OpenAI LLM to give me better feedback when I am coding (and simultaneously still learning new things about coding and software development).
Specifically, I tried using the "Mr. Ranedeer" prompt and was somewhat pleased with the results. It's clunky, but having the ability to switch between a more technical learning style and a more "expressionistic", story- and analogy-driven learning style was kind of an "a-ha" moment.
After many alternating days of delight and frustration using OpenAI's LLM, my experience with Mr. Ranedeer made me realize that OpenAI's LLM could be improved...a lot.
One thing I believe: there is room for LLMs to really "niche down" and serve a specific audience with specific needs and wants.
I also believe that the typical SaaS UX model of "we reserve the right to change this software at any time" is a flawed one. It harms users and, ultimately, companies, because users lose trust in a product when their favorite feature is constantly altered, or even disappears entirely.
Everyone is building a stupid AI startup
Anyways...
Yes, everyone and their cousin is building an AI startup.
Yes, most of these are seemingly lame wrappers around the OpenAI API.
Yes, we are building an AI thing that is similar to lots of other products out there.
None of them are that great, imo.
I actually use ChatGPT a ton, but I often find it lacking in terms of...
UX, actually.
There are lots of things, but mostly my user experience of it is frustrating because the OpenAI LLM's performance is not consistent.
Some days, the AI is helpful and consistently gives me high-quality output and code samples.
Other days, it spits out low-quality, obvious lists of things to try instead of giving me code snippets and guidance on implementation.
So: we are building this thing, which is essentially another OpenAI-wrapper app, and we are making a bet that we can improve the UX of using OpenAI with persistent context, giving users the ability to choose their own learning style, along with other similar parameters.
CONSISTENT CONTEXT
While we are using the OpenAI API for our prototype, in the future we will experiment with open-source models such as Mistral, as well as with a vector DB to give the AI richer and more consistent context about the user and their preferred learning style.
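The vector-DB idea can be sketched in a few lines. This is a toy, assumption-heavy illustration: the `embed()` function below is a fake character-frequency "embedding" purely for demonstration, where a real app would call an embedding model (OpenAI's, or an open-source one) and store vectors in an actual vector database.

```python
import math

# Toy sketch of retrieval for "consistent context": embed notes about
# the user (preferred style, past struggles), then pull the most
# relevant ones into each prompt. embed() is a fake stand-in.

def embed(text: str) -> list[float]:
    # Fake character-frequency "embedding", for illustration only.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_context(query: str, notes: list[str], k: int = 2) -> list[str]:
    """Return the k notes most similar to the query, to prepend to a prompt."""
    q = embed(query)
    ranked = sorted(notes, key=lambda n: cosine(embed(n), q), reverse=True)
    return ranked[:k]
```

The retrieved notes would then get folded into the system prompt, so the AI sees the same context about the user on every request instead of starting from scratch.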
Two key words here: CONSISTENT CONTEXT.
OpenAI's LLM is not consistent, and frankly, it doesn't seem like they care that much about including valuable context about their users' wants and experiences that would improve the product 10x or 100x.
I do not claim to know everything about the AI education space.
I have no illusions that the first prototype we build will be a learning experience.
And frankly, I don't care that what we're building is similar to a lot of other apps out there.