Why I'm naming my startup 'LinguiMe' (and other terrible decisions)
No, I'm not serious about LinguiMe.
Naming a company too early is a waste of time_
One counterproductive pattern I’ve noticed my mind slipping back into regularly is what I call jumping-the-damn-gun syndrome.
In today’s episode of this phenomenon, I waste brain cycles thinking of a company name for the language learning app I’ve been building, before the app has even been in anyone’s hands. It’s foolhardy to spend time (and more importantly, brain cycles) on this because the odds are overwhelmingly against the name having any meaning whatsoever at this stage. There’s a practically zero chance of the “company” succeeding when the product hasn’t been used by anyone.
The lesson here is that thinking of a great company or product name is more about fun and less about utility. It may just be a founder’s version of tasting the sweetness of a daydream. We’re better off spending the time building and getting untitled apps into the hands of potential customers.
Anyway, I’ll share two name ideas I came up with for my AI app for serious language learners. These are the ones I’m happy to part with:
LinguiMe - a play on linguine (the pasta) + language + me
Language Friend
Terrible, I know. You can have ’em if you like; no hard feelings if they become billion-dollar brands.
/ / /
A word about words_
I’ve been head over heels for dictation ever since I bought Lenny’s Product Pass, which included a year’s subscription to a great speech-to-text transcription product called Wispr Flow.
Wispr Flow is a cloud-based service that lets you speak words into existence, and what impresses me is:
The fact that it gets 99% of the words I speak correctly into text
The first-class support on mobile, enabling, for the first time, the ability to transcend two-thumbed typing while on the go (a game-changer for me as someone who has copious ideas on the road)
The fact that it transcribes with formatting (bullet points) and correction (“Build a web app - no, a mobile-optimised web app - for serious language learners” becomes “Build a mobile-optimised web app for serious language learners”), and usually does so with less than 1 second of latency
It is an amazing piece of engineering and a welcome evolution in the way we use computers. It’s particularly useful for builders who are prompting AI to build / fix / plan stuff for 7 hours a day. Through extensive use over the last 1-2 months, I’d say I’ve cut the time it takes me to get instructions down as text by a factor of 3-5 (a 67-80% reduction). That’s significant.
But.
But -- and this took me a while to realise -- dictating does not equate to faster typing. There’s a gulf between typing words and dictating words that become typed; one turns ready thoughts into words, the other turns rough thoughts into less rough thoughts.
I noticed that I’ve been writing a lot less ever since I started using products like Wispr Flow (paid) and MacWhisper (free). In hindsight, I know why. I’d mistaken dictation for a replacement for writing altogether, which is a rookie but fatal mistake, because writing is thinking. I wrote in my notes:
Dictation, “writing” by voice, is a different thing from writing by typing. Yes, both involve thinking, but dictation is more “fire off the top of my head” versus typing, which is slower, more deliberate. Typing (or handwriting, which is TOO slow for me) is more like “considering what’s in my head, then translating into words.”
I spoke to a friend who’s trying to found his own company in the AI content creation space, and he asked me a question: “If we could create an AI that actually, truly writes in your tone of voice, would you use it?”
To this I replied no, I wouldn’t, although I can imagine certain kinds of people who would. I’m simply not busy enough to need someone producing written material on my behalf. I’m not busy enough to give up writing myself, because for me, writing is thinking.
Writing is thinking. That was the aha moment that I had forgotten.
Hence, I’m writing this by hand, without any AI transcribing speech to text or doing autonomous thinking in my tone of voice.
/ / /
AI is likely to be our last invention, because unlike all other tools humans have invented, this one can invent things itself.
We’re inventing an inventor, and there is a very high probability that this inventor will be more intelligent than us.
This likely means AI will take over most, if not all, jobs once it surpasses human intelligence, because the AI (and its copies) will design, manufacture, and enable ways of doing things that are economically more productive than the way humans currently do them.
When drivers -- a profession that employs a large percentage of the human population -- lose their jobs to autonomous AI cars, governments won’t be able to retrain them for another industry to keep them occupied and productive, because if 99% of jobs are doable (and doable more cheaply) by AI, there won’t be any fields left for humans to go into.
A lot of these ideas came from the excellent episode of The Diary of a CEO (DOAC) podcast with Dr Roman Yampolskiy:
Doomsday prepping never looked so appealing, or so necessary.
/ / /
The riskiest enterprise of all, left in the wrong hands_
I’m simultaneously reading two books on AI:
The Singularity Is Near by Ray Kurzweil
Life 3.0 by Max Tegmark (MIT professor)
And, on the daily now, I use all sorts of AI products:
Claude to plan my product-building and learn new ideas
Perplexity to research myriad topics and scrape social insights
Wispr Flow for speech to text
Replit for building and extending my software products
Descript for editing my YouTube video interviews
What’s my point? Just that I feel I can no longer ignore the future: it’s already here, the pace of progress is accelerating, and that acceleration is itself accelerating. IMO it’s time to start paying attention.
I had a moment today where I thought about what the world would be like in 2030, and I wasn’t so sure it would be a brighter future, because Sam Altman and Elon Musk and “China” and some very smart people are trying to win a commercial race that is really a potential race to humanity’s destruction.
I’ll paraphrase a scene from the DOAC podcast that puts this into perspective --
Steven (host): “If I knew that drinking the water in this cup carried a 1% chance of killing me, I wouldn’t drink it, even if I knew I’d be rewarded 1 billion dollars for surviving. I want to live!”
Dr Roman (guest, literally the guy who coined the term “AI safety”): “And it’s not just you dying. It’s everyone dying. It’s not for you to decide [whether to drink the water or not].”
Drinking the water is the metaphor for building towards AGI and beyond (i.e. ASI). And the more you read up on AI, the more you realise that the chance of ASI ending humanity is much higher than the hypothetical 1% attached to Steven’s cup of water.
I’m letting AI safety take up more of my headspace, and so should you.