Everything I learned building Macro
Sam Kececi
In 4 years as CTO of Macro, this is everything* I learned building a company from nothing to something.
I learned these things mostly by doing, failing, and trying again. Some of these things I still fail at, but at least I learned them! I'm confident that you'll find something to glean, regardless of what line of work you are in.
In no particular order:
- First impressions and initial momentum are sticky.
- Like firing a cannon: once the projectile is in the air, there's not much you can do to steer it. You can course-correct on the fly, but it's hard to fight inertia.
- On the product side, people form an opinion quickly. Capture their attention and wonder immediately, and things become very easy.
- This principle is just as true on the interpersonal side.
- As I've grown as a CTO and engineering manager, a split has emerged in how my engineering team views me. Engineers who joined early saw me at my most inexperienced; those who joined late see a more confident and experienced manager.
- Those that joined late never saw the inexperienced Sam, and those that joined early always remember the inexperienced Sam.
- In theory, the solution is to throw away the past. There is no fixed self. In reality, impressions are sticky.
- Bad 1:1s are worse than useless. Bad 1:1s are just "going through the motions." They make you feel like you're in an interview.
- Good 1:1s are genuine connection. They probably aren't even labelled as "1:1s" on your calendar. They might look like going over to someone's desk and chatting or asking to go for a walk.
- There is never a "eureka" moment where it all clicks at a startup. The goal should be to keep the slope positive (taken over a wide enough window) and as steep as possible.
- There seem to be turning points when a company is viewed from the outside. Big releases, fundraising announcements. But those are simply a result of the rate at which information is shared (and what information is shared).
- Advice that you hear from a successful startup founder might not work for you.
- For any topic, I can show you two equally successful founders that have polar opposite opinions.
- "Embrace fully remote" (Linear, Vercel, Mercury) vs. "5-days in person is the only way" (Anduril, Anthropic, ScaleAI).
- Which one is "correct"? Whose advice should you follow?
- The actual "correct" answer is the one that works for you.
- When coding or designing a system, write out the highest-level pseudocode possible. Make it absurdly simple.
```
input = getUserInput();
response = sendInputToServer(input);
handleResponseAccordingly(response);
```
- Then fill in the functions and rename for clarity. Repeat this for each function recursively. Continue as many levels down as you need until it becomes trivial to complete.
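- As a sketch of what one pass of "filling in" might look like (the helper names and endpoint here are invented for illustration, not anything real):

```ts
// One level down: each top-level step becomes its own small, nameable function.
async function getUserInput(): Promise<string> {
  const raw = await promptUser("Enter a search term:");
  return raw.trim();
}

async function sendInputToServer(input: string): Promise<Response> {
  return fetch("/api/search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: input }),
  });
}

async function handleResponseAccordingly(response: Response): Promise<void> {
  if (!response.ok) {
    showError(`Request failed: ${response.status}`);
    return;
  }
  renderResults(await response.json());
}

// The next level of pseudocode, still to be filled in on the next pass:
declare function promptUser(message: string): Promise<string>;
declare function showError(message: string): void;
declare function renderResults(results: unknown): void;
```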
- I apply this same method of thinking to any task that feels too big to fit in my brain in one go (I call it my context window). It helps alleviate fear, procrastination, and decision paralysis. Making anything is literally just doing one small thing at a time. That's why Legos are so fun.
- If you're putting off going to the gym, set a goal to go and do one single rep. Chances are, once you're there, you'll find it easy (and maybe enjoyable) to keep going.
- If you're putting off writing a blog post, set a goal to write one sentence.
- An ounce of programming is worth a ton of Training and Inference.
- Because of the ease and ubiquity of language models, it becomes second nature to just chuck in data and a prompt to get an output. But some old-fashioned thinking can usually get you very far.
- The hybrid approach is almost always correct. Chuck it in a language model, but do the pre- and post-processing in a deterministic way. Format the text, strip out terms, dynamically include certain information (if statements!).
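- A rough sketch of that shape, assuming a support-ticket summary as the task, with the model call hidden behind a made-up `callModel` stub:

```ts
// Hypothetical stand-in for whatever LLM client you actually use.
declare function callModel(prompt: string): Promise<string>;

async function summarizeTicket(ticket: { title: string; body: string; internalNotes?: string }) {
  // Deterministic pre-processing: strip noise, cap length, conditionally include context.
  const body = ticket.body.replace(/\s+/g, " ").slice(0, 4000);
  const notes = ticket.internalNotes ? `\nInternal notes: ${ticket.internalNotes}` : ""; // an "if statement"!

  const raw = await callModel(
    `Summarize this support ticket in one sentence.\nTitle: ${ticket.title}\nBody: ${body}${notes}`
  );

  // Deterministic post-processing: trim, enforce a hard length limit, fall back if empty.
  const summary = raw.trim().slice(0, 280);
  return summary.length > 0 ? summary : ticket.title;
}
```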
- Consider the simple example of formatting a JSON string. Sheldon, who has been coding COBOL since 1874, writes an algorithm to parse it how he needs. He hosts a basic webservice and gives it some minimal resources – it's just running 5 lines of code.
- Billy just learned about language models and the shiny new OpenAI structured output functionality. So he uses bolt.new and the Replit Agent to spin up a webservice on a serverless Kubernetes cluster, connected to a vector database for embeddings, using the OpenAI API to pass in the JSON and get back the output. 10% of the time the model hallucinates, and 5% of the time OpenAI servers are down.
- In this case, Billy could learn a few things from ol' Sheldon. So could many "engineers" on Twitter.
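- For what it's worth, Sheldon's side of the trade is roughly this (a sketch, not his actual code):

```ts
// Roughly Sheldon's five lines: parse, reshape, and re-serialize deterministically.
function formatJson(raw: string): string {
  const parsed = JSON.parse(raw);         // fails loudly on bad input instead of hallucinating
  return JSON.stringify(parsed, null, 2); // stable output, no GPU, never "down"
}
```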
- Using RAG is a constant dilemma. Gemini has a 2 million token context window, but that doesn't necessarily mean shoving the whole thing into context is the right approach.
- How do you know* when to use RAG versus when to just throw everything into an LLM? I should clarify that in this case I define RAG to mean storing documents (of any kind) in a vector database and performing some variation of semantic similarity retrieval on those documents, then passing the results into a language model. But RAG is a generic term: Google search is "RAG" because it generates an output by retrieving things!
- The biggest problem with vector DB + RAG is that "semantic similarity" is NOT always the most sensible way to fetch data to solve your problem. For example, if I ask "What is the most important thing I can learn from this data?", the RAG query will search the embedding space for text like "learnings, insight, importance". That is not exactly what I want! There may be many amazing learnings that don't explicitly contain that text.
- The best way I currently know to get around this is to add "middleware" to your query that asks a language model to come up with some proposed "similarity searches" to fetch from the vector DB. Continuing our example, we need some understanding of the data to think about what a key learning might be. So if we know it's about dinosaurs, the similarity search could look for "diet, extinction, stegosaurus size, ...". But this requires knowledge of the data! This reduces down until we essentially land back at "just chuck it all into a language model to figure out the reasoning."
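- A sketch of that middleware, with the model and the vector store hidden behind hypothetical stubs:

```ts
// Hypothetical stubs for whatever model and vector store you actually use.
declare function callModel(prompt: string): Promise<string>;
declare function searchVectorDb(query: string, topK: number): Promise<string[]>;

async function answerWithMiddleware(question: string, dataDescription: string): Promise<string> {
  // Step 1: ask the model to propose concrete similarity searches,
  // using whatever we already know about the data (e.g. "it's about dinosaurs").
  const proposed = await callModel(
    `The corpus is: ${dataDescription}.\n` +
    `List 5 short search phrases whose matches would help answer: "${question}".\n` +
    `Return one phrase per line.`
  );
  const queries = proposed.split("\n").map((q) => q.trim()).filter(Boolean);

  // Step 2: run each proposed phrase against the vector DB and pool the results.
  const chunks = (await Promise.all(queries.map((q) => searchVectorDb(q, 3)))).flat();

  // Step 3: hand the retrieved chunks back to the model to actually answer the question.
  return callModel(`Using only these passages:\n${chunks.join("\n---\n")}\n\nAnswer: ${question}`);
}
```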
- RAG excels in three areas. The first, and most obvious, is extremely large context bases that exceed 2 million tokens; you need some way to condense them.
- The second is when the problem can be simplified down to a similarity search problem. For example, you are looking for a passage or segment that mirrors what the user inputs.
- Third, an angle that I have not seen frequently discussed, is when the quality of output will improve by omitting irrelevant data.
- When in doubt, chuck it all in Gemini.
Ok, back to the non-nerd stuff!
- "If you don't know yet what you should work on, the most important thing is to figure it out. You should not grind at a lot of hard work until you figure out what you should be working on." —Naval
- At the beginning of Macro (CoParse back then), we spent too little time thinking about whether to build something, and too much time building whatever.
- We built many things that never moved the needle. We spent a lot of time hashing out things that didn't matter.
- We probably didn't spend enough time thinking about one-way door decisions.
- We didn't spend enough time thinking about the ramifications of forking LibreOfficeKit (an existing open source docx editor) versus making our own editor from scratch. We thought forking was an inconsequential choice that would save us time. So we went with it. In reality, it was company shaping.
- Decisions that seem small can be company shaping because they create a butterfly effect. When we forked LibreOfficeKit, it changed the way we presented the product. And the way we viewed what we were building. We promised improvements to customers. Sales cycles became contingent upon things working. It became harder and harder to abandon.
- The sunk cost fallacy is a fallacy.
- It's harder than you think to not do something.
- The same goes for not building something. We built a lot of things that we should not have.
- Like many ambitious people, the compulsion towards doing something... anything!! can often be irresistible.
- It helps to have a "thing." This goes for the way you present yourself. But you can't fake it. You can't pretend to be cool and unique. That would not be cool, nor unique.
- "No one can compete with you on being you."
- Craig Federighi does this well. Charli XCX does this well. Steve Jobs did this better than everyone. They have a vibe that can be labelled as "them".
- I would often tell people, "Just Steve Jobs it!" Everyone got what I meant. If your name can be used as a verb, you've made it.
- Litmus test: if you're playing charades, can you successfully pretend to be that person? If so, that person has the special sauce.
- e.g. Sundar Pichai fails the test. How tf do you act as Sundar Pichai?
- A coworker told me: "You're going Sam-mode again!" when I got obsessed about a crazy idea. I took it as a compliment.
- When building, build the higher-level thing that can create the lower-level thing. The software equivalent of teaching a man to fish.
- For example: instead of building a hard-coded set of analytics charts, build a tool that can make charts from data (a rough sketch below).
- Instead of building a car, Henry Ford built a manufacturing process to build cars.
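- In code, the analytics example looks roughly like this (the spec shape and function names are invented for illustration):

```ts
// The lower-level thing: one hard-coded chart per question.
function renderSignupsByWeekChart() { /* ... */ }
function renderRevenueByPlanChart() { /* ... */ }

// The higher-level thing: one tool that can produce any of them from data plus a spec.
interface ChartSpec {
  kind: "line" | "bar";
  x: string;              // column to group by
  y: string;              // column to aggregate
  agg: "sum" | "count";
}

function buildChart(rows: Record<string, number | string>[], spec: ChartSpec) {
  const groups = new Map<string, number>();
  for (const row of rows) {
    const key = String(row[spec.x]);
    const value = spec.agg === "count" ? 1 : Number(row[spec.y] ?? 0);
    groups.set(key, (groups.get(key) ?? 0) + value);
  }
  return { kind: spec.kind, points: [...groups.entries()] }; // hand off to any renderer
}
```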
- Think extremes. Helpful for: finding edge-cases, load testing, deriving a framework for reality.
- Any investment that is predicated on things staying as they are is doomed to fail.
- This was inspired by the rise and fall of Peloton and Zoom stock (and many others) during COVID. People (capital markets) thought that COVID would last forever. They didn't realize that everything is impermanent.
- I was subsequently reminded of this principle in Buddhism. Nothing is permanent, things always change. Act accordingly.
- Nothing will ever be perfect. Don't let perfect be the enemy of great.
- IQ is sometimes more of a hindrance than a benefit.
- What happens when you throw away "intelligent" thinking and just do what's right in front of you?
As with anything I write - please let me know your unfiltered thoughts. Email me: sam.kececi+blog@gmail.com