Embracing Failure – How we tested 10 different startup ideas over 10 weeks
Since 9 out of 10 startups fail, why not make 10? This is the third post in the Megacool Series about the method we used to run 10 hackathons to find the most promising startup idea.
“Don’t start a company just to start a company.”
That's the advice we ignored when we embarked on our Megacool journey, leaving our comfortable lives in Norway for the fast-paced startup Mecca in San Francisco. We were in our mid-twenties, hungry for a new startup adventure and highly naive about what lay ahead.
This is the third part of my Megacool startup journey. Here’s an overview:
Part 1: How the Megacool journey began
👉Part 3: Embracing Failure – How we tested 10 different startup ideas over 10 weeks👈 YOU ARE HERE
Part 4: Hackathon 1-3: From AI travel agent to photo management adventures
Part 5: Hackathon 4-7: The messy middle
Part 8: How we built our co-founder team
Part 9: From idea to live product
Part 10: Our bootstrapping hustle
Part 11: How we raised $1.6m in funding
Part 12: How we got acquired
Part 13: From Alpha to Acquired: The product, growth, and business model journey
Part 14: The emotional founder roller coaster
Part 15: The epilogue: Reflections on the whole journey
In the startup world, they say 9 out of 10 startups fail. But we knew failure could be a powerful teacher. Having worked together at the mobile game studio Dirtybit, my co-founder, Nicolaj Broby Petersen, and I knew, sometimes painfully, that failing together could teach us how best to work together.
The startup scene is full of dreams of becoming a unicorn, but only a few achieve it. Nobody can predict which startup idea will become successful at the idea stage.
From the games industry, we learned the value of testing ideas rigorously. To ensure that you only work on the most promising game concepts, each game idea is meticulously tested through its different phases: game concept, prototype, production, soft launch, and full (marketing) launch. The German game studio Wooga launched several successful games this way through a process they coined “The Hit Filter”1.
Knowing that experienced founders have higher success rates2, we decided to keep trying, regardless of failure. This would be my third entrepreneurial journey and Nicolaj's fourth.
"The odds of producing an influential or successful idea are a positive function of the total number of ideas generated." (Dean Simonton referenced in Adam Grant’s Originals)
It's all about trying, failing, and trying again!
Rather than going all-in on our first idea, we drew on Wooga’s Hit Filter and Lean Startup methodology3. To increase our chances of success, we embarked on ten hackathons over ten weeks during the 2015 summer.
The goal was to find the most promising concepts through prototypes, validation, and high-velocity learning cycles.
This was our strategy after the kick-off session on June 15th, 2015:
“Create value-adding services for people to be more productive and efficient with their work by prototyping 10 different concepts to identify which generates the most traction within a limited time. Then focus solely on that product and start the seed funding process.”
As the summer and weeks passed, through lots of trial, error, and iteration, we ended up with the hackathon process outlined in the image below. Future posts will address reflections on what I’d do differently today.
Each hackathon would last between 1-5 days, depending on feedback. Embracing failure was a fundamental principle. If an idea was shit, we had no qualms about returning to the drawing board and starting on a new idea.
Idea generation
During the months between that fateful cold 2015 February morning and June 7th, when we took off the training wheels and truly started our new startup adventure, we had created a backlog of very rough ideas and problems we wanted to solve.
At first, I struggled to contribute due to frequent visits from my imposter syndrome, aka my saboteur4. I've never viewed myself as a product visionary, especially compared to Nicolaj, who's amazing at spotting trends.
I often exclaimed, "This solution has to exist already!!" only to realize it didn't. I jokingly told Nicolaj to slap me the next time I'd say something like that to help me think from the standpoint of endless opportunities instead of limiting myself from the get-go. Joke aside, I had to train my idea muscle. Eventually, we contributed to the backlog under the understanding that "No idea was too stupid."
"When it comes to idea generation, quantity is the most predictable path to quality." (Adam Grant, Originals)
To further help capture ideas, I kept a notebook “tied to my hip,” where I jotted down epiphanies as they came.
The backlog continued to evolve through the summer as we gathered more and more inspiration. It was made up of problems and frustrations we were experiencing ourselves in our day-to-day lives. We would also ask people in our network, and the people we interviewed, for ideas. Further inspiration came from reading about emerging technologies, such as messaging and AI bots5. What Megacool actually became originated during an Angel Hack6 hackathon we attended.
Idea selection
When selecting ideas for the different hackathons, we gravitated towards the problems we felt ourselves. Sometimes, we gathered more data and background on the problem and solution, which helped us understand its potential if executed well. This was especially true for what Megacool became, as we drew on our experience from the games industry.
Planning
The overall plan, desired outcome/goal, and Make it/Break it criteria
We kicked off each hackathon with a planning session. That included deciding on a measurable goal or desired outcome: What information would tell us, "Hey, this idea seems to have legs!" and earn it more time? And what information would kill it, making clear that we were off track and needed to return to the drawing board, or end the hackathon and move on to the next idea?
An example of a goal was to create a prototype we could use ourselves and demo to others to gather interest.
Make it: Signals to give the idea more time:
50% of those we interview say they are willing to pay us to solve the pain/problem
Someone pays us upfront to make sure we build it
Break it: Signs that we needed to kill the idea:
Less than 50% of those we speak with acknowledge the pain/problem
No one is interested in paying to solve this problem
Assumptions
While planning, we also aligned on our assumptions. This is important because team members often assume they're on the same page, only to be surprised once they write their assumptions down: Who is the ideal customer/user profile? Who has the biggest pain point for the problem we're solving? How are they solving it today? What features are important to them?
Hypothesis
From the assumptions, we would create a hypothesis statement: We believe that (doing this / building this feature) for (these people/personas) will achieve (this outcome). We will know this is true when we see (this market feedback, quantitative measure, or qualitative insight).
An example of a hypothesis statement we used: We believe that people are willing to pay for a personal relationship management system to structure their relationships. We will know this is true when someone is willing to pay for it.
Learn
Our learning phase focused on qualitative user interviews and testing. Our goal for each Learn iteration was to speak with at least ten people before moving to the next phase. That way, we would review the feedback as a whole instead of being swayed by outlier responses. As you'll read in a future post, this was really hard not to do in practice.
Depending on the idea and where we'd likely find the ideal customer, we'd either interview people at our co-working space, on the street, or visit stores. If someone knew relevant people we should talk with, we'd ask for introductions and interview them. If we had relevant people in our network, we'd run the interview/test over a video call.
When interviewing users, we focused on the following questions:
What are you trying to get done? (To gather context)
How do you currently do this? (Analyze their workflow)
What could be better about how you do this? (To find opportunities)
We would then ask "why" up to five times to get to the bottom of each response. We also avoided leading questions and questions that would result in simple yes/no answers.
Toward the end of the interview, we would ask them:
How much would you be willing to pay to use/do this today? (To put them on the spot and truly test their eagerness)
Can we contact you when we’ve made further progress on this? (To collect contact details and to double confirm that this is a pain point for them)
Do you know of anyone else we should speak to about this? (You'd be surprised by how powerful this question is; it's the best way to find relevant users to interview)
Do you have any other pain points/problems we should try to solve? (To help fill our backlog of ideas)
Best practices that we learned along the way:
Approaching strangers was hard at first. But as we got more and more practice and challenged each other, it became easier.
It was incredibly important to smile when approaching someone. Similarly, we tried to get them to laugh within the first 30 seconds to ease the tension and potential awkwardness. It helped them open up.
If a person isn't within our target customer group and doesn't know anyone they can introduce us to, we'd politely thank them for their time and find someone else.
If we didn't know the person, we'd tell them that we were interviewing/testing on behalf of someone else. That way, they were more likely to give us honest feedback.
If we knew the person, we’d ask them to “kill our baby” to increase the chance of honest and direct feedback.
Best practice we learned for doing user testing:
The ask was usually: “We’re trying to identify errors and shortcomings with this experience/product.”
If we were showing an image mockup, we needed to explain that it wasn't interactive, OR make it black and white to signal that it wasn't tappable.
It was great to be two people doing these interviews together. That way, one focused on asking the questions while the other took notes. We would rotate on who did what.
Measure
Once we'd spoken with ten people, we'd review our answers. We'd update our assumptions and hypotheses based on the new learnings when relevant.
We'd often go for a walk to discuss this, as walks helped us think better and avoid acrimonious discussions about the product direction. When we didn't walk, we preferred to discuss in front of a whiteboard so we could use visuals to aid the discussion.
Build
Back in 2015, we often used Keynote for early demos/mockups. Today, you have endless better and faster tools at your disposal, especially after ChatGPT entered our toolbox. The most crucial thing in this phase is to use a tool that allows you to iterate rapidly.
The Zappos story was one of our favorite stories and a big inspiration at the time: Nick Swinmurn wanted to build an online shoe store. He assumed the timing and market were right, but wanted to test that hypothesis first. He took photos of shoes at a local store and told the store he would buy the pairs at full price if someone ordered them. He put the pictures online and got orders. He confirmed his hypothesis and learned a lot from the experience, saving him time and giving him unique knowledge. For example (and this would be obvious today), customers wanted to return shoes that didn't fit. This is called a Wizard of Oz experiment: humans operate behind the scenes of the service during user testing to verify the hypothesis, and the technical solution that handles it at scale is built afterward.
The purpose of each hackathon was to validate a business idea by addressing the main problem we were solving. Thus, our time should be spent speaking with and learning from potential customers, not perfecting the mockup/prototype. This meant we allowed ourselves shortcuts to fake the ideal user experience. We had read the book “Fake It Make It”7 in preparation for the summer and embraced much of its preaching.
We would keep things simple and old school with black-and-white mockups. We were never at a stage where our concepts showed the final design, and color made the people we interviewed get caught up in design feedback instead of the core problem we were addressing.
Result
At the end of each hackathon, it was really important to reflect on what we had achieved and learned.
Did we achieve the goal/desired outcome?
First, we'd review the results, initial plan, goal/desired outcome, assumptions, and hypothesis and capture our learnings. As the desired outcome/goal should be measurable, there should be no doubt whether it was achieved or not.
We'd also capture important decisions, challenges we ran into, questions we had, and accomplishments we were proud of.
What's the most important next step if we continue?
Independent of the above outcome, we'd write down all the potential action points should we proceed with the idea. Sometimes an idea needed to mature over time as new information became relevant. This happened with what Megacool ended up becoming.
Retrospective – What worked and what did not work
We evaluated everything we did through retrospectives. Here's how we'd do them: Early on, we held a quick retrospective at the end of each day. As our hackathon process matured, we switched to only one at the end of each hackathon.
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8c89405-0347-4aa0-a0dc-9c3464a294e3_960x960.png)
The journey was an emotional roller coaster, with hopeful ideas crashing down to earth. But we embraced the unknown and followed the mantra, "It's better to have a plan than to follow the plan."
Are you curious about the ten hackathons we ran? And whether we would have done anything differently to the above in retrospect? Read the next part here: Hackathon 1-3: From AI Travel Agent to Photo Management Adventures
Subscribe to get future posts in this series:
Fun fact: As I was writing this from my co-working space, a founder came over and asked if he could ask me some questions. He asked me to be as honest as possible as his feelings wouldn't be hurt. Afterward, I had to tell him about this post I was working on. Welcome to my Substack, Will 👋
A very special thanks to and for reading drafts of this and helping me think better!
Data from the article “Are Experienced Founders Better?”, which shares data on the correlation between founder experience and company success.
This was big in 2015
https://www.angelhack.com/