
B2B Tool
Workers' Compensation Calculator
Complex calculations made simple with progressive disclosure and smart validation.
I work end-to-end across business strategy, product thinking and UX engineering, in the spaces where customer needs, business goals and operational realities do not naturally align.
I spend my time with family, playing football, gaming and writing — spotting patterns across business, product and human behavior.
Competitive advantage is not in pricing or features, but in helping humans understand the true value of a product and the benefits of a service.
I thrive in small, autonomous teams where creative problem solving is not just encouraged — it is expected, supported and turned into action.


Personal Project · Product Architecture · Cross-domain orchestration
Product systems architecture for a music collaboration platform — four technology layers orchestrated through UX-driven constraints.
As a trusted playmaker across cross-functional teams, I aim to find the sweet spot where solutions are viable, desirable and technically feasible.
[Metric highlights: Muselink visitor → waitlist conversion (above typical SaaS benchmark) · Paid acquisition spend · Product design experience · Customers reached at Vertikal Helse · Claims upload success (Vertikal Helse)]
AEO — Answer Engine Optimization — is writing pages so AI engines like Google's AI Overviews, ChatGPT, Perplexity, Gemini, and Claude can read them, extract the answer, and cite the source. It is not a replacement for SEO; it is the layer above it.
A growing share of product discovery now happens inside chat interfaces. If an AI engine summarizes my industry without naming me, I don't exist in that conversation. AEO is how I make sure I do.
Open every page with a Quick Answer block — 40 to 70 words of plain prose that directly answers the page's main question. AI Overviews and LLMs grab from the top. If the answer is buried halfway down the page, they pull from someone who didn't bury it.
Phrase H2 and H3 headings as the questions a real person would ask — not 'Claims Platform Architecture' but 'How do you cut manual case routing by 96%?' Question-format headings match how AI engines structure retrieval.
Above the fold, in plain semantic HTML — a `<p>` inside a clearly delimited block — never inside an accordion, modal, or JavaScript-rendered widget. Crawlers and extractors read static HTML first; anything gated by interaction is often missed.
On this section the Quick Answer sits directly under the H2. On a product page I'd put it under the hero. The rule is the same: the first visible paragraph should be the answer a reader — or an AI engine — actually wants.
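The placement rule above can be sketched as a small helper that emits the Quick Answer as plain static HTML — no accordion, no modal, no JS-gated rendering. The tag names, class name and sample copy below are illustrative, not this site's actual markup.

```typescript
// Minimal sketch: render a Quick Answer block as static HTML so crawlers
// and extractors see it in the initial payload. Names are placeholders.
type QuickAnswer = { question: string; answer: string };

function quickAnswerHtml({ question, answer }: QuickAnswer): string {
  // A plain <section> with an <h2> and a <p> — nothing gated by interaction.
  return [
    `<section class="quick-answer">`,
    `  <h2>${question}</h2>`,
    `  <p>${answer}</p>`,
    `</section>`,
  ].join("\n");
}

const html = quickAnswerHtml({
  question: "What is AEO?",
  answer:
    "AEO is writing pages so AI engines can read them, extract the answer, and cite the source.",
});
console.log(html);
```

In a server-rendered framework this string (or its JSX equivalent) ships in the initial HTML, which is exactly what the extraction rule requires.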
AI engines retrieve by intent: a query goes in, a passage comes out. Question-format headings turn each section into a self-contained answer block, which is exactly the unit AI Overviews and ChatGPT extract.
It is also better writing. A statement heading like 'Claims Architecture' promises nothing. 'How did we cut manual routing by 96%?' promises an answer and forces the body copy to deliver one.
On this site: sitewide Person and WebSite JSON-LD in the root layout, a ProfilePage + BreadcrumbList graph on the landing page, and CreativeWork-style schema on each case study. That tells the engines who I am and which entity each page is about — instead of treating me as another anonymous source.
Structured data will not rescue a page with weak content, but it makes good content easier to cite. The engines parse it before they parse the prose.
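As a sketch of what that sitewide Person + WebSite JSON-LD could look like: every name and URL below is a placeholder, not the site's real values, and the shape follows standard schema.org `@graph` conventions.

```typescript
// Hedged sketch of sitewide Person + WebSite JSON-LD for a root layout.
// "Jane Doe" and example.com are placeholders, not the real site's values.
const personId = "https://example.com/#person";

const jsonLd = {
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Person",
      "@id": personId,
      name: "Jane Doe", // placeholder
      jobTitle: "Product Designer",
      url: "https://example.com",
    },
    {
      "@type": "WebSite",
      "@id": "https://example.com/#website",
      url: "https://example.com",
      name: "Jane Doe Portfolio",
      publisher: { "@id": personId }, // links the site back to the person entity
    },
  ],
};

// In a Next.js root layout this would be inlined as:
// <script type="application/ld+json">{JSON.stringify(jsonLd)}</script>
console.log(JSON.stringify(jsonLd, null, 2));
```

Linking nodes via `@id` is what lets engines resolve every page back to one entity instead of treating each page as a stranger.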
llms.txt is a plain-text manifest at the root of the site that tells LLM crawlers what the site is, what is on it, and where the canonical content lives. Mine names every public route, every featured project, and this AEO playbook.
Paired with a permissive robots.txt — I allow every AI crawler in rather than blocking them — it gives the engines a clean, machine-readable picture of the site. Result: when someone asks ChatGPT or Perplexity about end-to-end product design in Norway, my profile is in the answer set.
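A minimal llms.txt might look like the sketch below, following the common H1 / blockquote / linked-sections convention from the llms.txt proposal. Every name and URL here is a placeholder, not this site's actual manifest.

```markdown
# Jane Doe (placeholder)

> Portfolio of an end-to-end product designer: case studies, an AEO playbook, and contact details.

## Case studies

- [Muselink](https://example.com/work/muselink): solo-built SaaS platform for music collaboration
- [Claims platform](https://example.com/work/claims): B2B insurance tooling case study

## Guides

- [AEO playbook](https://example.com/aeo): how this site is structured for AI engines
```

The file lives at the site root (`/llms.txt`) next to `robots.txt`, so crawlers that look for it find it without guessing.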
Start from Google Search Console. Export queries, impressions, CTR, and average position. Look for queries where you have high impressions and zero clicks — those are AI-Overview answers being served from someone else's page that could be served from yours.
Then write the answer better than they did: Quick Answer up top, question-format headings, concrete numbers, internal links to related work. I treat this as a weekly cadence, not a one-time content sprint.
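The triage step above can be sketched in a few lines: filter a Search Console export for high-impression, zero-click queries and rank them. The threshold and sample rows are illustrative, not real Search Console data.

```typescript
// Sketch of the weekly triage: flag queries with high impressions and zero
// clicks from a GSC export. The 100-impression threshold is an assumption.
type QueryRow = { query: string; impressions: number; clicks: number; position: number };

function answerGaps(rows: QueryRow[], minImpressions = 100): QueryRow[] {
  return rows
    .filter((r) => r.impressions >= minImpressions && r.clicks === 0)
    .sort((a, b) => b.impressions - a.impressions); // biggest gaps first
}

// Illustrative sample rows, not a real export.
const sample: QueryRow[] = [
  { query: "what is aeo", impressions: 850, clicks: 0, position: 4.2 },
  { query: "b2b onboarding norway", impressions: 40, clicks: 2, position: 8.1 },
  { query: "llms.txt example", impressions: 300, clicks: 0, position: 6.5 },
];

console.log(answerGaps(sample).map((r) => r.query));
// → ["what is aeo", "llms.txt example"]
```

Each query that survives the filter becomes a candidate for a new Quick Answer block or a question-format heading on an existing page.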
Client-side-only rendering. If a crawler sees an empty `<div>` and a JS bundle, the page is effectively invisible. This portfolio is built on Next.js with server-rendered pages, so every paragraph, heading, and JSON-LD block ships in the initial HTML.
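One quick way to sanity-check that pitfall: strip the script tags from the raw HTML a crawler receives and confirm the key copy is still there. A hedged sketch, with both sample documents invented for illustration:

```typescript
// Smoke-test sketch: given raw HTML (as a crawler sees it, before any JS
// runs), check that required copy is visible outside of <script> bodies.
function visibleWithoutJs(rawHtml: string, mustContain: string[]): boolean {
  // Remove script bodies so JS-embedded strings don't count as "visible".
  const noScripts = rawHtml.replace(/<script[\s\S]*?<\/script>/gi, "");
  return mustContain.every((s) => noScripts.includes(s));
}

// Server-rendered page: the content ships in the initial HTML.
const serverRendered = `<html><body><h1>What is AEO?</h1><p>AEO is...</p></body></html>`;
// Client-only page: an empty div plus a JS bundle — invisible to extractors.
const clientOnly = `<html><body><div id="root"></div><script>render("What is AEO?")</script></body></html>`;

console.log(visibleWithoutJs(serverRendered, ["What is AEO?"])); // true
console.log(visibleWithoutJs(clientOnly, ["What is AEO?"]));     // false
```

Running a check like this against `curl` output (no JS execution) approximates what a non-rendering crawler actually sees.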
The other big one is bundle and image bloat. Heavy JavaScript pushes LCP past three seconds, kills Core Web Vitals, and drops you out of mobile rankings. Optimize images, code-split aggressively, and ship the smallest possible critical path.
Muselink is the SaaS platform I solo-built in 2025 for music collaboration. I owned product vision, UX, engineering direction, and go-to-market as the sole founder — the landing page, the product, and the launch all shipped from one person.
With zero paid acquisition and no prior brand recognition, the waitlist landing page converted at 15.6% visitor → signup — roughly 3× typical SaaS benchmarks. The traffic that found it came through AI citations and organic discovery, not ads.
Set up Google Search Console before launch. Even with zero traffic it starts collecting query data, and within a few weeks that data becomes your content strategy.
Write to specific questions, not broad topics. 'How do you onboard B2B SME customers in Norway?' beats 'The Ultimate Guide to B2B Onboarding' every time. Set up structured data and llms.txt from day one. And do not fight AI Overviews — become the source they pull from.
Same playbook applies to any product page or portfolio: write the answer first, structure the page so machines can read it, and let the search engines and the LLMs do the distribution.
Ready to turn your challenges into valuable outcomes?