Stop the gap! How you can turn ideas into real, coded things without being technical - and it's not vibe coding

Dec 28, 2025

Mark Anthony Burgarelli (Burgz)

32-bit pixel-art illustration of a person crossing a gap between the words “Idea” and “Reality,” with “Code” placed in the gap.

If you’re someone who thinks about digital interactions, you’ve probably hit the following wall.


You can see the idea in your head.


You know how it should move, respond, feel, and maybe even sound.


But getting it out of your head and into something on the screen as you envision it usually means compromise or finding a developer to make it for you.


This isn’t a post about becoming a developer or minimizing dev work.


It’s about what happens when the barrier between your creative intent and the execution required to realize it drops low enough that you can finally build the thing you’re imagining.


The Problem


For years, non-technical people, like me, have been working around a hard limitation. We can design screens.


We can use website builders.


We can wire flows.


We can fake states or manipulate components into mimicry.


But the moment that emulation falls short of what we see in our mind, we hit a wall.


That wall is code. And unless you happen to be a designer who is fluent in front-end development, you know you’re going to hit that limit.


So we compromise. We adjust our ideas to fit what the tool can simulate.


We settle for “close enough” because truly bringing an interaction to life would require engineering effort we don’t have access to, or we’re not ready to invest yet.


Figma is excellent at describing interfaces, but it is not built to truly express behavior or the depth of the interactions we want to see. For an innovator, this limits how quickly you can prototype and test assumptions.


The Insight


I am not talking about vibe coding here. With vibe coding, we can preview creations, but we still have to consider hosting, file transfers, data security, and more.


On mobile, there is often latency before you can even load a sandbox environment to preview your creations, not to mention barriers for anyone who doesn’t know how to test native apps through tools like TestFlight.


And if you are trying to rapidly test product ideas, these technical implementations can slow things down or create complete stops.


I am talking about turning very specific creative visions from our imagination into fully functional, ready-to-ship user experience interactions.


What do I mean?


I mean that I can take an idea like “I don’t want people to read my case studies, I want them to experience my case studies” and I, as a non-technical person, can create that experience exactly how I envision it!


Maybe creating a case study doesn’t speak to you, so here’s another example: “I want to gamify the user experience during an onboarding process.”


All you need is your imagination, the ability to articulate your thoughts to something like ChatGPT, a willingness to learn Framer (or another tool that supports code editing, though Framer enables the most interactive experience out of the box), and a coding agent like Cursor.


When you can describe an interaction precisely and have it materialize, not as a mock-up, but as something runnable, your thinking changes.


You stop designing around limitations and start designing toward intent.


The idea survives the journey.


That shift is subtle but profound. It means the fidelity of your thinking is no longer capped by the tool.


What Changed for Me


What changed for me was combining tools that each remove a different piece of friction.


With Figma, I can design the static visuals, but if I want to use Dev Mode and export the file, I have to figure out what to do with the code, and I would still have to create the motion effects outside of Figma.


There are still significant layers of friction between my Figma design and what I ultimately envision.


How do I know?


Because I have tried creating immersive experiences with Figma, and no matter how much I tried, it still felt like a PowerPoint presentation with motion being improvised by changing frames or variants.


Framer gave me a real, interactive surface.


Not a simulation.


Not a handoff artifact.


Something that behaves like software, because behind what I see is code, and I can manipulate that code. And if I can manipulate it, then all I need is something that can produce the code, like Cursor.


By brainstorming with ChatGPT and asking it to provide prompts I can hand off to Cursor, I created a workflow that is very effective at transposing my thoughts into interactions.


The Workflow

From idea to interaction


In the most basic sense, I would articulate my idea for an interaction to ChatGPT and have it create prompts that Cursor could understand. I would paste those prompts into Cursor, and Cursor would give me the code to paste into Framer.


Once it was in Framer, I could preview it immediately, tweak it (I know some CSS), and test.


If I found minor errors, I would iterate with Cursor; if the component was way off, I would go back to ChatGPT to refine the prompt for Cursor.


This cycle enabled me to do some really amazing things…




By following the process described above, I turned one of my UX research case studies into a video game.


The idea was that you, the user, would play as me moving through a world you control and experiencing the case study from the inside.


And not to overlook the details: I didn’t want you to simply choose a case study from a list of case studies. I wanted you to insert the case study game cartridge into a console that you picked up, dragged into place, and flipped the power on.


All of it coded from scratch, by me (again, a non-technical person).


I imagined music in the background, multiple rooms, collectible items, cutscenes, and small interactive moments. I built the entire experience from scratch and fully coded it, despite not being a technical developer.
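The core of an experience like this is simpler than it sounds. As a rough sketch (every name, step size, and bound here is my own assumption, not the actual case-study code), the heart of a keyboard-controlled player can reduce to one small pure function:

```typescript
// Hypothetical sketch of keyboard movement logic for a case-study game.
// Names, step size, and world bounds are my own assumptions.
type Pos = { x: number; y: number }

export function move(
    pos: Pos,
    key: string,                      // e.g. a KeyboardEvent's `key` value
    step = 8,                         // pixels moved per key press
    bounds = { w: 640, h: 480 }       // world size; player is clamped inside
): Pos {
    const next = { ...pos }
    if (key === "ArrowLeft") next.x -= step
    if (key === "ArrowRight") next.x += step
    if (key === "ArrowUp") next.y -= step
    if (key === "ArrowDown") next.y += step
    // Clamp so the player can never leave the visible world
    next.x = Math.max(0, Math.min(bounds.w, next.x))
    next.y = Math.max(0, Math.min(bounds.h, next.y))
    return next
}
```

In a React/Framer context this would be wired to a keydown listener that stores the position in state. Keeping logic pure like this also makes it easy to hand a single function back and forth between ChatGPT and Cursor for iteration.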


In another instance, I wanted to rethink what a personal contact card could be.




Instead of a standard business card, I asked: What would something more on-brand look like for me?


I landed on the idea of a Pokémon-style card that could be dragged, spun, and rendered with a holographic effect.


I followed the same creative-to-technical process to build it. The component lets me upload front and back images so I can retain granular control over the visual design while still treating the interaction itself as a first-class object.
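To make the holographic tilt concrete: one common way to build this kind of effect is to map the pointer's position inside the card to small 3D rotation angles. The helper below is a minimal sketch under that assumption (the names, ranges, and mapping are mine for illustration, not the actual component's code):

```typescript
// Hypothetical helper: map a pointer position inside the card to tilt
// angles for a holographic-style effect. Names and ranges are assumptions.
type Tilt = { rotateX: number; rotateY: number }

export function pointerToTilt(
    px: number, py: number,           // pointer position within the card
    width: number, height: number,    // card dimensions in pixels
    maxDeg = 15                       // maximum tilt in degrees
): Tilt {
    // Normalize the pointer to [-1, 1] around the card's center
    const nx = (px / width) * 2 - 1
    const ny = (py / height) * 2 - 1
    return {
        rotateX: -ny * maxDeg,        // vertical offset tips the card up/down
        rotateY: nx * maxDeg,         // horizontal offset tips it left/right
    }
}
```

The returned angles would feed a CSS `transform: rotateX(...) rotateY(...)` on the card element, while something like Framer Motion's drag gestures could handle the dragging and spinning on top.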


Working alongside code, not around it


When I know exactly what I want to happen, how something should animate, respond, or transition, I can describe it in plain language and work alongside the code instead of fighting it.


ChatGPT fills in the gaps when I need help thinking through logic or syntax, and Cursor helps me troubleshoot errors or make improvements.


The result is not “design plus code.” It’s direct creation.


I’m no longer limited to what a component library allows. I’m not approximating interactions with frames and arrows.


If I can articulate the behavior, I can usually get it on screen. That’s new!


What This Replaces


This replaces makeshift prototypes, workarounds, and the quiet resignation that some ideas are “engineering problems” by default.


It replaces the pattern where designers stop at visual intent and hope someone else fills in the behavior later.


It also replaces the need to over-spec interactions in documents that still fail to capture how something should feel.


Even if a technical team member rewrites the code from scratch, the true nature of the desired outcome can be fully communicated, because that team member can actually experience the intended outcome; the interaction itself becomes the artifact.


What It Doesn't Replace


It doesn’t replace production engineering. It doesn’t scale codebases. And it won’t save vague thinking. You still need to know what you’re trying to make.


But when you do, the distance between imagination and reality gets very short.


Closing


For a long time, creative technologists talked about “bringing ideas to life.” In practice, most of us learned to water ideas down instead.


What tools like Framer and Cursor change is not speed for speed’s sake.


They change who gets to finish the thought.


When the coding barrier dissolves just enough, designers stop describing ideas and start making them real.


And that changes what kinds of ideas are even worth having.

Experience my portfolio here 👉 https://www.burgarel.li/