Articles on Smashing Magazine — For Web Designers And Developers https://www.smashingmagazine.com/ Recent content in Articles on Smashing Magazine — For Web Designers And Developers Mon, 12 May 2025 08:59:10 GMT https://validator.w3.org/feed/docs/rss2.html manual en Articles on Smashing Magazine — For Web Designers And Developers https://www.smashingmagazine.com/images/favicon/app-icon-512x512.png https://www.smashingmagazine.com/ All rights reserved 2025, Smashing Media AG Development Design UX Mobile Front-end <![CDATA[Integrating Design And Code With Native Design Tokens In Penpot]]> https://smashingmagazine.com/2025/05/integrating-design-code-native-design-tokens-penpot/ https://smashingmagazine.com/2025/05/integrating-design-code-native-design-tokens-penpot/ Thu, 08 May 2025 10:00:00 GMT This article is a sponsored by Penpot

It’s already the fifth time I’m writing to you about Penpot — and what a journey it continues to be! During this time, Penpot’s presence in the design tools scene has grown strong. In a market that recently felt more turbulent than ever, I’ve always appreciated Penpot for their clear mission and values. They’ve built a design tool that not only delivers great features but is also open-source and developed in active dialogue with the community. Rather than relying on closed formats and gated solutions, Penpot embraces open web standards and commonly used technologies — ensuring it works seamlessly across platforms and integrates naturally with code.

Their latest release is another great example of that approach. It’s also one of the most impactful. Let me introduce you to design tokens in Penpot.

Design tokens are an essential building block of modern user interface design and engineering. But so far, designers and engineers have been stuck with third-party plugins and cumbersome APIs to collaborate effectively on design tokens and keep them in sync. It’s high time we had tools and processes that handle this better, and Penpot just made it happen.

About Design Tokens

Design tokens can be understood as a framework to document and organize your design decisions. They act as a single source of truth for both designers and engineers and include all the design variables, such as colors, typography, spacing, fills, borders, and shadows.

The concept of design tokens has grown in popularity alongside the rise of design systems and the increasing demand for broader standards and guidelines in user interface design. Design tokens emerged as a solution for managing increasingly complex systems while keeping them structured, scalable, and extensible.

The goal of using design tokens is not only to make design decisions more intentional and maintainable but also to make it easier to keep them in sync with code. In the case of larger systems, it is often a one-to-many relationship. Design tokens allow you to keep the values agnostic of their application and scale them across various products and environments.

Design tokens create a semantic layer between the values, the tools used to define them, and the software that implements them.

On top of maintainability benefits, a common reason to use design tokens is theming. Keeping your design decisions decoupled means that you can easily swap the values across multiple sets. This allows you to change the appearance of the entire interface with applications ranging from simple light and dark mode implementations to more advanced use cases, such as handling multiple brands or creating fully customizable and adjustable UIs.

Implementation Challenges

Until recently, there was no standardized format for maintaining design tokens — it remained a largely theoretical concept, implemented differently across teams and tools. Every design tool or frontend framework has its own approach. Syncing code with design tools was also a major pain point, often requiring third-party plugins and unreliable synchronization solutions.

However, in recent years, W3C, the international organization responsible for developing open standards and protocols for the web, brought to life a dedicated Design Tokens Community Group with the goal of creating an open standard for products and design tools to handle design tokens. Once this standard gets more widely adopted, it will give us hope for a more predictable and standardized approach to design tokens across the industry.

To make that happen, work has to be done on two ends, both design and development. Penpot is the very first design tool to implement design tokens in adherence to the standard that the W3C is working on. It also solves the problem of third-party dependencies by offering a native API with all the values served in the official, standardized format.
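To give a feel for that format, here is a minimal sketch of a color token and an alias that references it, written in the JSON-based syntax of the W3C draft. The token names and values are invented for illustration, and details of the format may still shift as the draft evolves:

{
  "color": {
    "brand": {
      "$type": "color",
      "$value": "#2a6df4"
    },
    "button": {
      "background": {
        "$type": "color",
        "$value": "{color.brand}"
      }
    }
  }
}

The alias is what makes the one-to-many relationship mentioned earlier work: change color.brand once, and every token that references it updates everywhere it is used.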

Design Tokens In Practice

To better understand design tokens and how to use them in practice, let’s take a look at an example together. Let’s consider the following user interface of a login screen:

Imagine we want this design to work in light and dark mode, but also to be themable with several accent colors. It could be that we’re using the same authentication system for websites of several associated brands or several products. We could also want to allow the user to customize the interface to their needs.

If we want to build a design that works for three accent colors, each with light and dark themes, it gives us six variants in total:

Designing all of them by hand would not only be tedious but also difficult to maintain. Every change you make would have to be repeated in six places. In the case of six variants, that’s not ideal, but it’s still doable. But what if you also want to support multiple layout options or more brands? It could easily scale into hundreds of combinations, at which point designing them manually would easily get out of hand.

This is where design tokens come to the rescue. They allow you to effectively maintain all the variants and test all the possible combinations, even hundreds of them, while still building a single design without repetitive work.

You can start by creating a design in one of the variants before starting to think about the tokens. Having a design already in place might make it easier to plan your tokens’ hierarchy and structure accordingly.

In this case, I created three components: two types of buttons and an input field. I combined them with text layers into several Flex layouts to build out this screen. If you’d like to first learn more about building components and layouts in Penpot, I would recommend you revisit some of my previous articles:

Now that we have the design ready, we can start creating tokens. You can create your first token by heading to the tokens tab of the left sidebar and clicking the plus button in one of the token categories. Let’s start by creating a color.

In Penpot, you can reference other tokens in token values by wrapping them in curly brackets. So, if you select “slate.1” as your text color, it will reference the “slate.1” value from any other set that is currently active. With the light set active, the text will be black. And with the dark set active, the text will be white.
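As a rough sketch in the standardized JSON format (how sets are organized is tool-specific, so treat the structure and values below as illustrative), the light set could define:

{ "slate": { "1": { "$type": "color", "$value": "#000000" } } }

The dark set would define the same token with a different value:

{ "slate": { "1": { "$type": "color", "$value": "#ffffff" } } }

And a semantic text-color token, shared by both sets, simply points at it:

{ "text": { "color": { "$type": "color", "$value": "{slate.1}" } } }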

This allows us to switch between brands and modes and test all the possible combinations.

What’s Next?

I hope you enjoyed following this example. If you’d like to check out the file presented above before creating your own, you can duplicate it here.

Colors are only one of many types of tokens available in Penpot. You can also use design tokens to maintain values such as spacing, sizing, layout, and so on. The Penpot team is working on gradually expanding the choice of tokens you can use. All are in accordance with the upcoming design tokens standard.

The benefits of the native approach to design tokens implemented by Penpot go beyond ease of use and standardization. It also makes the tokens more powerful. For example, they already support math operations using the calc() function you might recognize from CSS. It means you can use math to add, multiply, subtract, etc., token values.
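For instance, a spacing scale could be derived from a single base value. The token names here are hypothetical, and the exact expression syntax may differ from this sketch:

{
  "spacing": {
    "base": { "$type": "dimension", "$value": "8px" },
    "large": { "$type": "dimension", "$value": "calc({spacing.base} * 2)" }
  }
}

Change spacing.base once, and every derived value follows.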

Once your design tokens in Penpot are ready, the next step is to bring them over to your code. Already today, you can export the tokens in JSON format, and soon, an API will be available that connects and imports the tokens directly into your codebase. You can follow Penpot on LinkedIn, Bluesky, and other social media to be the first to hear about the next updates. The team behind Penpot is also planning to make its design tokens implementation even more powerful in the near future with support for gradients, composite tokens (tokens that store multiple values), and more.

To learn more about design tokens and how to use them, check out the following links:

Conclusion

By adding support for native design tokens, Penpot is making real progress on connecting design and code in meaningful ways. Having all your design variables well documented and organized is one thing. Doing that in a scalable and maintainable way that is based on open standards and is easy to connect with code — that’s yet another level.

The practical benefits are huge: better maintainability, less friction, and easier communication across the whole team. If you’re looking to bring more structure to your design system while keeping designers and engineers in sync, Penpot’s design tokens implementation is definitely worth exploring.

Tried it already? Share your thoughts! The Penpot team is active on social media, or just share your feedback in the comments section below.

]]>
hello@smashingmagazine.com (Mikołaj Dobrucki)
<![CDATA[Smashing Animations Part 1: How Classic Cartoons Inspire Modern CSS]]> https://smashingmagazine.com/2025/05/smashing-animations-part-1-classic-cartoons-inspire-css/ https://smashingmagazine.com/2025/05/smashing-animations-part-1-classic-cartoons-inspire-css/ Wed, 07 May 2025 08:00:00 GMT Browser makers didn’t take long to add movement capabilities to CSS. The simple :hover pseudo-class came first and, a bit later, transitions between two states. Then came the ability to change states across a set of @keyframes and, most recently, scroll-driven animations that link keyframes to the scroll position.

Even with these added capabilities, CSS animations have remained relatively rudimentary. They remind me of the Hanna-Barbera animated series I grew up watching on TV.

These animated shorts lacked the budgets given to live-action or animated movies. Their budgets were also far lower than those available to William Hanna and Joseph Barbera when they made Tom and Jerry shorts for MGM Cartoons. This meant the animators needed to develop techniques to work around their cost restrictions and the technical limitations of the time.

They used fewer frames per second and far fewer cells. Instead of using a different image for each frame, they repeated each one several times. They reused cells as frequently as possible by zooming and overlaying additional elements to construct a new scene. They kept bodies mainly static and overlayed eyes, mouths, and legs to create the illusion of talking and walking. Instead of reducing the quality of these cartoons, these constraints created a charm often lacking in more recent, bigger-budget, and technically advanced productions.

The simple and efficient techniques developed by Hanna-Barbera’s animators can be implemented using CSS. Modern layout tools allow web developers to layer elements. Scalable Vector Graphics (SVG) can contain several frames, and developers needn’t resort to JavaScript; they can use CSS to change an element’s opacity, position, and visibility. But what are some reasons for doing this?

Animations bring static experiences to life. They can improve usability by guiding people’s actions and delighting or surprising them when interacting with a design. When carefully considered, animations can reinforce branding and help tell stories about a brand.

Introducing Mike Worth

I’ve recently been working on a new website for Emmy-award-winning game composer Mike Worth. He hired me to create a bold, retro-style design that showcases his work. I used CSS animations throughout to delight and surprise his audience as they move through his website.

Mike loves ’80s and ’90s animation — especially Disney’s DuckTales. Unsurprisingly, my taste in cartoons stretches back a little further to the 1960s Hanna-Barbera shows like Dastardly and Muttley in Their Flying Machines, Scooby-Doo, The Perils of Penelope Pitstop, Wacky Races, and, of course, Yogi Bear.

So, to explain how this era of animation relates to CSS, I’ve chosen an episode of The Yogi Bear Show, “Home Sweet Jellystone,” first broadcast in 1961. In this story, Ranger Smith inherits a mansion and (spoiler alert) leaves Jellystone.

Dissecting Movement

In this episode, Hanna-Barbera’s techniques become apparent as soon as a postman arrives with a telegram for Ranger Smith. The camera pans sideways across a landscape painting by background artist Robert Gentle to create the illusion that the postman is moving.

The background loops when a scene lasts longer than a single pan of Robert Gentle’s landscape painting, with bushes and trees appearing repeatedly.

This can be recreated using a single element and an animation that changes the position of its background image:

@keyframes background-scroll {
  0% { background-position: 2750px 0; }
  100% { background-position: 0 0; }
}

div {
  overflow: hidden;
  width: 100vw;
  height: 540px;
  background-image: url("…");
  background-size: 2750px 540px;
  background-repeat: repeat-x;
  animation: background-scroll 5s linear infinite;
}

The economy of movement was essential for producing these animated shorts cheaply and efficiently. The postman’s motorcycle bounces, and only his head position and facial expressions change, which adds a subtle hint of realism.

Likewise, only Ranger Smith’s facial expression and leg positions change throughout his walk cycle as he dashes through his mansion. The rest of his body stays static.

In a discarded scene from my design for his website, the orangutan adventurer mascot I created for Mike Worth can be seen driving across the landscape.

I drew directly from Hanna-Barbera’s bouncing and scrolling technique for this scene by using two keyframe animations: background-scroll and bumpy-ride. The infinitely scrolling background works just like before:

@keyframes background-scroll {
  0% { background-position: 960px 0; }
  100% { background-position: 0 0; }
}

I created the appearance of his bumpy ride by animating changes to the keyframes’ translate values:

@keyframes bumpy-ride {
  0% { translate: 0 0; }
  10% { translate: 0 -5px; }
  20% { translate: 0 3px; }
  30% { translate: 0 -3px; }
  40% { translate: 0 5px; }
  50% { translate: 0 -10px; }
  60% { translate: 0 4px; }
  70% { translate: 0 -2px; }
  80% { translate: 0 7px; }
  90% { translate: 0 -4px; }
  100% { translate: 0 0; }
}

figure {
  /* ... */
  animation: background-scroll 5s linear infinite;
}

img {
  /* ... */
  animation: bumpy-ride 1.5s infinite ease-in-out;
}

Watch the episode and you’ll see these trees appear over and over again throughout “Home Sweet Jellystone.” Behind Yogi and Boo-Boo on the track, in the bushes, and scaled up in this close-up of Boo-Boo:

The animators also frequently layered foreground elements onto these background paintings to create a variety of new scenes:

In my deleted scene from Mike Worth’s website, I introduced these rocks into the foreground to add depth to the animation:

If I were using bitmap images, this would require just one additional image:

<figure>
  <img id="bumpy-ride" src="..." alt="" />
  <img id="apes-rock" src="..." alt="" />
</figure>

figure {
  position: relative; 

  #bumpy-ride { ... }

  #apes-rock {
    position: absolute;
    width: 960px;
    left: calc(50% - 480px);
    bottom: 0;
  }
}

Likewise, when the ranger reads his telegram, only his eyes and mouth move:

If you’ve wondered why both Ranger Smith and Yogi Bear wear collars and neckties, it’s so the line between their animated heads and faces and static bodies is obscured:

SVG delivers incredible performance and also offers fantastic flexibility when animating elements. The ability to embed one SVG inside another and to manipulate groups and other elements using CSS makes it ideal for animations.

I replicated how Hanna-Barbera made Ranger Smith and other characters’ mouths move by first including a group that contains the ranger’s body and head, which remain static throughout. Then, I added six more groups, each containing one frame of his mouth moving:

<svg>
  <!-- static elements -->
  <g>...</g>

  <!-- animation frames -->
  <g class="frame-1">...</g>
  <g class="frame-2">...</g>
  <g class="frame-3">...</g>
  <g class="frame-4">...</g>
  <g class="frame-5">...</g>
  <g class="frame-6">...</g>
</svg>

I used CSS custom properties to define the speed at which characters’ mouths move and how many frames are in the animation:

:root {
  --animation-duration: 1s;
  --frame-count: 6;
}

Then, I applied a keyframe animation to show and hide each frame:

@keyframes ranger-talking {
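  /* Each frame stays visible for the first 16.67% of the loop, i.e., 1/6 of it,
     matching --frame-count. The percentage is written out because custom
     properties can’t be used inside keyframe selectors. */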
  0% { visibility: visible; }
  16.67% { visibility: hidden; }
  100% { visibility: hidden; }
}

[class*="frame"] {
  visibility: hidden;
  animation: ranger-talking var(--animation-duration) infinite;
}

Before finally setting a delay, which makes each frame visible at the correct time:

.frame-1 {
  animation-delay: calc(var(--animation-duration) * 0 / var(--frame-count));
}

/* ... */

.frame-6 {
  animation-delay: calc(var(--animation-duration) * 5 / var(--frame-count));
}

In my design for Mike Worth’s website, animation isn’t just for decoration; it tells a compelling story about him and his work. Every movement reflects his brand identity and makes his website an extension of his creative world.

Think beyond movement the next time you reach for a CSS animation. Consider emotions, identity, and mood, too. After all, a well-considered animation can do more than catch someone’s eye. It can capture their imagination.

Mike Worth’s website will launch in June 2025, but you can see examples from this article on CodePen now.

]]>
hello@smashingmagazine.com (Andy Clarke)
<![CDATA[Masonry In CSS: Should Grid Evolve Or Stand Aside For A New Module?]]> https://smashingmagazine.com/2025/05/masonry-css-should-grid-evolve-stand-aside-new-module/ https://smashingmagazine.com/2025/05/masonry-css-should-grid-evolve-stand-aside-new-module/ Tue, 06 May 2025 13:00:00 GMT You’ve got a Pinterest-style layout to build, but you’re tired of JavaScript. Could CSS finally have the answer? Well, for a beginner, a look at the pins on your Pinterest page might convince you that the CSS grid layout is enough, but only once you begin to build do you realise that display: grid, even with additional tweaks, is less than enough. In fact, Pinterest built its layout with JavaScript — but how cool would it be if a single CSS display property gave us such a layout, without any additional JavaScript?

Maybe there is. The CSS grid layout has an experimental masonry value for grid-template-rows. The masonry layout is an irregular, flowing grid. Irregular in the sense that, instead of following a rigid grid pattern with spaces left after shorter pieces, the items in the next row of a masonry layout rise to fill the spaces on the masonry axis. It’s the dream for portfolios, image galleries, and social feeds — designs that thrive on organic flow. But here’s the catch: while this experimental feature exists (think Firefox Nightly with a flag enabled), it’s not the seamless solution you might expect, thanks to limited browser support and some rough edges in its current form.

Maybe there isn’t. CSS lacks native masonry support, forcing developers to use hacks or JavaScript libraries like Masonry.js. Developers with a good design background have also criticised the CSS Grid form of masonry: Rachel Andrew highlights that masonry’s organic flow contrasts with Grid’s strict two-dimensional structure, potentially confusing developers expecting Grid-like behaviour, and Ahmad Shadeed argues that it makes the grid layout more complex than it should be, potentially overwhelming developers who value Grid’s clarity for structured layouts. Geoff also echoes Rachel Andrew’s concern that “teaching and learning grid to get to understand masonry behaviour unnecessarily lumps two different formatting contexts into one,” complicating education for designers and developers who rely on clear mental models.

Perhaps there might be hope. The Apple WebKit team has just put forward a new contender, which claims not only to merge the pros of grid and masonry into a unified shorthand system but also to fold in flexbox concepts. Imagine the best of three CSS layout systems in one.

Given these complaints and criticisms — and a new guy in the game — the question is:

Should CSS Grid expand to handle Masonry, or should a new, dedicated module take over, or should item-flow just take the reins?
The State Of Masonry In CSS Today

Several developers have attempted to create workarounds to achieve a masonry layout in their web applications using CSS Grid with manual row-span hacks, CSS Columns, and JavaScript libraries. Without native masonry, developers often turn to Grid hacks like this: a grid-auto-rows trick paired with JavaScript to fake the flow. It works — sort of — but the cracks show fast.

For instance, the example below relies on JavaScript to measure each item’s height after rendering, calculate the number of 10px rows (plus gaps) the item should span while setting grid-row-end dynamically, and use event listeners to adjust the layout upon page load and window resize.

<!-- HTML -->
<div class="masonry-grid">
  <div class="masonry-item"><img src="image1.jpg" alt="Image 1"></div>
  <div class="masonry-item"><p>Short text content here.</p></div>
  <div class="masonry-item"><img src="image2.jpg" alt="Image 2"></div>
  <div class="masonry-item"><p>Longer text content that spans multiple lines to show height variation.</p></div>
</div>
/* CSS */
.masonry-grid {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(200px, 1fr)); /* Responsive columns */
  grid-auto-rows: 10px; /* Small row height for precise spanning */
  grid-auto-flow: dense; /* Backfill gaps; visual order may diverge from DOM order */
  gap: 10px; /* Spacing between items */
}

.masonry-item {
  /* Ensure content doesn’t overflow */
  overflow: hidden;
}

.masonry-item img {
  width: 100%;
  height: auto;
  display: block;
}

.masonry-item p {
  margin: 0;
  padding: 10px;
}
// JavaScript

function applyMasonry() {
  const grid = document.querySelector('.masonry-grid');
  const items = grid.querySelectorAll('.masonry-item');

  items.forEach(item => {
    // Reset any previous spans
    item.style.gridRowEnd = 'auto';

    // Calculate the number of rows to span based on item height
    const rowHeight = 10; // must match grid-auto-rows in the CSS
    const gap = 10;       // must match the grid gap in the CSS
    const itemHeight = item.getBoundingClientRect().height;
    const rowSpan = Math.ceil((itemHeight + gap) / (rowHeight + gap));

    // Apply the span
    item.style.gridRowEnd = `span ${rowSpan}`;
  });
}

// Run on load and resize
window.addEventListener('load', applyMasonry);
window.addEventListener('resize', applyMasonry);

This Grid hack gets us close to a masonry layout — items stack, gaps fill, and it looks decent enough. But let’s be real: it’s not there yet. The code sample above, unlike native grid-template-rows: masonry (which is experimental and only exists on Firefox Nightly), relies on JavaScript to calculate spans, defeating the “no JavaScript” dream. The JavaScript logic works by recalculating spans on resize or content change. As Chris Coyier noted in his critique of similar hacks, this can lead to lag on complex pages.

Also, the logical DOM order might not match the visual flow, a concern Rachel Andrew raised about masonry layouts generally. Finally, if images load slowly or content shifts (e.g., lazy-loaded media), the spans need recalculation, risking layout jumps. It’s not really the ideal hack; I’m sure you’d agree.
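One way to make the recalculation less brittle is to let the browser report size changes instead of listening only for window events. Here’s a small sketch that layers the standard ResizeObserver API on top of the applyMasonry() function above (still a hack, just a more targeted one):

// Re-run the layout whenever the grid container itself resizes,
// not just when the window does.
const masonryGrid = document.querySelector('.masonry-grid');
new ResizeObserver(() => applyMasonry()).observe(masonryGrid);

// Lazy-loaded images still need their own hook:
masonryGrid.querySelectorAll('img').forEach((img) =>
  img.addEventListener('load', applyMasonry)
);

It still recalculates spans in JavaScript, though, so the underlying ergonomic problem remains.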

Developers need a smooth experience, and ergonomically speaking, hacking Grid with scripts is a mental juggling act. It forces you to switch between CSS and JavaScript to tweak a layout. A native solution, whether Grid-powered or a new module, has to nail effortless responsiveness, neat rendering, and a workflow that does not make you break your tools.

That’s why this debate matters — our daily grind demands it.

Option 1: Extending CSS Grid For Masonry

One way forward is to strengthen CSS Grid with masonry powers. As of this writing, that extension exists as a draft: grid-template-rows: masonry is part of CSS Grid Level 3 and is currently experimental in Firefox Nightly. The columns of this layout remain a regular grid axis while the rows take on masonry behaviour. The child elements are then laid out item by item along the rows, as with the grid layout’s automatic placement. With this layout, items flow vertically, respecting column tracks but not row constraints.

This option leaves Grid as your go-to layout system but allows it to handle the flowing, gap-filling stacks we crave.

.masonry-grid {
  display: grid;
  gap: 10px;
  grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
  grid-template-rows: masonry;
}

First off, this approach builds on CSS Grid’s familiarity and robust tooling. As a front-end developer, there’s a chance you’ve already played with grid-template-columns or grid-area, so you’re halfway up the learning curve. Masonry only extends the existing capabilities, eliminating the need to learn a whole new syntax from scratch. And that tooling (Chrome DevTools’ grid overlay, Firefox’s layout inspector) comes along for free, removing the need for JavaScript hacks.

Not so fast: there are limitations. Grid’s specifications already include properties like align-content and grid-auto-flow. Stacking masonry on the list risks turning it into a labyrinth.

Then there are the edge cases. What happens when you want an item to span multiple columns and flow masonry-style? Or when gaps between items don’t align across columns? The specs are still foggy here, and early tests hint at bugs like items jumping unpredictably if content loads dynamically. This issue could break layouts, especially in responsive designs. Browser compatibility is a problem, too: the feature is still experimental and, even with polyfills, works in no browser other than Firefox Nightly. Not something you’d want to try in your next client’s project, right?

Option 2: A Standalone Masonry Module

What if we had a display: masonry approach instead? Indulge me for a few minutes. This isn’t just wishful thinking. Early CSS Working Group chats have floated the idea, and it’s worth picturing how it could improve layouts. Let’s dive into the vision, how it might work, and what it gains or loses in the process.

Imagine a layout system that doesn’t lean on Grid’s rigid tracks or Flexbox’s linear flow but instead thrives on vertical stacking with a horizontal twist. The goal? A clean slate for masonry’s signature look: items cascading down columns, filling gaps naturally, no hacks required. Inspired by murmurs in CSSWG discussions and the Chrome team’s alternative proposal, this module would prioritise fluidity over structure, giving designers a tool that feels as intuitive as the layouts they’re chasing. Think Pinterest but without JavaScript scaffolding.

Here’s the pitch: a display value named masonry kicks off a flow-based system where items stack vertically by default, adjusting horizontally to fit the container. You’d control the direction and spacing with simple properties like the following:

.masonry {
  display: masonry;
  masonry-direction: column;
  gap: 1rem;
}

Want more control? Hypothetical extras like masonry-columns: auto could mimic Grid’s repeat(auto-fill, minmax()), while masonry-align: balance might even out column lengths for a polished look. It’s less about precise placement (Grid’s strength) and more about letting content breathe and flow, adapting to whatever screen size is thrown at it. The big win here is a clean break from Grid’s rigid order. A standalone module keeps them distinct: Grid for order, Masonry for flow. No more wrestling with Grid properties that don’t quite fit; you get a system tailored to the job.
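Sketched out, those hypothetical extras might read like this (to be clear, none of these properties exist in any specification or browser today; this is speculative syntax):

.masonry {
  display: masonry;
  masonry-direction: column;
  masonry-columns: auto;  /* hypothetical: akin to repeat(auto-fill, minmax()) */
  masonry-align: balance; /* hypothetical: evens out column lengths */
  gap: 1rem;
}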

Of course, it’s not all smooth sailing. A brand-new spec means starting from zero. Browser vendors would need to rally behind it, which can be slow. It might also create a dilemma of choice, with developers asking questions like: “Do I use Grid or Masonry for this gallery?” But hear me out: this proposed module might muddy the waters before it clears them, but once the water is clear, it’s safe for use by all and sundry.

Item Flow: A Unified Layout Resolution

In March 2025, Apple’s WebKit team proposed Item Flow, a new system that unifies concepts from Flexbox, Grid, and masonry into a single set of properties. Rather than choosing between enhancing Grid or creating a new masonry module, Item Flow merges their strengths, replacing flex-flow and grid-auto-flow with a shorthand called item-flow. This system introduces four longhand properties:

  • item-direction
    Controls flow direction (e.g., row, column, row-reverse).
  • item-wrap
    Manages wrapping behaviour (e.g., wrap, nowrap, wrap-reverse).
  • item-pack
    Determines packing density (e.g., sparse, dense, balance).
  • item-slack
    Adjusts tolerance for layout adjustments, allowing items to shrink or shift to fit.

Item Flow aims to make masonry a natural outcome of these properties, not a separate feature. For example, a masonry layout could be achieved with:

.container {
  display: grid; /* or flex */
  item-flow: column wrap dense;

  /* longhand version */
  item-direction: column;
  item-wrap: wrap;
  item-pack: dense;

  gap: 1rem;
}

This setup allows items to flow vertically, wrap into columns, and pack tightly, mimicking masonry’s organic arrangement. The dense packing option, inspired by Grid’s auto-flow: dense, reorders items to minimise gaps, while item-slack could fine-tune spacing for visual balance.

Item Flow’s promise lies in its wide range of use cases. It enhances Grid and Flexbox with features like nowrap for Grid or balance packing for Flexbox, addressing long-standing developer wishlists. However, the proposal is still under discussion, and properties like item-slack face naming debates due to clarity issues for non-native English speakers.

The downside? Item Flow is a future-facing concept, and it has not yet been implemented in browsers as of April 2025. Developers must wait for standardisation and adoption, and the CSS Working Group is still gathering feedback.

What’s The Right Path?

While there is no direct answer to that question, the masonry debate hinges on balancing simplicity, performance, and flexibility. Extending the Grid with masonry is tempting but risks overcomplicating an already robust system. A standalone display: masonry module offers clarity but adds to CSS’s learning curve. Item Flow, the newest contender, proposes a unified system that could make masonry a natural extension of Grid and Flexbox, potentially putting the debate to rest at last.

Each approach has trade-offs:

  • Grid with Masonry: Familiar but potentially clunky, with accessibility and spec concerns.
  • New Module: Clean and purpose-built, but requires learning new syntax.
  • Item Flow: Elegant and versatile but not yet available, with ongoing debates over naming and implementation.

Item Flow’s ability to enhance existing layouts while supporting masonry makes it a compelling option, but its success depends on browser adoption and community support.

Conclusion

So, where do we land after all this? The masonry showdown boils down to three paths: the extension of masonry into CSS Grid, a standalone module for masonry, or Item Flow. Now, the question is, will CSS finally free us from JavaScript for masonry, or are we still dreaming?

Grid’s teasing us with a taste, and a standalone module’s whispering promises — but the finish line’s unclear, and WebKit swoops in with a killer merge shorthand, Item Flow. Browser buy-in, community push, and a few more spec revisions might tell us. For now, it’s your move — test, tweak, and weigh in. The answer’s coming, one layout at a time.

References

]]>
hello@smashingmagazine.com (Gabriel Shoyombo)
<![CDATA[How To Launch Big Complex Projects]]> https://smashingmagazine.com/2025/05/how-launch-big-complex-projects/ https://smashingmagazine.com/2025/05/how-launch-big-complex-projects/ Mon, 05 May 2025 10:00:00 GMT Think about your past projects. Did they finish on time and on budget? Did they end up getting delivered without cutting corners? Did they get disrupted along the way with a changed scope, conflicting interests, unexpected delays, and surprising blockers?

Chances are high that your recent project was over schedule and over budget — just like a vast majority of other complex UX projects. Especially if it entailed at least some sort of complexity, be it a large group of stakeholders, a specialized domain, internal software, or expert users. It might have been delayed, moved, canceled, “refined,” or postponed. As it turns out, in many teams, shipping on time is an exception rather than the rule.

In fact, things almost never go according to plan — and on complex projects, they don’t even come close. So, how can we prevent it from happening? Well, let’s find out.

99.5% Of Big Projects Overrun Budgets And Schedules

As people, we are inherently over-optimistic and over-confident. It’s hard to study and process everything that can go wrong, so we tend to focus on the bright side. However, unchecked optimism leads to unrealistic forecasts, poorly defined goals, better options ignored, problems not spotted, and no contingencies to counteract the inevitable surprises.

Hofstadter’s Law states that a project always takes longer than you expect — even when you take Hofstadter’s Law into account. Put differently, it always takes longer than you think, however cautious you might be.

As a result, only 0.5% of big projects make the budget and the schedule — e.g., big relaunches, legacy re-dos, big initiatives. We might try to mitigate risk by adding a 15–20% buffer, but it rarely helps. Many of these projects don’t follow a “normal” (Bell curve) distribution but are rather “fat-tailed”.

And there, overruns of 60–500% are typical and turn big projects into big disasters.

Reference-Class Forecasting (RCF)

We often assume that if we just thoroughly collect all the costs needed and estimate complexity or efforts, we should get a decent estimate of where we will eventually land. Nothing could be further from the truth.

Complex projects have plenty of unknown unknowns. No matter how many risks, dependencies, and upstream challenges we identify, there are many more we can’t even imagine. The best way to be more accurate is to define a realistic anchor — for time, costs, and benefits — from similar projects done in the past.

Reference-class forecasting follows a very simple process:

  • First, we find the reference projects that have the most similarities to our project.
  • If the distribution follows the Bell curve, use the mean value + 10–15% contingency (see the sketch after this list).
  • If the distribution is fat-tailed, invest in profound risk management to prevent big challenges down the line.
  • Tweak the mean value only if you have very good reasons to do so.
  • Set up a database to track past projects in your company (for cost, time, benefits).
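For the Bell-curve case, the arithmetic is deliberately simple. Here is a small sketch in JavaScript (the project numbers are invented):

// Reference-class forecast: anchor on similar past projects,
// then add a modest contingency.
function rcfEstimate(referenceDays, contingency = 0.15) {
  const mean =
    referenceDays.reduce((sum, days) => sum + days, 0) / referenceDays.length;
  return mean * (1 + contingency);
}

// Five comparable redesigns took between 80 and 140 person-days:
console.log(rcfEstimate([80, 95, 110, 120, 140])); // ≈ 125 person-days

The point is the anchor: the estimate starts from what similar projects actually took, not from what we hope this one will take.
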
Mapping Out Users’ Success Moments

Over the last few years, I’ve been using the technique called “Event Storming,” suggested by Matteo Cavucci many years back. The idea is to capture users’ experience moments through the lens of business needs. With it, we focus on the desired business outcome and then use research insights to project events that users will be going through to achieve that outcome.

The image above shows the process in action — with different lanes representing different points of interest, and prioritized user events themed into groups, along with risks, bottlenecks, stakeholders, and users to be involved — as well as UX metrics. From there, we can identify common themes that emerge and create a shared understanding of risks, constraints, and people to be involved.

Throughout that journey, we identify key milestones and break users’ events into two main buckets:

  1. User’s success moments (which we want to dial up ↑);
  2. User’s pain points or frustrations (which we want to dial down ↓).

We then break out into groups of 3–4 people to separately prioritize these events and estimate their impact and effort on Effort vs. Value curves by John Cutler.

The next step is identifying key stakeholders to engage with, risks to consider (e.g., legacy systems, 3rd-party dependency, etc.), resources, and tooling. We reserve special time to identify key blockers and constraints that endanger a successful outcome or slow us down. If possible, we also set up UX metrics to track how successful we actually are in improving the current state of UX.

It might seem like a bit too much planning for just a UX project, but it has been helping quite significantly to reduce failures and delays and also maximize business impact.

When speaking to businesses, I usually speak about better discovery and scoping as the best way to mitigate risk. We can, of course, throw ideas into the market and run endless experiments. But not for critical projects that get a lot of visibility, e.g., replacing legacy systems or launching a new product. They require thorough planning to prevent big disasters, urgent rollbacks, and... black swans.

Black Swan Management

Every other project encounters what’s called a Black Swan — a low-probability, high-consequence event that is more likely to occur when projects stretch over longer periods of time. It could be anything from restructuring teams to a change of priorities, which then leads to cancellations and rescheduling.

Little problems have an incredible capacity to compound into large, disastrous problems — ruining big projects and sinking big ambitions at a phenomenal scale. The more little problems we can design around early, the more chances we have to get the project out the door successfully.

So we make projects smaller and shorter. We mitigate risks by involving stakeholders early. We provide less surface for Black Swans to emerge. One good way to get there is to always start every project with a simple question: “Why are we actually doing this project?” The answers often reveal not just motivations and ambitions, but also the challenges and dependencies hidden between the lines of the brief.

And as we plan, we could follow a “right-to-left thinking”. We don’t start with where we are, but rather where we want to be. And as we plan and design, we move from the future state towards the current state, studying what’s missing or what’s blocking us from getting there. The trick is: we always keep our end goal in mind, and our decisions and milestones are always shaped by that goal.

Manage Deficit Of Experience

Complex projects start with a deep deficit of experience. To increase the chances of success, we need to minimize the chance of mistakes even happening. That means trying to make the process as repetitive as possible — with smaller “work modules” repeated by teams over and over again.

🚫 Beware of unchecked optimism → unrealistic forecasts.
🚫 Beware of “cutting-edge” → untested technology spirals risk.
🚫 Beware of “unique” → high chance of exploding costs.
🚫 Beware of “brand new” → rely on tested and reliable.
🚫 Beware of “the biggest” → build small things, then compose.

It also means relying on the reliable: from well-tested tools to stable teams that have worked well together in the past. Complex projects aren’t a good place to innovate processes, mix and match teams, or try out more affordable vendors.

Typically, these are extreme costs in disguise, skyrocketing delivery delays, and unexpected expenses.

Think Slow, Act Fast

In the spirit of looming deadlines, many projects rush into delivery mode before the scope of the project is well-defined. It might work for fast experiments and minor changes, but that’s a red flag for larger projects. The best strategy is to spend more time in planning before designing a single pixel on the screen.

But planning isn’t an exercise in abstract imaginative work. Good planning should include experiments, tests, simulations, and refinements. It must include the steps of how we reduce risks and how we mitigate risks when something unexpected (but frequent in other similar projects) happens.

Good Design Is Good Risk Management

When speaking about design and research to senior management, position it as a powerful risk management tool. Good design that involves concept testing, experimentation, user feedback, iterations, and refinement of the plan is cheap and safe.

Eventually it might need more time than expected, but it’s much — MUCH! — cheaper than delivery. Delivery is extremely cost-intensive, and if it relies on wrong assumptions and poor planning, then that’s when the project becomes vulnerable and difficult to move or re-route.

Wrapping Up

The insights above come from a wonderful book on How Big Things Get Done by Prof. Bent Flyvbjerg and Dan Gardner. It goes into all the fine details of how big projects fail and when they succeed. It’s not a book about design, but it is a fantastic book for designers who want to plan and estimate better.

Not every team will work on a large, complex project, but sometimes these projects become inevitable — when dealing with legacy, projects with high visibility, layers of politics, or an entirely new domain where the company moves.

Good projects that succeed have one thing in common: they dedicate a majority of time to planning and managing risks and unknown unknowns. They avoid big-bang revelations, but instead test continuously and repeatedly. That’s your best chance to succeed — work around these unknowns, as you won’t be able to prevent them from emerging entirely anyway.

New: How To Measure UX And Design Impact

Meet Measure UX & Design Impact (8h), a practical guide for designers and UX leads to shape, measure and explain your incredible UX impact on business. Recorded and updated by Vitaly Friedman. Use the friendly code 🎟 IMPACT to save 20% off today. Jump to the details.

Video + UX Training

$495.00 $799.00 Get Video + UX Training

25 video lessons (8h) + Live UX Training.
100 days money-back-guarantee.

Video only

$250.00 $395.00
Get the video course

25 video lessons (8h). Updated yearly.
Also available as a UX Bundle with 2 video courses.

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[WCAG 3.0’s Proposed Scoring Model: A Shift In Accessibility Evaluation]]> https://smashingmagazine.com/2025/05/wcag-3-proposed-scoring-model-shift-accessibility-evaluation/ https://smashingmagazine.com/2025/05/wcag-3-proposed-scoring-model-shift-accessibility-evaluation/ Fri, 02 May 2025 11:00:00 GMT Since their introduction in 1999, the Web Content Accessibility Guidelines (WCAG) have shaped how we design and develop inclusive digital products. The WCAG 2.x series, released in 2008, introduced clear technical criteria judged in a binary way: either a success criterion is met or not. While this model has supported regulatory clarity and auditability, its “all-or-nothing” nature often fails to reflect the nuance of actual user experience (UX).

Over time, that disconnect between technical conformance and lived usability has become harder to ignore. People engage with digital systems in complex, often nonlinear ways: navigating multistep flows, dynamic content, and interactive states. In these scenarios, checking whether an element passes a rule doesn’t always answer the main question: can someone actually use it?

WCAG 3.0 is still in draft, but is evolving — and it represents a fundamental rethinking of how we evaluate accessibility. Rather than asking whether a requirement is technically met, it asks how well users with disabilities can complete meaningful tasks. Its new outcome-based model introduces a flexible scoring system that prioritizes usability over compliance, shifting focus toward the quality of access rather than the mere presence of features.

Draft Status: Ambitious, But Still Evolving

WCAG 3.0 was first introduced as a public working draft by the World Wide Web Consortium (W3C) Accessibility Guidelines Working Group in early 2021. The draft is still under active development and is not expected to reach W3C Recommendation status for several years, if not decades, by some accounts. This extended timeline reflects both the complexity of the task and the ambition behind it:

WCAG 3.0 isn’t just an update — it’s a paradigm shift.

Unlike WCAG 2.x, which focused primarily on web pages, WCAG 3.0 aims to cover a much broader ecosystem, including applications, tools, connected devices, and emerging interfaces like voice interaction and extended reality. It also rebrands itself as the W3C Accessibility Guidelines (while the WCAG acronym remains the same), signaling that accessibility is no longer a niche concern — it’s a baseline expectation across the digital world.

Importantly, WCAG 3.0 will not immediately replace 2.x. Both standards will coexist, and conformance to WCAG 2.2 will continue to be valid and necessary for some time, especially in legal and policy contexts.

This expansion isn’t just technical.

WCAG 3.0 reflects a deeper philosophical shift: accessibility is moving from a model of compliance toward a model of effectiveness.

Rules alone can’t capture whether a system truly works for someone. That’s why WCAG 3.0 leans into flexibility and future-proofing, aiming to support evolving technologies and real-world use over time. It formalizes a principle long understood by practitioners:

Inclusive design isn’t about passing a test; it’s about enabling people.
A New Structure: From Success Criteria To Outcomes And Methods

WCAG 2.x is structured around four foundational principles — Perceivable, Operable, Understandable, and Robust (aka POUR) — and testable success criteria organized into three conformance levels (A, AA, AAA). While technically precise, these criteria often emphasize implementation over impact.

WCAG 3.0 reorients this structure toward user needs and real outcomes. Its hierarchy is built on:

  • Guidelines: High-level accessibility goals tied to specific user needs.
  • Outcomes: Testable, user-centered statements (e.g., “Users have alternatives for time-based media”).
  • Methods: Technology-specific or agnostic techniques that help achieve the outcomes, including code examples and test instructions.
  • How-To Guides: Narrative documentation that provides practical advice, user context, and design considerations.

This shift is more than organizational. It reflects a deeper commitment to aligning technical implementation with UX. Outcomes speak the language of capability, which is about what users should be able to do (rather than just technical presence).

Crucially, outcomes are also where conformance scoring begins to take shape. For example, imagine a checkout flow on an e-commerce website. Under WCAG 2.x, if even one field in the checkout form lacks a label, the process may fail AA conformance entirely. However, under WCAG 3.0, that same flow might be evaluated across multiple outcomes (such as keyboard navigation, form labeling, focus management, and error handling), with each outcome receiving a separate score. If most areas score well but the error messaging is poor, the overall rating might be “Good” instead of “Excellent”, prompting targeted improvements without negating the entire flow’s accessibility.

From Binary Checks To Graded Scores

Rather than relying on pass or fail outcomes, WCAG 3.0 introduces a scoring model that reflects how well accessibility is supported. This shift allows teams to recognize partial successes and prioritize real improvements.

How Scoring Works

Each outcome in WCAG 3.0 is evaluated through one or more atomic tests. These can include the following:

  • Binary tests: “Yes” and “no” outcomes (e.g., does every image have alternative text?)
  • Percentage-based tests: Coverage-based scoring (e.g., what percentage of form fields have labels?)
  • Qualitative tests: Rated judgments based on criteria (e.g., how descriptive is the alternative text?)

The result of these tests produces a score for each outcome, often normalized on a 0–4 or 0–5 scale, with labels like Poor, Fair, Good, and Excellent. These scores are then aggregated across functional categories (vision, mobility, cognition, etc.) and user flows.

This allows teams to measure progress, not just compliance. A product that improves from “Fair” to “Good” over time shows real evolution — a concept that doesn’t exist in WCAG 2.x.

Critical Errors: A Balancing Mechanism

To ensure that severity still matters, WCAG 3.0 introduces critical errors, which are high-impact accessibility failures that can override an otherwise positive score.

For example, consider a checkout flow. Under WCAG 2.x, a single missing label might cause the entire flow to fail conformance. WCAG 3.0, however, evaluates multiple outcomes — like form labeling, keyboard access, and error handling — each with its own score. Minor issues, such as unclear error messages or a missing label on an optional field, might lower the rating from “Excellent” to “Good”, without invalidating the entire experience.

But if a user cannot complete a core action, like submitting the form, making a purchase, or logging in, that constitutes a critical error. These failures directly block task completion and significantly reduce the overall score, regardless of how polished the rest of the experience is.

On the other hand, problems with non-essential features — like uploading a profile picture or changing a theme color — are considered lower-impact and won’t weigh as heavily in the evaluation.
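To make that interplay tangible, here is a deliberately simplified sketch in JavaScript. The draft does not define exact formulas, so the 0–4 scale, the thresholds, and the override rule below are all assumptions, not part of WCAG 3.0:

// Hypothetical outcome scores for the checkout flow (0–4 scale).
const outcomes = [
  { name: "form labeling", score: 4, criticalError: false },
  { name: "keyboard access", score: 4, criticalError: false },
  { name: "error handling", score: 2, criticalError: false },
  { name: "task completion", score: 0, criticalError: true }, // cannot submit the form
];

function rateFlow(outcomes) {
  // A critical error overrides an otherwise positive aggregate score.
  if (outcomes.some((o) => o.criticalError)) return "Does not conform";
  const avg = outcomes.reduce((sum, o) => sum + o.score, 0) / outcomes.length;
  if (avg >= 3.5) return "Excellent";
  if (avg >= 2.5) return "Good";
  if (avg >= 1.5) return "Fair";
  return "Poor";
}

console.log(rateFlow(outcomes)); // "Does not conform", despite an average of 2.5

Remove the critical error, and the same numbers land at “Good”, which is exactly the kind of nuance the binary model could not express.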

Conformance Levels: Bronze, Silver, Gold

In place of categorizing conformance in tiers of Level A, Level AA, and Level AAA, WCAG 3.0 proposes three different conformance tiers:

  • Bronze: The new minimum. It is comparable to WCAG 2.2 Level AA, but based on scoring and foundational outcomes. The requirements are considered achievable via automated and guided manual testing.
  • Silver: This is a higher standard, requiring broader coverage, higher scores, and usability validation from people with disabilities.
  • Gold: The highest tier. Represents exemplary accessibility, likely requiring inclusive design processes, innovation, and extensive user involvement.

Unlike in WCAG 2.2, where Level AAA is often seen as aspirational and inconsistent, these levels are intended to incentivize progression. They can also be scoped in the sense that teams can claim conformance for a checkout flow, mobile app, or specific feature, allowing iterative improvement.

What You Should Do Now

While WCAG 3.0 is still being developed, its direction is clear. That said, it’s important to acknowledge that the guidelines are not expected to be finalized for several years. Here’s how teams can prepare:

  • Continue pursuing WCAG 2.2 Level AA. It remains the most robust, recognized standard.
  • Familiarize yourself with WCAG 3.0 drafts, especially the outcomes and scoring model.
  • Start thinking in outcomes. Focus on what users need to accomplish, not just what features are present.
  • Embed accessibility into workflows. Shift left. Don’t test at the end — design and build with access in mind.
  • Involve users with disabilities early and regularly.

These practices won’t just make your product more inclusive; they’ll position your team to excel under WCAG 3.0.

Potential Downsides

Even though WCAG 3.0 presents a bold step toward more holistic accessibility, several structural risks deserve early attention, especially for organizations navigating regulation, scaling design systems, or building sustainable accessibility practices. Importantly, many of these risks are interconnected: challenges in one area may amplify issues in others.

Subjective Scoring

The move from binary pass or fail criteria to scored evaluations introduces room for subjective interpretation. Without standardized calibration, the same user flow might receive different scores depending on the evaluator. This makes comparability and repeatability harder, particularly in procurement or multi-vendor environments. A simple alternative text might be rated as “adequate” by one team and “unclear” by another.

Reduced Compliance Clarity

That same subjectivity leads to a second concern: the erosion of clear compliance thresholds. Scored evaluations replace the binary clarity of “compliant” or “not” with a more flexible, but less definitive, outcome. This could complicate legal enforcement, contractual definitions, and audit reporting. In practice, a product might earn a “Good” rating while still presenting critical usability gaps for certain users, creating a disconnect between score and actual access.

Legal and Policy Misalignment

As clarity around compliance blurs, so does alignment with existing legal frameworks. Many current laws explicitly reference WCAG 2.x and its A, AA, and AAA levels (e.g. Section 508 of the Rehabilitation Act of 1973, European Accessibility Act, The Public Sector Bodies (Websites and Mobile Applications) (No. 2) Accessibility Regulations 2018).

Until WCAG 3.0 is formally mapped to those standards, its use in regulated contexts may introduce risk. Teams operating in healthcare, finance, or public sectors will likely need to maintain dual conformance strategies in the interim, increasing cost and complexity.

Risk Of Minimum Viable Accessibility

Perhaps most concerning, this ambiguity can set the stage for a “minimum viable accessibility” mindset. Scored models risk encouraging “Bronze is good enough” thinking, particularly in deadline-driven environments. A team might deprioritize improvements once they reach a passing grade, even if essential barriers remain.

For example, a mobile app with strong keyboard support but missing audio transcripts could still achieve a passing tier, leaving some users excluded.

Conclusion

WCAG 3.0 marks a new era in accessibility — one that better reflects the diversity and complexity of real users. By shifting from checklists to scored evaluations and from rigid technical compliance to practical usability, it encourages teams to prioritize real-world impact over theoretical perfection.

As one might say, “It’s not about the score. It’s about who can use the product.” In my own experience, I’ve seen teams pour hours into fixing minor color contrast issues while overlooking broken keyboard navigation, leaving screen reader users unable to complete essential tasks. WCAG 3.0’s focus on outcomes reminds us that accessibility is fundamentally about functionality and inclusion.

At the same time, WCAG 3.0’s proposed scoring models introduce new responsibilities. Without clear calibration, stronger enforcement patterns, and a cultural shift away from “good enough,” we risk losing the very clarity that made WCAG 2.x enforceable and actionable. The promise of flexibility only works if we use it to aim higher, not to settle earlier.

For teams across design, development, and product leadership, this shift is a chance to rethink what success means. Accessibility isn’t about ticking boxes — it’s about enabling people.

By preparing now, being mindful of the risks, and focusing on user outcomes, we don’t just get ahead of WCAG 3.0 — we build digital experiences that are truly usable, sustainable, and inclusive.

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Mikhail Prosmitskiy)
<![CDATA[Make Every Day Count (May 2025 Wallpapers Edition)]]> https://smashingmagazine.com/2025/04/desktop-wallpaper-calendars-may-2025/ https://smashingmagazine.com/2025/04/desktop-wallpaper-calendars-may-2025/ Wed, 30 Apr 2025 10:00:00 GMT Sometimes, it doesn’t take a lot to get inspired. A short bike ride to soak in the sun, a coffee break with a friend, or listening to your favorite song might be just what you need to spark some fresh ideas on a busy day. And if that doesn’t do the trick, we have a little extra inspiration boost for you: desktop wallpapers!

For this post, artists and designers from across the globe once again challenged their creative skills and designed desktop wallpapers to cater for some fresh inspiration this May — just like it has been a monthly tradition here at Smashing Magazine for more than 14 years already. You’ll find their artworks compiled below, along with a selection of May favorites from our wallpapers archives that are just too good to be forgotten. A big thank-you to everyone who shared their designs with us this month — this post wouldn’t be possible without your wonderful support!

If you too would like to get featured in one of our upcoming wallpapers posts, please don’t hesitate to submit your design. We can’t wait to see what you’ll come up with! Happy May!

  • You can click on every image to see a larger preview.
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists full freedom to explore their creativity and express their emotions and experiences through their work. This is also why the themes of the wallpapers weren’t influenced by us in any way but rather designed from scratch by the artists themselves.

Squeeze The Day

“Happy National Lemonade Day! Whether you like it sweet, tart, sparkling, or spiked — today’s the perfect excuse to pour yourself a glass of sunshine. Support a local lemonade stand, whip up your own zesty creation, or just soak in the summer vibes. However you sip it, make it refreshing, bold, and bright. Cheers to lemons and all the lemonade moments life brings!” — Designed by PopArt Studio from Serbia.

My Crazy Thoughts

“In this illustration, I want to express myself, just as I am, with all my upside-down, crazy thoughts. There are little things that make me happy: I love sitting quietly by the window and watching the rain. I enjoy watching the changes in the sky, the colors, and the clouds. It feels like every cloud says something to me, reminding me how important it is to give myself time. Nature always speaks to us through colors, shapes, and space. But nowadays, people are too busy — buying time, spending hours on movies and OTT platforms — always trying to prove themselves to others. We have forgotten how to simply be with ourselves, to connect with nature.” — Designed by Design Studio from India.

Lily Of The Valley

“In May, a very particular flower blooms, adorning the fields with little white bells. Associated with the first of May in France (‘la fête du travail’), the Lily of the Valley (‘muguet’ in French) is a very recognizable plant, and this one is entirely made of paper in the traditional papercraft art, without any glue, respecting nature in every way.” — Designed by Caroline Boire from France.

May The Fourth Be With You

“I love Star Wars and spring! I chose to combine those aesthetics to create a minimal wallpaper design for those who wanted a sweet memory of C-3PO and R2-D2. Culturally, I believe Star Wars is huge both nationally and internationally and teaches loads of good lessons, so what better theme to pull from! I also enjoy hand-drawn elements, so I drew this image with charcoal brushes in Procreate and then dropped it in as a JPEG.” — Designed by Chloe Mills from Texas, United States.

Ladies And Gentlemen

Designed by Ricardo Gimenes from Spain.

Through The Castle’s Eye

“Through a crumbling castle window, nature weaves its way back — framing a white house, green trees, and soft skies. A peaceful glimpse of history and new life intertwined.” — Designed by LibraFire from Serbia.

Crayfish Party

Designed by Ricardo Gimenes from Spain.

Under The Flower Moon

“Two ladybugs sat quietly on a flower, watching the Flower Moon rise high above. It was May, the time when blossoms wake and the moon whispers of new beginnings. Together, they listened.” — Designed by Ginger IT Solutions from Serbia.

International Labour Day

“International Labour Day on May 1 celebrates the contributions and achievements of workers worldwide. Originating from 19th-century labor movements advocating for an eight-hour workday, it highlights the importance of fair wages, safe workplaces, and workers’ rights. Many countries hold events, parades, and rallies to honor this important day.” — Designed by Design Studio from India.

Hello May

“The longing for warmth, flowers in bloom, and new beginnings is finally over as we welcome the month of May. From celebrating nature on the days of turtles and birds to marking the days of our favorite wine and macarons, the historical celebrations of the International Workers’ Day, Cinco de Mayo, and Victory Day, to the unforgettable ‘May the Fourth be with you’. May is a time of celebration — so make every May day count!” — Designed by PopArt Studio from Serbia.

Navigating The Amazon

“We are in May, the spring month par excellence, and we celebrate it in the Amazon jungle.” — Designed by Veronica Valenzuela Jimenez from Spain.

Bat Traffic

Designed by Ricardo Gimenes from Sweden.

Understand Yourself

“Sunsets in May are the best way to understand who you are and where you are heading. Let’s think more!” — Designed by Igor Izhik from Canada.

Poppies Paradise

Designed by Nathalie Ouederni from France.

The Mushroom Band

“My daughter asked me to draw a band of mushrooms. Here it is!” — Designed by Vlad Gerasimov from Georgia.

April Showers Bring Magnolia Flowers

“April and May are usually when everything starts to bloom, especially the magnolia trees. I live in an area where there are many and when the wind blows, the petals make it look like snow is falling.” — Designed by Sarah Masucci from the United States.

ARRR2-D2

Designed by Ricardo Gimenes from Sweden.

Add Color To Your Life!

“This month is dedicated to flowers, which join us and brighten our days, giving a little more color to our daily life.” — Designed by Verónica Valenzuela from Spain.

Lake Deck

“I wanted to make a big painterly vista with some mountains and a deck and such.” — Designed by Mike Healy from Australia.

Tentacles

Designed by Julie Lapointe from Canada.

Today, Yesterday, Or Tomorrow

Designed by Alma Hoffmann from the United States.

The Monolith

Designed by Ricardo Gimenes from Sweden.

Asparagus Say Hi!

“In my part of the world, May marks the start of seasonal produce, starting with asparagus. I know spring is finally here and summer is around the corner when locally-grown asparagus shows up at the grocery store.” — Designed by Elaine Chen from Toronto, Canada.

Spring Gracefulness

“We don’t usually count the breaths we take, but observing nature in May, we can’t count our breaths being taken away.” — Designed by Ana Masnikosa from Belgrade, Serbia.

Blooming May

“In spring, especially in May, we all want bright colors and lightness, which was not there in winter.” — Designed by MasterBundles from Ukraine.

Enjoy May!

“Springtime, especially May, is my favorite time of the year. And I like popsicles — so it’s obvious, isn’t it?” — Designed by Steffen Weiß from Germany.

Geo

Designed by Amanda Focht from the United States.

Be On Your Bike!

“May is National Bike Month! So, instead of hopping in your car, grab your bike and go. Our whole family loves that we live in our bike-friendly community. So, bike to work, to school, to the store, or to the park — sometimes it is faster. Not only is it good for the environment, but it is great exercise!” — Designed by Karen Frolo from the United States.

Duck

Designed by Madeline Scott from the United States.

Flying In The Air

“We recently changed our workplace and now we’re in a windy place, so we like the idea of flying in the air, somehow.” — Designed by Monk Software from Italy.

May Your May Be Magnificent

“May should be as bright and colorful as this calendar! That’s why our designers chose these juicy colors.” — Designed by MasterBundles from Ukraine.

Popping Into Spring

“Spring has sprung, and what better metaphor than toast popping up and out of a fun-colored toaster!” — Designed by Stephanie Klemick from Emmaus, Pennsylvania, USA.

Make A Wish

Designed by Julia Versinina from Chicago, USA.

The Green Bear

Designed by Pedro Rolo from Portugal.

Birds Of May

“Inspired by a little-known ‘holiday’ on May 4th known as ‘Bird Day’. It is the first holiday in the United States celebrating birds. Hurray for birds!” — Designed by Clarity Creative Group from Orlando, FL.

Beautiful Things

Designed by Elise Vanoorbeek from Belgium.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[How To Turn Your Figma Designs Into Live Apps With Anima Playground]]> https://smashingmagazine.com/2025/04/anima-playground-figma-designs-live-apps/ https://smashingmagazine.com/2025/04/anima-playground-figma-designs-live-apps/ Tue, 29 Apr 2025 10:00:00 GMT This article is a sponsored by Anima App

For years, designers and developers have been stuck in a frustrating loop. Designers create stunning UIs in Figma, only for developers to spend hours — or days — coding them from scratch. Along the way, details get lost, tweaks pile up, and before you know it, the whole process turns into a never-ending back-and-forth.

It’s a tale as old as modern product teams: pixel-perfect designs turned into imperfect realities, timelines stretched by repetitive tasks, and collaboration slowed by tool mismatches. Designers work in one world, developers in another — and the bridge between them has always been shaky at best.

But what if you could just… skip the painful part?

That’s where Anima Playground comes in. It’s a tool that transforms your Figma designs into fully functional web apps automatically. No more pixel-matching marathons, no more manual UI rebuilding. Just a smoother, faster way to go from a design to a live product — with AI doing the heavy lifting.

What Is Anima Playground?

Anima Playground is an AI-powered development environment that makes the jump from design to code seamless. It turns your Figma designs into clean, editable, and production-ready React components — instantly. And unlike static design-to-code tools of the past, this one goes further: it lets you add business logic, connect to APIs, and preview real-time changes right inside the playground.

In short: it's not just a handoff tool. It's where design becomes a working app.

Here’s what you can do with Anima Playground:

  • Import Figma designs exactly as they were created — layouts, styles, responsiveness, and all.
  • Generate React components instantly, with support for libraries like MUI and shadcn/ui.
  • Use AI prompts to add logic — from button clicks to dynamic lists and form validation.
  • Customize everything, with full code access and live previews.

How It Works

Easily sync your Figma designs with Anima Playground. All it takes is four quick steps.

1. Import Your Figma Designs

No clunky exports, no third-party converters. Just paste your Figma link, and Anima syncs it directly. It preserves layout, typography, responsiveness, and component structure, exactly as designed.

This step sets the foundation: Anima translates your Figma layers into React code, respecting design fidelity down to the pixel. Designers can rest easy knowing their UI won’t get “lost in translation.”

2. Convert Designs Into React Components

Once imported, your Figma designs are instantly transformed into React components. This includes:

  • Clean JSX structure
  • Tailwind, MUI, or shadcn/ui styling (you choose!)
  • Nested component trees
  • Auto-handling of responsive layouts

You can switch between UI libraries with a simple prompt or setting change — no need to rewrite everything manually. Whether you're building a startup landing page or a complex dashboard, the output is dev-ready and easy to extend.
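
To give a rough sense of the shape of that output, here is a hand-written sketch (not actual Anima output) of a small nested component tree with Tailwind-style utility classes. The component names, props, and classes are invented for illustration:

```tsx
// Illustrative sketch of design-to-code output: a nested component
// tree with utility-class styling. Not actual Anima output.
type HeroProps = { title: string; subtitle: string };

export function Hero({ title, subtitle }: HeroProps) {
  return (
    <section className="flex flex-col items-center gap-4 px-6 py-16">
      <h1 className="text-4xl font-bold text-slate-900">{title}</h1>
      <p className="max-w-xl text-center text-slate-600">{subtitle}</p>
      <CtaButton label="Get started" />
    </section>
  );
}

function CtaButton({ label }: { label: string }) {
  return (
    <button className="rounded-lg bg-indigo-600 px-5 py-2 text-white">
      {label}
    </button>
  );
}
```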

3. Add Logic With AI-Powered Prompts

Want a button to open a modal? Or a form that sends data to an API? You don’t need to write all that boilerplate yourself.

Just describe what you want using natural language — for example:

“Make this button open a signup modal.”

Anima’s AI will generate the underlying code for you — complete with state management, handlers, and reusable logic. You can always dive in and tweak the output to fit your specific app structure.

This turns design into functional UI with a level of speed that traditional front-end workflows just can’t match.
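
To picture what “state management, handlers, and reusable logic” can mean in practice, here is a hand-written sketch of the kind of React code such a prompt might yield. It is illustrative only, not actual Anima output:

```tsx
// Hand-written sketch of what "make this button open a signup modal"
// could translate to in React. Illustrative, not actual Anima output.
import { useState } from "react";

export function SignupCta() {
  const [isOpen, setIsOpen] = useState(false); // modal visibility state

  return (
    <>
      <button onClick={() => setIsOpen(true)}>Sign up</button>
      {isOpen && (
        <div role="dialog" aria-modal="true">
          <h2>Create your account</h2>
          {/* form fields would go here */}
          <button onClick={() => setIsOpen(false)}>Close</button>
        </div>
      )}
    </>
  );
}
```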

4. See Live Changes Instantly

As you make changes — whether through prompts or direct code edits — you see them reflected in real-time. Anima Playground acts as a visual IDE, combining the flexibility of code with the immediacy of design tools.

This live feedback loop means less context-switching and faster iterations. Whether you’re testing animations, layout tweaks, or new features, you get to see it before you commit to anything.

More Than Just Design-to-Code

While many tools promise “Figma to code,” Anima Playground goes beyond static conversion. It’s a fully interactive environment where real apps are born — with logic, data, and interactivity.

Some powerful features include:

  • One-click AI suggestions to enhance your UI with logic.
  • Custom component support, allowing teams to inject their own building blocks.
  • Component reuse, letting you structure apps in a scalable way.
  • Flexible framework support, starting with React and planning to support more in the future.

It’s not just for prototyping — it’s for building.

Why It Matters

The design-to-code handoff has been broken for too long. Anima Playground isn’t just another tool. It’s a game-changer. Here’s why:

  • 🚀 Speed
    What used to take days now takes minutes. You skip the repetitive coding, layout guesswork, and context switching.
  • 🎯 Accuracy
    Your designs stay true to the original. No more pixel-matching or guessing which font size the designer used.
  • 🧩 Flexibility
    Developers get full access to the code. It's not a black box — it's fully transparent and editable.
  • 🤝 Collaboration
    Designers and developers finally share the same playground — literally. This tightens feedback loops and shortens build cycles.

By making the workflow smarter, Anima Playground helps teams build better products, faster, and with fewer headaches.

Who Is It For?

Whether you’re a designer, developer, startup founder, or PM, Anima Playground removes the barriers between your ideas and real products.

  • Designers can see their visions come to life, exactly as imagined.
  • Developers can skip the grunt work and focus on logic, architecture, and business needs.
  • Teams can work together in a unified environment — no more waiting for the “handoff.”

It’s perfect for building landing pages, dashboards, internal tools, MVPs, and more.

Are You Ready To Try It?

Anima Playground and the Anima API are redefining the connection between design and development in the era of AI-powered coding. Whether you're a designer, developer, product team member, marketer, or entrepreneur, Anima empowers you to transform visual ideas into working concepts within minutes — and into fully functional products within hours.

If you’re tired of the endless design-to-development grind, it’s time to give Anima Playground a spin. Whether you’re a designer who wants to bring your vision to life or a developer looking to speed up the build process, this tool has your back.

Let your designs do more than look good — let them work!

]]>
hello@smashingmagazine.com (Anima Team)
<![CDATA[UX And Design Files Organization Template]]> https://smashingmagazine.com/2025/04/ux-design-files-organization-template/ https://smashingmagazine.com/2025/04/ux-design-files-organization-template/ Mon, 28 Apr 2025 13:00:00 GMT Are you also getting lost in all the files, deliverables, shared docs, PDFs, and reports related to your UX work? What about decisions scattered everywhere between email, Slack conversations, Dropbox folders, SharePoint, Notion, and Figma?

It’s too easy to lose important assets and too difficult to find them just when you need them. While we often speak about how to neatly organize Figma files, we rarely discuss a sensible folder structure for all our UX assets. Well, let’s change that.

(If you're looking for more insights into design patterns or measuring UX, take a look at Smart Interface Design Patterns and How To Measure UX, friendly video courses on design patterns and UX, with a live UX training coming up in a few weeks.)

Organization Starter Kit (Free Template)

A while back, I stumbled upon a neat organizational starter kit by Courtney Pester. It’s an incredibly thorough setup template to get started with and build upon. Surely your projects will require a customized setup, but it will get you running fairly quickly.

In the article, Courtney suggests breaking down all assets and resources into 7 main categories — all representing distinct parts of the project lifecycle, and neatly broken down into sub-folders:

  1. Client resources,
  2. Research & synthesis,
  3. Concept ideation & testing,
  4. Wireframes & prototypes,
  5. Meeting artifacts,
  6. Final deliverables,
  7. UI + Dev handoffs.

Every project starts by duplicating the same main folder template and adjusting it for the needs of the project. Most importantly, we choose a central place where all key assets have to be located — be it Notion, Google Drive, Dropbox, or anything else. If an important detail lands in your email or is sent to you via Slack, it has to end up in that shared space.
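
If you prefer to bootstrap the template programmatically rather than duplicating folders by hand, a small script can do it. Below is a minimal sketch in TypeScript for Node.js; the seven category names come from Courtney’s list above, while the numbering prefix and the script itself are my own illustration, not part of her kit:

```typescript
// Minimal sketch: scaffold the 7-category project template on disk.
// Category names follow the article; the numbering is an invented convention.
import { mkdirSync } from "node:fs";
import { join } from "node:path";

const categories = [
  "01 Client Resources",
  "02 Research & Synthesis",
  "03 Concept Ideation & Testing",
  "04 Wireframes & Prototypes",
  "05 Meeting Artifacts",
  "06 Final Deliverables",
  "07 UI + Dev Handoffs",
];

function scaffoldProject(root: string): void {
  for (const category of categories) {
    // recursive: true also creates the project root and skips existing dirs
    mkdirSync(join(root, category), { recursive: true });
  }
}

scaffoldProject("./2025-05-new-client-project");
```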

I really can’t emphasize enough the importance of having a shared understanding about where the files will be stored and how they will be accessed. Proper organization of assets will not happen automatically — it usually requires effort and commitment from the entire team to ensure that the shared space doesn’t end up holding just a few bits and pieces while critical details and decisions remain scattered across other channels.

Now, when we bring all documents and artefacts together, we end up with a lengthy but comprehensive folder structure.

It might appear daunting at first, but of course, the overall structure will change significantly depending on what exactly you are working on.

Beware Of Duplications

Probably the most underrated problem in any type of file structure organization is duplication and versioning. Before we start the project, we need to be very clear about what types of files should end up in the shared drive and which shouldn't. You might or might not need intermediate versions of some documents, but you definitely want to keep the final ones.

These are typically the questions I would be raising:

  • Do we need to restrict access to some sections of the folder (e.g., sensitive data)?
  • What naming conventions do we use for files/folders (e.g., semantic versioning, V1, V2, --FINAL)?
  • How do we manage deprecated or outdated files? Do we archive or delete them?
  • What would be the main communication channel for stakeholders/clients?
  • Are there any legal requirements for storing and sharing some specific files?
  • What will happen to the shared space once the project has finished?

Frankly, the reason why I raise these questions isn't only to make decisions and create some shared conventions in the team. A much more important goal is to strengthen communication channels and raise awareness. We want to establish a shared commitment and ownership over that space — mostly to avoid any key decisions falling through the cracks, resulting in severe delays, costs, or cutting corners.

Secure But Easy To Access

It might sound obvious, but worth emphasizing: if the shared space is difficult to use, it will not be used. That’s when people will find workarounds to store some of “their” assets in spaces that are more convenient to use — with pieces of information scattered all over different channels.

The shared space has to be easily accessible for everyone who should be able to access and maintain it. We most certainly want to stay secure, but setting up a multi-layered authentication process with a YubiKey and a virtual machine is unnecessary.

For most situations, a password/passkey + 2FA (2-Factor-Authentication) would be perfectly enough.

The Drawbacks Of The Tree Structure

Personally, I do have a small issue with the tree structure. Although it neatly organizes all artefacts in folders, it doesn’t really reflect the project timeline. But different assets are more important at different times of a project lifecycle. And: there are typically dependencies between different parts of a project, so it might also be a good idea to break down by time or at least tag by milestones.

For example, we might want to look up research insights related to a specific part of the project. Or review the video from usability sessions when a specific iteration was tested. Doing so with a high-level tree structure can be a bit challenging and time-consuming.

When organizing artefacts, I try to follow one single principle: put things that belong together close to each other. Typically, it means having a high-level structure with key iterations, broken down by milestones. It can live in Notion or in Miro, with each milestone linked to a Figma mock-up (not uploaded .fig files!).

Useful Tools To Organize UX Work

There are plenty of wonderful tools to help you organize and share your UX work as well:

  • Dovetail to gather customer insights in one place,
  • UserInterviews for recruiting and research work,
  • Maze, another great UX research platform,
  • Glean.ly to use as an atomic research repository,
  • Notion and Airtable for quick look-ups of all files.

And: don’t feel compelled to replicate any file structure entirely. Use it as a foundation to be inspired by and build upon. Customize away for the specific needs of your projects and your team. What works for you works for you. There is really no perfect and universal way that works out of the box.

How do you organize your files and assets? What folder structures and organization systems do you use? Share what works best for you and your team in the comments below.

Happy organizing, everyone!

New: How To Measure UX And Design Impact

Meet Measure UX & Design Impact (8h), a new practical guide for designers and UX leads to measure, track, show and report the impact of your incredible UX work on business. Use the code IMPACT 🎟 to save 20% off today. Jump to the details.

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[“Product Kondo”: A Guide To Evaluating Your Organizational Product Portfolio]]> https://smashingmagazine.com/2025/04/product-kondo-guide-evaluating-organizational-product-portfolio/ https://smashingmagazine.com/2025/04/product-kondo-guide-evaluating-organizational-product-portfolio/ Fri, 25 Apr 2025 13:00:00 GMT When building digital products, thinking in terms of single features and urgent client needs can lead to a large portfolio of products with high maintenance costs. At first, this approach makes sense, as you’re offering new value to customers and keeping important clients happy. But, over time, you often end up with a collection of highly bespoke solutions that ignore two key principles:

  1. Your product portfolio should cater to your core customer segments and meet their needs.
  2. Your product portfolio should balance the short-term benefits of bespoke solutions against long-term maintenance costs while aligning with your business strategy.

So the reality often looks like this: large legacy product portfolios have grown over time, and the effort required to clean up is hard to prioritize against other seemingly more pressing topics.

This article highlights the benefits of going through a clean-up exercise and explains how to conduct a “Product Kondo” exercise on your product portfolio. In the spirit of Marie Kondo, the Japanese tidying expert who teaches people to keep what brings them joy, discard what they no longer need, and organize what remains into a workable order, this exercise seeks to identify the most valuable items for both your business and your customers. The article discusses the issues with large legacy portfolios and explains how to simplify and organize them into customer-centric portfolios, with stakeholder buy-in throughout the process.

Overflowing Product Cupboards

There are many reasons why an organization might end up with a large legacy product portfolio which, much like the cupboards organized by Marie Kondo, is in dire need of a good clean-up. Whether your portfolio has overgrown from crafting bespoke solutions for important enterprise clients (a common B2B scenario), from testing new features with a B2C customer segment, or for various other reasons, incentive structures chief among them, overgrown portfolios are very common. And the problem is that these solutions don’t just need to be developed in the first place; they need to be maintained, and that gets ever more costly and complex over time.

While this might be oversimplified, the general logic holds true: the more bespoke your product portfolio, the harder it is to keep clean and tidy. Or as Marie Kondo would say, “In a messy cupboard, it’s impossible to find the pieces that truly bring you joy.” In this context, joy translates into:

  1. Value for the customer,
  2. Revenue for the business.

If you want to work out how to find that joy in your product portfolio again, this article outlines the practical steps taken for such a “Product Kondo” exercise in a global not-for-profit organization with a large legacy portfolio, including the moment when theory met reality, and the learnings from this effort.

We conducted this clean-up in a globally distributed organization undergoing a wider transformation. For more than 20 years, the organization had been gathering and distributing data in various formats: from raw to modelled data, scores, and advanced data products. However, it had not been focused on customer centricity nor regarded products as strategic differentiators. This meant that key indicators of success for product organizations had never been tracked. So the challenge was to map out and simplify the portfolio with very few indicators available to track product performance (e.g., user analytics data).

So, how do you start understanding where the value lies in your portfolio and what factors are driving this portfolio clean-up in the first place?

As part of the wider organizational transformation, one consideration was to simplify the product portfolio in order to reduce maintenance costs and the technical effort required for a planned migration to a new platform. Another important concern was to align future development with the newly developed business strategy. Therefore, reducing costs and planning for the future were the key drivers.

The “Product Kondo” Portfolio Clean-up

So if you find yourself in a similar situation, with a complex legacy portfolio where features have been added over many years but hardly anything has ever been sunset, a “Product Kondo” clean-up, i.e., a cleaning out of your product cupboards, might be what’s needed.

To do that, it’s useful to go in with two ideas:

  • Transparency about the need to simplify;
  • Transparency about how decisions will be made, so teams are on board and able to contribute.

Getting buy-in and building a narrative everyone understands and sees as relevant is crucial when trying to clean up — especially in large companies, where you’ll always find someone who thinks “we need everything,” and the relative importance of different customer segments is unclear, with no accurate portfolio overview in place.

If you’re unclear about the state of your current portfolio, how do you know where to focus next strategically?

Not knowing where the highest value lies in your portfolio and how it all maps out as a whole has another implication: If you don’t know your current status quo, it’s hard to plan ahead and it’s equally hard to get out of the delivery mode many product organizations find themselves in, where you simply build what gets requested, but can’t act as a driver of future growth.

To organize a portfolio in order to define how to handle it going forward, while not having much information to base decisions on, the high-level approach was this:

  1. Define the FOR WHOM (by building a user segmentation matrix).
  2. Establish the STATUS QUO (by auditing previous attempts to map the portfolio).
  3. Agree on the HOW (by defining evaluation criteria and prioritization).
  4. Ensure BUY-IN (through deep dives with key stakeholders and experts).

Note: Every company is different, especially regarding the information that’s available. So this is not an attempt at building the next framework or providing a one-size-fits-all approach to portfolio organization. Instead, it is a proposed solution for how to approach mapping out your current portfolio to start from a cleaner slate, with your customer segments in mind. These four areas of work should be considered necessary when attempting a “Product Kondo” exercise in your own organization.

1. For Whom? Building A User Segmentation Matrix

First things first, if you’re not clear about your primary and secondary customer segments, then this is where to start. If you want teams to be able to focus, it’s crucial to define priorities. Identifying key external user groups/segments, understanding their differences, and assessing their importance to the organization’s overall business success is a great start. Building a user segmentation matrix is a great foundation for prioritizing efforts and aligning services/products around user needs.

Apart from establishing the key jobs-to-be-done, goals, and pain points for each customer segment, it fosters transparency around the following factors:

  • Thinking from a customer perspective.
  • Considering measurable data like user numbers, size of accounts, and revenue.
  • The fact that some user groups are more valuable to an organization than others, hence should be ranked higher in a prioritization effort.

How to define user segments, with different levels of relevance to the organization and its future strategy, is described in more detail here. It was the initial mental model shared across teams prior to starting this portfolio simplification effort.
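
For illustration, here is a minimal sketch of the kind of fields one row of such a matrix can carry. The shape below is my own invention, not a prescribed schema:

```typescript
// Illustrative shape for one row of a user segmentation matrix.
// Field names are invented for this sketch, not a prescribed schema.
interface UserSegment {
  name: string;                      // e.g., "Enterprise risk analysts"
  priority: "primary" | "secondary"; // relative value to the organization
  jobsToBeDone: string[];            // key tasks the segment needs done
  painPoints: string[];
  annualRevenue: number;             // measurable data used for ranking
  activeUsers: number;
}
```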

Next up: Understanding the current status quo and building a “source of truth” of everything considered under the remit of the product organization. Because you need a clear reference point to get started.

2. Status Quo: Auditing And Defining What To Measure

To determine the best approach and size the task ahead, understanding what had been done before was crucial, so as not to reinvent the wheel. It was clear that the organization had a sprawling product catalogue that contained a varied mix of different items, lacking clear definitions and categorization.

The initial audit was about updating the product catalog that had been assembled three years earlier and adding information that would be relevant for assessing relative value. As revenue, user numbers, or development effort had never been tracked, this is where we gathered additional insights on each item from the product owners (POs) responsible.

The assessment criteria were partly taken from the previous effort (criteria 1-9), and further criteria were added to obtain a more holistic picture (criteria 10-15). See the table below.

3. How? Doing The Audit

In order to be transparent about decision-making, it was important to agree on the evaluation criteria and scoring with key stakeholders upfront and ensure every contributor understood that a lack of data would lead to low scores. To that end, we asked all 36 product owners (POs) to submit data for each product under their remit. As the organization had not previously tracked this information, the initial responses were often quite vague, and many cells were left blank.

To increase data quality and make data-based decisions, 1:1 interviews with POs allowed us to answer questions and build out “best guess” assumptions together in cases of missing data.

Note: While not technically perfect, we decided that moving forward with assumptions grounded in subject matter expertise, rather than completely missing data, would be preferable.

Lastly, some inputs like “automation potential” were hard to assess for less technical POs. Our approach here followed the product mindset that while it is important to make data-informed decisions, “done is better than perfect.” So once we had enough confidence in the picture that emerged, we proceeded with scoring in the interest of time.

As a side note regarding data quality: manually cleaning inputs throughout (e.g., removing duplicates) and following up until clear inputs were provided both helped increase input quality. In addition, predefined ranges led to higher data quality than inputs requiring hard-to-quantify data, such as expected impact.

3.1. Scoring

Defining the scoring methodology upfront and getting stakeholders to align transparently on the relevance of different criteria was crucial for this work. Since simplifying (in other words, reducing) the portfolio has an immediate impact on various teams, communicating openly about what is being done, how, and why is important, so everyone understands the longer-term goal: to reduce cost and maintenance and to prepare for future growth.

The image below illustrates the three stages that led to the prioritized list and score for each item.
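
For illustration, the mechanics of such a weighted score are simple. The sketch below assumes criteria scored on a shared scale and weights agreed upfront with stakeholders; the criteria names, weights, and items are invented, not the ones from our audit:

```typescript
// Minimal sketch of weighted portfolio scoring. Criteria names, weights,
// and the 0-5 scale are illustrative, not the audit's actual values.
type Scores = Record<string, number>; // each criterion scored 0-5

const weights: Scores = {
  revenue: 0.35,
  userValue: 0.3,
  strategicFit: 0.2,
  maintenanceEase: 0.15, // 5 = cheap to maintain
};

function weightedScore(item: Scores): number {
  return Object.entries(weights).reduce(
    (total, [criterion, weight]) => total + (item[criterion] ?? 0) * weight,
    0, // missing data simply scores 0, mirroring "no data means low scores"
  );
}

// Rank the portfolio from highest to lowest weighted value.
const portfolio = [
  { name: "Risk score API", scores: { revenue: 5, userValue: 4, strategicFit: 5, maintenanceEase: 3 } },
  { name: "Legacy PDF report", scores: { revenue: 1, userValue: 2, strategicFit: 1, maintenanceEase: 2 } },
];
const ranked = portfolio.sort(
  (a, b) => weightedScore(b.scores) - weightedScore(a.scores),
);
```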

The outcome of this stage now ranked the business and user value for each data product, and the initial expectation was that this was the end of the portfolio cleanup. A list of all items ranked by their value to the business, so that, e.g., the bottom half could be cut and the rest migrated to the new technical platform in order of priority.

At least that was the theory, and this is where it met reality.

Dealing With Change Reality

Once the weighted list was ready and the whole portfolio was ranked, it became clear that what was considered the “Product portfolio” in fact consisted of 12 different types of items, and roughly 70% of them could not be considered actual products.

While everything inside the organization was called a PRODUCT, it became clear that the types of items referred to as “products” were in fact a mixed bag of trackers, tables, graphs, extracts, data sets, dashboards, reports, tools, scoring, and so on. And many low-ranking internal-facing tools enabled highly relevant customer-facing products.

The list was essentially comparing “apples to oranges,” and that meant that simply cutting the bottom half of lower-scoring items would lead to the whole “house of cards” tumbling down, especially as a lot of items had dependencies on each other.

What To Do?

First and foremost, we worked with leadership to explain the issue of missing categorization in the portfolio and the risks that cutting the lower-scoring half of the list would entail, especially due to the time pressures of the wider ongoing transformation effort.

Next, we proposed to work with key product owners and leaders to help categorize the portfolio correctly, in order to determine how best to handle each item going forward.

We used the following five buckets to enable sorting, with the intention of keeping the “other” category as small as possible.

Aside from simplifying the terminology used, this categorization meant that each category could be handled differently in terms of future work.

For example, all raw data items would be automated, while the process around “low effort” data items didn’t have to be changed going forward, once it was clear how low the manual effort actually was. Notably, the categorization included a “Sunset/Stop” category to allow stakeholders to already move items there during the deep dives of their own volition, rather than through top-down decision making.

4. Getting Buy-in: Building Product Trees

To get buy-in and allow for active contributions from subject matter experts, we planned workshops per customer segment (as defined by the user segmentation matrix — the initial starting point). Aside from organizing the portfolio items, these workshops allowed key people to be actively involved and thereby act as advocates for the future success of this work.

Using Miro boards to share all audit findings, goals, and the purpose of the clean-up, we conducted seven workshops overall. With 4–6 participants, we spent 3 hours categorizing all items per customer segment. In order to avoid groupthink, all participants were asked to cluster their part of the portfolio as part of the preparation.

The “product tree” concept, developed as an innovation game called “prune the product tree” by Luke Hohmann to organize features around customer needs, helped create a shared mental model among participants. In contrast to Hohmann, we applied the product tree concept here to organize the current portfolio logically and actively reduce it, rather than imagine new products.

In this context, the roots of the tree signified raw data, the tree trunk equated to modeled or derived data, with the crown of the tree signifying data products, and the outer branches were left for “other” items — to capture what could not be easily grouped but had to be included.

Grouping items in this way served a second purpose: to guide how to handle them in the future transformation effort. The plan was to automate raw data first, based on priority. While modeled or derived data would have to be checked for complexity to determine future handling. The actual data products identified would be crucial for the company’s future strategy and were to be reimagined with a product mindset going forward.

The tree metaphor worked well here, despite being used in a different way from its original context, as it provided a mental model for categorization. By clustering items, it was possible to better determine their value for each customer segment in the portfolio. According to the feedback gathered after each workshop, the joint mapping and visualization helped teams trust the process and feel actively involved.

Findings

Analyzing the findings from the workshops revealed the complexity of this effort, with many different factors playing into the prioritization. To visualize this complexity, we used the following approach:

  • Mapping out the product tree by swimlanes (as introduced in the workshops).
  • Layering in usage across multiple segments (through color-coding).
  • Adding the level of dependencies (through the type of frame around each item).
  • Adding the quantitative assessment and ranking through numbering and color-coding.

For each workshop, we cleaned up the boards, making sure to include crucial comments, especially those about future treatment, such as when a legal obligation to deliver would end.

Using swimlanes helped participants organize data items, while the tree metaphor clarified the interconnectedness and dependencies between items. Especially in the context of data products, this makes a lot of sense, e.g., with raw data being at the root of all other possible versions of “products” derived from them, whether these might be scores, modelled data, automated reports, or more advanced products.

Doing this Product Kondo exercise also helped the teams and all stakeholders gain a shared understanding of how the portfolio was structured for each customer segment. The swimlane visualization, with color-coding and distinct frame styles, provided a way to illustrate the complex reality that the initial ranked-list format couldn’t capture.

Only once this portfolio mapping was in place, and once quantitative as well as qualitative insights were combined, was it possible to make good decisions about how to handle each item going forward.

For example, all items in the “raw data” category would be automated as part of the wider transformation effort, while all items in the “sunset” category would definitely not be considered for migrating over to the new tech platform. Moreover, the items grouped under “low effort” would continue to be handled manually, while all items grouped under “derived & modelled” would have to be assessed further by a team of tech leads to determine whether or not they might be automated in the future. The items most relevant for the future business strategy of this organization were those grouped under “data products”, i.e., those products that would have to be re-imagined with clear customer needs in mind, based on the user segmentation matrix.

Learnings

In total, we reduced the portfolio by roughly 40%, from 198 items initially to 118 post clean-up. However, what matters here is not simply the reduction but the categorization, i.e., separating and organizing the portfolio into different swimlanes and introducing the product tree metaphor. The product tree visualization helped all stakeholders understand the interconnectedness of the portfolio, where the roots signify the core product and the branches different, more advanced products or features built on top of that core.

Similarly, the categorization into swimlanes helped to organize and cluster similar items, getting away from comparing apples and oranges in the initial big portfolio audit table. It illustrated very clearly that not all items are alike and can’t be judged and rated in the same way.

It is worth mentioning that there is no one best way to label your swimlanes, but a good starting point is to think of naming different clusters, e.g., from basic to most complex, and to always include a “sunset/stop” cluster and potentially one that covers “redesign/tech upgrade” items. Having these two buckets allows contributors to actively shape the decision-making around the quick-win items, usually the most obviously outdated or clunky parts of the portfolio.

Whether or not you categorize your products in order to determine how to handle them in an organizational transformation, e.g., to assess automation potential, will largely depend on why and when you’re cleaning up your product portfolio. Even outside of a transformation effort, clustering your portfolio into different categories, understanding interconnectedness, and whether or not each customer segment has a well-rounded product tree, with solid roots and future-looking branches, is a useful exercise in sense-making and keeping your organization lean.

Shared Terminology Matters

In all this, our biggest learning was that

Terminology matters because simply referring to things as “products” doesn’t make them so. Comparing like for like is a key factor when assessing a product portfolio.

Correct categorization was the biggest challenge that had to be dealt with first, to enable the organization to iterate and focus on where to play and re-imagine products to match the future business strategy.

When Theory Meets Reality

This portfolio clean-up had to pivot and expand to include a mapping exercise because we hadn’t factored in the unclear terminology used across the organization, and that, instead of simply gathering and ranking, the biggest task was to correctly categorize and structure. And this is likely to be different from organization to organization. So I would always recommend checking which categories of items you’re comparing in your portfolio. If you’re not entirely sure, you should always include a clustering or mapping exercise right from the start.

Product Kondo: The Groundwork For Transformation

If you’re struggling with a large legacy portfolio and no longer confident that everything in it serves a purpose and brings joy to users and the business, it’s time to clean up.

It’s often necessary to focus on the next shiny thing, but if you don’t balance that with cleaning up your existing portfolio, your organization will eventually become slow. Overgrown product portfolios can’t be sustained forever.

Particularly in organizations bound by various contractual obligations, this is the groundwork that enables product teams to iterate.

Moreover, doing this clean-up and clearing out effort across teams is a highly transparent way to include teams in change. And it is a useful way for getting teams to contribute and actively shape a transformation effort. Business decisions have to be taken, but taking them with transparency and in an evidence-guided way ensures that you are bringing people along.

Last but not least — if you don’t have the capacity to do the full portfolio clean-up (which took us about 4 months, with a core team of roughly 4 people) — start smaller. Start by including these considerations in your day-to-day work, for example, by checking whether existing products or features should be stopped or sunset every time you launch something new. Or start by mapping out the different categories of items in your portfolio, with swimlanes and the product tree metaphor in mind. What is core, and what is the future state of play?

Upside: Once you’ve got that big picture overview and worked out what to sunset or where to slim down, you have more capacity to focus on current and future priorities strategically.

Reality check: Of course, the work doesn’t stop there. The next step is to align it all back to your user segments and check how your portfolio serves each of these, particularly the primary segments.

]]>
hello@smashingmagazine.com (Talke Hoppmann-Walton)
<![CDATA[Boosting Up Your Creativity Without Endless Reference Scrolling]]> https://smashingmagazine.com/2025/04/neuroscience-designers-boost-creativity-endless-reference-scrolling/ https://smashingmagazine.com/2025/04/neuroscience-designers-boost-creativity-endless-reference-scrolling/ Thu, 24 Apr 2025 10:00:00 GMT A designer’s work largely consists of inventing new things. That requires creativity, which is generally believed to depend on inspiration, making it unpredictable and difficult to control. Many designers, as well as those who would like to try their hand at design, wonder: what do you do if inspiration doesn’t come at the right moment?

There are many practical recommendations from experienced designers and design managers on how to work without inspiration. These mainly rely on discipline, planning, and working with references. I would like to suggest an alternative approach: how to boost creativity and “lure” inspiration with the help of neuroscience.

I’m Marina, and I have been deeply interested in neuroscience for a long time. I have tried many methods myself and observed the experience of my colleagues. In this article, I want to share the approaches that proved most effective in luring creativity, the ones I eventually built into my routine on an ongoing basis.

How Our Brain Works

The brain has been and remains an important topic that is underexplored, especially in the context of design and design thinking. No other profession represents the blend of creativity and logic quite like design, in my opinion. This raises a fair question: which part of the brain is more important, the left or the right? To start with, let’s briefly refresh which part of the brain is responsible for what:

Left Hemisphere:

  • Language and Speech: Language-related activities like speaking, writing, and comprehension.
  • Analytical Thinking: Mathematical operations, sequential processing, and problem-solving.
  • Linear Thinking: Step-by-step processing of information.

Right Hemisphere:

  • Creativity and Artistic Abilities: Imagination, creative thinking, music, visual arts, etc.
  • Emotional Processing: Emotion recognition, facial expressions, tone of voice, gestures.
  • Holistic Thinking: Looking at the big picture rather than focusing on details.

While each hemisphere is responsible for certain functions, the two work together to process information. Some activities (analyzing data, solving equations, working with precise calculations) rely more on the left hemisphere, while others (composing music, acting) rely more on the right.

However, when it comes to the design process and design thinking, it’s essential to stimulate both hemispheres and not limit the role of a product designer to being either predominantly left- or right-brained.

Interhemispheric Interaction In Product Design: Why Are Both Equally Important?

In product design, the need for well-established interhemispheric interaction is especially noticeable since this work requires a balance between logic and creativity. The left hemisphere’s logical functions help designers break down complex problems, analyze user needs, and organize structured workflows, ensuring the product’s functionality and usability.

For example, logical processes are crucial when creating wireframes and user flows and when adhering to technical constraints. On the other hand, the right hemisphere’s creative and spatial abilities play a critical role in developing visually appealing designs and innovative user experiences. It’s extremely important for a designer to think outside the box and solve user problems without neglecting a balanced and attractive visual layer at the same time.

A harmonious interaction between the two hemispheres allows product designers to seamlessly integrate both practical functionality and creative innovation. This balance results in products that not only meet technical and user requirements but also deliver an enjoyable, intuitive, and visually captivating user experience.

The Relevance Of This Subject

The idea that two parts of the brain are interconnected and complement each other during creative tasks isn’t new, nor is it my invention. One of the most influential works for product designers is Experiences in Visual Thinking by Robert H. McKim, an Emeritus Professor of Mechanical Engineering. The value of this book lies in the author’s attempt to explain visual thinking through the lenses of psychology, neurology, semantics, art, and perception. This work was later included in Stanford University’s list of recommended readings for engineering and art design students, further highlighting its significance beyond the field of design.

In the context of the brain’s left and right hemispheres, the author explains and demonstrates through a range of experiments that, to achieve productive thinking — the kind that leads to creative actions — we need to achieve an “internal transfer” between the so-called rational and intuitive halves of the brain. In our thinking process, to achieve creativity, we need to build bridges to “integrate the artist and scientist within each one of us.”

He offers a series of exercises (“3-1/Food for Thought,” “3-2/Dominant Eye,” “3-3/Internal Transfer”) that demonstrate that both brain hemispheres complement each other in cognition and creativity, and he suggests practicing them to achieve the so-called “internal transfer”.

One of the simplest exercises offered by McKim is the “3-2/Dominant Eye”. Look at the picture and try to describe what you see:

If you see a duck first (most people do), your left hemisphere is more active, likely because reading the preceding text has already engaged it. If you see a rabbit, often only after it’s mentioned, your right hemisphere is more active. This exercise shows that we can consciously choose to shift between hemispheres, training ourselves to engage either side more effectively.

In his work, Professor McKim not only demonstrates how to activate the left or right hemisphere but also explains the complementary modes of thought, which consist of two stages. The first stage involves generating an array of ideas, often through a visual thinking process, while the second stage focuses on selecting and refining these ideas (or objects) for further development. Creativity is born during the first stage, but to be executed tangibly, it requires the second stage. Even mathematicians do not think only in mathematical symbols; many, particularly creative ones, use vague images and visuals as part of their thought processes.

According to McKim, creativity requires a balanced development of both hemispheres, as creative thinkers are ambidextrous and capable of transferring ideas into actionable steps. Another important aspect of visual thinking is the right environment, which leads to creativity. McKim describes it as “relaxed attention” — a mental state where ideas emerge spontaneously. Relaxed attention is often achieved through side activities like meditation, taking breaks, physical relaxation, and engaging in non-linear thinking, such as doodling or daydreaming.

I will further share my perspective on enhancing creativity through side activities and present my top three mental and physical occupations. However, it’s important to understand the complementary nature of our brain and how visual thinking often stems from diverse activities and practices.

What Helps Creativity

While it is clear that creativity is driven by both the left and right hemispheres, an important question remains: how can we boost creativity while keeping the process enjoyable? It may not be obvious, but non-design-related activities can, in fact, be an opportunity to enhance creativity.

Physical Activity

The interconnection between our body, mind, and thinking process might be key to awakening creativity. Motor skills are controlled by both hemispheres, with the right hemisphere controlling the left side of the body and the left hemisphere controlling the right side. But it also works in the opposite direction — movements trigger active brain activity.

Sports that combine strategic thinking with active movement may work best for turning up creativity.

Understanding the intricacies of the brain highlights the importance of integrating all parts of the brain. In order to learn, you must first have a sensory experience, then reflect and make connections. Finally, you must take action based on the experience. The knowledge that your first movements, even inside the womb, help build your brain underscores the fact that you actually move to learn. In other words, movement is essential to learning. (Source: Anne Green Gilbert. Brain-Compatible Dance Education, 2019)

Here are the top activities that positively impact creativity, and I will explain why they have this effect.

Tennis

The basis of a good game is a well-thought-out and trained strategy. Tennis requires quick analysis of the situation, prompt decisions, and maximum involvement. No wonder this sport is called “chess in motion”: it develops memory, concentration, and strategic thinking. At the same time, working in a group and communicating during workouts help reduce stress levels and improve mood.

Table tennis also develops concentration. A successful game rests on memorizing combinations, developing motor skills and visual and motor memory, and reading the opponent’s movements as well as the ball’s speed, flight angle, and spin. It is suitable for those who don’t have the opportunity to play lawn tennis.

I asked several designers if they do any of these things in their free time and how they think it affects their productivity and professional skills. Here is what they’ve shared:

“I started playing tennis a couple of years ago. I work out once or twice a week individually with a coach or in a group. This is a sport that requires high concentration during the game. It seems to me that this skill helped me in my work as well; before that, I was often distracted, and it was difficult for me to do the same task for a long time.

At the same time, due to the fact that I have to fully concentrate during the game, I manage to switch from everyday problems and unload my brain. I prefer to play in the morning or afternoon and take a break from work. Therefore, I return to work more energetically and can take a fresh look at my tasks.”

— Ilia Kanazin, Product Designer with 7+ years of experience working in SaaS

Dance

Dance challenges the brain by requiring the integration of movement, rhythm, coordination, and memory, which promotes neuroplasticity, or the brain’s ability to form new neural connections. The more varied the movement patterns and rhythmic complexities, the more the brain is stimulated to adapt and reorganize. Neuroplasticity has a positive effect on memory capacity, learning abilities, and problem-solving skills, which are good for the design process.

At the same time, cognitive flexibility supports the design process, because you constantly need to adjust your decisions as new data comes in from user testing and stakeholder feedback. Dancers often have to improvise or adapt to changes in rhythm and conditions, and they constantly learn new movements and combinations. Such experience in choreography and expression develops connections between the hemispheres, which strengthens a person’s ability to think creatively in general.

Balance Exercises

In my opinion, the balance board is one of the most convenient and affordable home simulators. With its help, you can do a short workout at any time to take a break from long work and return to work with a fresh look.

Balance board exercises can be quite diverse. The board can be added to your usual routine and combined with squats, light upper-body weights, or a shoulder and neck warm-up, all of which will increase cognitive activity as a result.

You can also just stand on the balance board while listening to work calls that don’t require active participation, watching TV shows, or chatting on the phone with friends.

Case Study

“By training your body to move more creatively, you train your mind to think more creatively.”

— Jennifer Heisz. Move The Body, Heal The Mind, 2022

While it may be challenging to find documented real-life cases that provide clear examples of famous designs fueled by sport and physical activity, there are historically backed examples and research studies demonstrating that physical activity positively influences creativity.

Take, for example, Charles Darwin’s “Thinking Path”. The scientist developed his most famous works, “On the Origin of Species” and “The Descent of Man,” at Down House, where he took daily walks. This activity is known as Darwin’s Thinking Path, and it is well-documented how his walking routine influenced the way he contemplated his scientific theories.

With the emergence of neuroscience as a science in the mid-20th century, we have gained a new perspective on what drives creative thinking, which is ultimately beneficial for design. Neuroscience provides insights into how various activities influence the brain, which, as a result, leads to changes in other fields.

For example, tennis is recognized for its benefits to brain health. It enhances the ability to process sensory information rapidly, improving overall cognitive processing speed and reaction time. In addition, the game demands strategic thinking, engaging the prefrontal cortex — the brain’s hub for decision-making and strategic planning. This single activity alone demonstrates the far-reaching cognitive benefits of physical exercise.

Nowadays, researchers in neuroscience are united in their opinion on what unleashes creativity — physical activity unlocks it. There are even experiments that measure it: Marily Oppezzo, a behavioral and learning scientist at Stanford, studied how walking affects creativity. Her experiment compared walking on a treadmill, walking outdoors, sitting indoors and outdoors, and being pushed in a wheelchair. Surprisingly, even treadmill walking in a dull room boosted creativity by 60% compared to sitting.

“It’s not specific activities but individuals’ experiences of them that determine their effect.”

— Amir-Homayoun Javadi, Associate Professor at the University of Kent

Another study goes further, explaining that not all sports impact creativity to the same extent.

“It may surprise you — it wasn’t artistic sports but net and combat sports. Why? Because cultivating a creative mind depends on how we train. In artistic sports (figure skating, gymnastics, synchronized swimming), athletes memorize a series of predefined steps. Although creating these routines may involve creativity, the training itself is structured, predictable, and planned.”

Training that is mostly predictable makes our brain less mentally flexible, in contrast to net and combat sports (such as badminton, tennis, volleyball, and fencing), which make us learn to act instinctively. As we train physically, our brain also adapts, becoming more flexible — particularly in terms of cognitive flexibility. This, in turn, enhances our creativity. (Source: Jennifer Heisz. Move The Body, Heal The Mind, 2022)

Mental Activity

However, physical activity is not the only way to achieve a ‘relaxed attention’ state and learn to balance the left and right hemispheres of the brain. Mental activities also trigger the same process. I have selected the top three activities that will enhance your creativity at work.

Learning Foreign Languages

As we discussed above, the design process engages both brain hemispheres. Learning a foreign language triggers similar processes in your brain, so it trains the same faculties.

Language processing primarily occurs in the left hemisphere, but emotional intonation and context (e.g., sarcasm, tone) are understood by the right hemisphere. When someone says “Oh, great!” after receiving bad news, the left hemisphere processes the words and grammar, understanding the literal meaning, while the right hemisphere interprets the tone and context, allowing the person to get the real point of the message.

Learning a second language exposes people to new methods of expressing the same thoughts, which promotes creativity. Finding synonyms, understanding idiomatic terms, and gaining the ability to flip between languages all foster divergent thinking, which is the ability to generate several solutions to a given problem.

In parallel, learning foreign languages helps to develop storytelling and self-presentation skills, which are also very useful in a designer’s work.

“I’ve lived in several countries for a long time, so in addition to my native language, I speak three other foreign languages as well. It helps me to build communication with different people, which is very important in the designer’s work.

I think because I know how to say the same thing in different languages, I also use this approach in design. To solve the same problem, I can offer several solutions and choose the most appropriate one together with the stakeholders.

Now I am a Senior Growth Designer, and this job requires constantly looking for non-standard solutions and implementing them quickly. I think the use of different languages contributes to this from the point of view of brain function.

Speaking multiple languages also comes in really handy when you are dealing with personas from different nationalities. For example, Western SaaS products use a more minimalist approach, whereas for SaaS from Asia or China, more information is better than less.”

— Maxence Akodjenou, Senior Growth Designer (working on complex B2B apps)

Board Games

Board games develop strategic thinking, requiring players to anticipate opponents’ moves, solve problems in real time, and sometimes think outside the box. Traditional games like chess encourage critical thinking, as players must analyze the current situation, weigh potential outcomes, and decide on the best course of action. This improves the brain’s executive functions, including decision-making, planning, and strategic thinking.

Some tabletop games are based on role-playing or storytelling, such as Dungeons & Dragons or Dixit. These games encourage players to invent stories, create characters, and navigate imaginative scenarios, fostering creative thinking and imagination.

Board games also train communication skills, which product designers have to use a lot in their jobs. Playing table games, especially in groups, encourages the participants to convince their teammates of their decisions and carefully listen to others. The games that involve cooperation help the players develop their collaboration skills, such as finding compromises, negotiating, and making concessions.

Music Lessons

Playing a musical instrument has been a widely researched topic in neuroscience in recent decades. Research shows that music lessons improve cognitive abilities by strengthening the neural connections between the left and right hemispheres of the brain, with positive effects on memory, learning ability, and non-verbal thinking, as a result of which the brain as a whole works much more productively in other areas of life.

The brain learns to hear and interpret sounds, which happens only while playing an instrument and is impossible while simply listening to music. As a result, a person is better able to process complex information. Playing musical instruments involves the relationship between the motor, sensory, auditory, visual, and emotional components of the central and peripheral nervous systems. Such brain training includes artistic and aesthetic aspects of learning, which is a unique feature of playing a musical instrument. The left hemisphere’s linguistic and mathematical activity gets used to working in coordination with the creative functions of the right hemisphere.

An interesting fact: Albert Einstein often played the violin during moments of deep thinking, claiming that music was an extension of his thought process and helped him solve particularly difficult problems.

A Lesson From Paul Klee

It is worth noting that the effect works both ways: music lessons enhance your creativity in design, and design experience pushes your success in music.

In the book Enchanted Neurons, the French composer and conductor Pierre Boulez talks about the lessons that Paul Klee (a Swiss-born German artist whose highly individual style was influenced by movements that included expressionism, cubism, and surrealism) taught at the Bauhaus (the German art school that became famous for its approach to design, unifying individual artistic vision with the principles of mass production and an emphasis on function).

“Theoretical reflection is particularly interesting to me when it is applied to something that is completely foreign to music because it then makes it possible to discover solutions that you would never have found if you had remained bound by the limits of your art.

I’ll give you a personal example: the discovery not only of Klee’s painting but also the lessons that he gave at the Bauhaus, which we spoke about earlier, was extremely important to me, especially from the point of view of composition. I understood how using very simple elements like two motifs made it possible to think about the way in which these two motifs could interact. I remember, in particular, an exercise given by Klee to his students: a straight line and a circle. That’s it. The exercise consisted of trying to invent something, a meeting of this line and this circle.”

— Pierre Boulez, Jean-Pierre Changeux, Philippe Manoury. Enchanted Neurons, 2020

This lesson shared by Pierre Boulez demonstrates how interdisciplinary inspiration — such as the course of visual artist Paul Klee — shaped his creative process and how concepts from outside music can lead to new solutions.

In my opinion, the reverse can also be true: music and its principles can inspire creativity in other disciplines.

“I started composing music even earlier than I started designing. Music has composition and rhythm, like design. And development in one area also entails a boost in another. It works both ways; success in music develops my design skills. Design helps me make more complex music.

In addition, there is also a practical benefit; I make my own covers for my tracks and use my tracks for my showcases. Plus, I listen to a lot of different music, and it develops my world perception, fills me with energy, and creates the right mood for working on projects.”

— Sergei Diuzhev, Design Leader at MuseScore

Tips For Incorporating A Routine That Sustains A Designer’s Creativity

Whenever you feel stuck in your work or overly critical of your designs or prototypes, think about the strategies above that might help your creative process.

  1. Find something you genuinely enjoy or have always wanted to try and implement in small steps.
    It doesn’t need to be a completely new hobby. For example, you could dance to your favorite music at home, stand on a balance board between work calls, or try something new once a month or quarter with friends, like skating, rock climbing, or other activities. This year, I plan to go skiing for the first time.
  2. Constantly explore new things, even small ones.
    Take different routes to work, cook new dishes, or listen to unfamiliar music. Even if you don’t end up loving it, it’s still valuable because your brain is enriched by the experience.
  3. Meet new people.
    As I mentioned earlier, communication skills are essential for designers, but beyond that, new people can inspire you in unexpected ways. They might introduce you to a new sport, hobby, or activity that you could even try together.

I shared examples of designers who have rebuilt their creativity through activities like tennis, music, and languages, and I feel the impact in my own daily routine when I try new things and hobbies. Whatever approach you decide to follow, I guarantee your brain will feel the difference and reward you with fresh ideas and inspiration.

Conclusion

Creativity may be developed in a variety of ways, including browsing reference sites and putting in a lot of practice — both of which are important. Outside these classic ways, you can engage in activities that not only promote creativity but also improve your mental and physical health.

There are many possibilities for increasing brain activity, and you can develop your own entertaining and useful ways of spending time. Finally, trying something new will generate new thoughts and break the monotony.

When you experience virtual reality, read poetry or fiction, see a film, listen to a piece of music, or move your body to dance, to name a few of the many arts, you are biologically changed. There is a neurochemical exchange that can lead to what Aristotle called catharsis, or a release of emotion that leaves you feeling more connected to yourself and others afterward. (Source: Susan Magsamen, Ivy Ross. Your Brain on Art, 2023)

Further Reading on Smashing Magazine

]]>
hello@smashingmagazine.com (Marina Chernyshova)
<![CDATA[Building An Offline-Friendly Image Upload System]]> https://smashingmagazine.com/2025/04/building-offline-friendly-image-upload-system/ https://smashingmagazine.com/2025/04/building-offline-friendly-image-upload-system/ Wed, 23 Apr 2025 10:00:00 GMT So, you’re filling out an online form, and it asks you to upload a file. You click the input, select a file from your desktop, and are good to go. But something happens. The network drops, the file disappears, and you’re stuck having to re-upload the file. Poor network connectivity can lead you to spend an unreasonable amount of time trying to upload files successfully.

What ruins the user experience is having to constantly check network stability and retry the upload several times. While we may not be able to do much about network connectivity, as developers, we can always do something to ease the pain that comes with this problem.

One of the ways we can solve this problem is by tweaking image upload systems in a way that enables users to upload images offline — eliminating the need for a reliable network connection, and then having the system retry the upload process when the network becomes stable, without the user intervening.

This article is going to focus on explaining how to build an offline-friendly image upload system using PWA (progressive web application) technologies such as IndexedDB, service workers, and the Background Sync API. We will also briefly cover tips for improving the user experience for this system.

Planning The Offline Image Upload System

Here’s a flow chart for an offline-friendly image upload system.

As shown in the flow chart, the process unfolds as follows:

  1. The user selects an image.
    The process begins by letting the user select their image.
  2. The image is stored locally in IndexedDB.
    Next, the system checks for network connectivity. If network connectivity is available, the system uploads the image directly, avoiding unnecessary local storage usage. However, if the network is not available, the image will be stored in IndexedDB.
  3. The service worker detects when the network is restored.
    With the image stored in IndexedDB, the system waits to detect when the network connection is restored to continue with the next step.
  4. The background sync processes pending uploads.
    The moment the connection is restored, the system will try to upload the image again.
  5. The file is successfully uploaded.
    The moment the image is uploaded, the system will remove the local copy stored in IndexedDB.
Implementing The System

The first step in the system implementation is allowing the user to select their images. There are different ways you can achieve this:

  • A simple <input type="file"> element;
  • A drag-and-drop interface.

I would advise that you use both. Some users prefer to use the drag-and-drop interface, while others think the only way to upload images is through the <input type="file"> element. Having both options will help improve the user experience. You can also consider allowing users to paste images directly in the browser using the Clipboard API.
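
Here is a minimal sketch of how all three options could be wired together. The element IDs are illustrative, and uploadImage() is the function we will build in a moment:

const fileInput = document.querySelector('#file-input');
const dropZone = document.querySelector('#drop-zone');

// Option 1: the classic file input.
fileInput.addEventListener('change', () => {
  for (const file of fileInput.files) uploadImage(file);
});

// Option 2: drag and drop.
dropZone.addEventListener('dragover', (event) => event.preventDefault()); // allow dropping
dropZone.addEventListener('drop', (event) => {
  event.preventDefault();
  for (const file of event.dataTransfer.files) uploadImage(file);
});

// Option 3: images pasted from the clipboard.
document.addEventListener('paste', (event) => {
  for (const item of event.clipboardData.items) {
    if (item.type.startsWith('image/')) uploadImage(item.getAsFile());
  }
});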

Registering The Service Worker

At the heart of this solution is the service worker. Our service worker is going to be responsible for retrieving the image from the IndexedDB store, uploading it when the internet connection is restored, and clearing the IndexedDB store when the image has been uploaded.

To use a service worker, you first have to register one:

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/service-worker.js')
    .then(reg => console.log('Service Worker registered', reg))
    .catch(err => console.error('Service Worker registration failed', err));
}

Checking For Network Connectivity

Remember, the problem we are trying to solve is caused by unreliable network connectivity. If this problem does not exist, there is no point in trying to solve anything. Therefore, once the image is selected, we need to check if the user has a reliable internet connection before registering a sync event and storing the image in IndexedDB.

async function uploadImage(file) {
  if (navigator.onLine) {
    // Upload the image to the server directly.
  } else {
    // Register the sync event and store the image in IndexedDB.
    // (Both functions are defined later in this article.)
    await registerSyncEvent();
    await storeImages(file);
  }
}

Note: I’m only using the navigator.onLine property here to demonstrate how the system would work. The navigator.onLine property is unreliable, and I would suggest you come up with a custom solution to check whether the user is connected to the internet or not. One way you can do this is by sending a ping request to a server endpoint you’ve created.
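
For example, a lightweight ping could look like the sketch below. The /ping path is an assumption; use whatever endpoint your server actually exposes:

async function isOnline() {
  try {
    // A HEAD request keeps the payload minimal; 'no-store' avoids a cached answer.
    const response = await fetch('/ping', { method: 'HEAD', cache: 'no-store' });
    return response.ok;
  } catch {
    return false; // The request itself failed, so treat the user as offline.
  }
}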

Registering The Sync Event

Once the network test fails, the next step is to register a sync event. The sync event needs to be registered at the point where the system fails to upload the image due to a poor internet connection.

async function registerSyncEvent() {
  if ('SyncManager' in window) {
    const registration = await navigator.serviceWorker.ready;
    await registration.sync.register('uploadImages');
    console.log('Background Sync registered');
  }
}

After registering the sync event, you need to listen for it in the service worker.

self.addEventListener('sync', (event) => {
  if (event.tag === 'uploadImages') {
    event.waitUntil(sendImages());
  }
});

The sendImages function is going to be an asynchronous process that will retrieve the image from IndexedDB and upload it to the server. This is what it’s going to look like:

async function sendImages() {
  try {
    // Retrieve the image from IndexedDB and upload it.
    // (retrieveAndUploadImage is defined below.)
    await retrieveAndUploadImage(IMAGE_ID);
  } catch (error) {
    // Rethrow so the browser knows the sync failed and will retry it later.
    throw error;
  }
}

Opening The Database

The first thing we need to do in order to store our image locally is to open an IndexedDB store. As you can see from the code below, we are creating a global variable to store the database instance. The reason for doing this is that, subsequently, when we want to retrieve our image from IndexedDB, we wouldn’t need to write the code to open the database again.

let database; // Global variable to store the database instance

function openDatabase() {
  return new Promise((resolve, reject) => {
    if (database) return resolve(database); // Return existing database instance 

    const request = indexedDB.open("myDatabase", 1);

    request.onerror = (event) => {
      console.error("Database error:", event.target.error);
      reject(event.target.error); // Reject the promise on error
    };

    request.onupgradeneeded = (event) => {
      const db = event.target.result;
      // Create the "images" object store if it doesn't exist.
      if (!db.objectStoreNames.contains("images")) {
        db.createObjectStore("images", { keyPath: "id" });
      }
      console.log("Database setup complete.");
    };

    request.onsuccess = (event) => {
      database = event.target.result; // Store the database instance globally
      resolve(database); // Resolve the promise with the database instance
    };
  });
}

Storing The Image In IndexedDB

With the IndexedDB store open, we can now store our images.

Now, you may be wondering why an easier solution like localStorage wasn’t used for this purpose.

The reason is that IndexedDB operates asynchronously and doesn’t block the main JavaScript thread, whereas localStorage runs synchronously and can block it. On top of that, localStorage can only store strings, so the image would have to be serialized first, while IndexedDB stores Blob objects directly.

Here’s how you can store the image in IndexedDB:

async function storeImages(file) {
  // Open the IndexedDB database.
  const db = await openDatabase();
  // Create a transaction with read and write access.
  const transaction = db.transaction("images", "readwrite");
  // Access the "images" object store.
  const store = transaction.objectStore("images");
  // Define the image record to be stored.
  const imageRecord = {
    id: IMAGE_ID,   // a unique ID
    image: file     // Store the image file (Blob)
  };
  // Add the image record to the store.
  const addRequest = store.add(imageRecord);
  // Handle successful addition.
  addRequest.onsuccess = () => console.log("Image added successfully!");
  // Handle errors during insertion.
  addRequest.onerror = (e) => console.error("Error storing image:", e.target.error);
}

With the images stored and the background sync set, the system is ready to upload the image whenever the network connection is restored.

Retrieving And Uploading The Images

Once the network connection is restored, the sync event will fire, and the service worker will retrieve the image from IndexedDB and upload it.

async function retrieveAndUploadImage(IMAGE_ID) {
  try {
    const db = await openDatabase(); // Ensure the database is open
    const transaction = db.transaction("images", "readonly");
    const store = transaction.objectStore("images");
    const request = store.get(IMAGE_ID);
    request.onsuccess = function (event) {
      const image = event.target.result;
      if (image) {
        // upload Image to server here
      } else {
        console.log("No image found with ID:", IMAGE_ID);
      }
    };
    request.onerror = () => {
      console.error("Error retrieving image.");
    };
  } catch (error) {
    console.error("Failed to open database:", error);
  }
}
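
The actual upload call above is left open because it depends on your backend. As a hedged sketch, assuming a hypothetical /upload endpoint that accepts multipart form data, the onsuccess branch could hand the record to something like this (deleteDatabase is covered in the next section):

async function uploadToServer(imageRecord) {
  const formData = new FormData();
  formData.append('image', imageRecord.image); // the Blob we stored earlier

  const response = await fetch('/upload', { method: 'POST', body: formData });
  if (!response.ok) {
    // Throwing keeps the sync pending so the browser can retry later.
    throw new Error(`Upload failed with status ${response.status}`);
  }

  deleteDatabase(); // The local copy is no longer needed.
}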

Deleting The IndexedDB Database

Once the image has been uploaded, the IndexedDB store is no longer needed. Therefore, it should be deleted along with its content to free up storage.

function deleteDatabase() {
  // Check if there's an open connection to the database.
  if (database) {
    database.close(); // Close the database connection
    console.log("Database connection closed.");
  }

  // Request to delete the database named "myDatabase".
  const deleteRequest = indexedDB.deleteDatabase("myDatabase");

  // Handle successful deletion of the database.
  deleteRequest.onsuccess = function () {
    console.log("Database deleted successfully!");
  };

  // Handle errors that occur during the deletion process.
  deleteRequest.onerror = function (event) {
    console.error("Error deleting database:", event.target.error);
  };

  // Handle cases where the deletion is blocked (e.g., if there are still open connections).
  deleteRequest.onblocked = function () {
    console.warn("Database deletion blocked. Close open connections and try again.");
  };
}

With that, the entire process is complete!

Considerations And Limitations

While we’ve done a lot to help improve the experience by supporting offline uploads, the system is not without its limitations. I figured I would specifically call those out because it’s worth knowing where this solution might fall short of your needs.

  • No Reliable Internet Connectivity Detection
    JavaScript does not provide a foolproof way to detect online status. For this reason, you need to come up with a custom solution for detecting online status.
  • Chromium-Only Solution
    The Background Sync API is currently limited to Chromium-based browsers. As such, this solution is only supported by Chromium browsers. That means you will need a more robust solution if you have the majority of your users on non-Chromium browsers.
  • IndexedDB Storage Policies
    Browsers impose storage limitations and eviction policies for IndexedDB. For instance, in Safari, data stored in IndexedDB has a lifespan of seven days if the user doesn’t interact with the website. This is something you should bear in mind if you do come up with an alternative to the Background Sync API that supports Safari.
Enhancing The User Experience

Since the entire process happens in the background, we need a way to inform users when images are stored, waiting to be uploaded, or have been successfully uploaded. Implementing certain UI elements for this purpose will enhance the experience. These UI elements may include toast notifications, upload status indicators like spinners (to show active processes), progress bars (to show state progress), network status indicators, or buttons to provide retry and cancel options.
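
As a small example, a network status indicator can listen to the browser’s online and offline events. The element ID is illustrative, and, as noted earlier, these events are best treated as hints rather than guarantees:

const statusElement = document.querySelector('#network-status');

function renderNetworkStatus() {
  statusElement.textContent = navigator.onLine
    ? 'Online: images are uploaded immediately.'
    : 'Offline: images are queued and uploaded once the connection returns.';
}

window.addEventListener('online', renderNetworkStatus);
window.addEventListener('offline', renderNetworkStatus);
renderNetworkStatus(); // Show the initial state on page load.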

Wrapping Up

Poor internet connectivity can disrupt the user experience of a web application. However, by leveraging PWA technologies such as IndexedDB, service workers, and the Background Sync API, developers can help improve the reliability of web applications for their users, especially those in areas with unreliable internet connectivity.

]]>
hello@smashingmagazine.com (Amejimaobari Ollornwi)
<![CDATA[What Does It Really Mean For A Site To Be Keyboard Navigable]]> https://smashingmagazine.com/2025/04/what-mean-site-be-keyboard-navigable/ https://smashingmagazine.com/2025/04/what-mean-site-be-keyboard-navigable/ Fri, 18 Apr 2025 13:00:00 GMT Efficient navigation is vital for a functional website, but not everyone uses the internet the same way. While most visitors either scroll on mobile or click through with a mouse, many people only use their keyboards. Up to 10 million American adults have carpal tunnel syndrome, which may cause pain when holding a mouse, and vision problems can make it difficult to follow a cursor. Consequently, you should keep your site keyboard navigable to achieve universal appeal and accessibility.

Understanding Keyboard Navigation

Keyboard navigation allows users to engage with your website solely through keyboard input. That includes using shortcuts and selecting elements with the Tab and Enter keys.

There are more than 500 keyboard shortcuts among operating systems and specific apps your audience may use. Standard ones for web navigation include Ctrl + F to find words or resources, Shift + Arrow to select text, and Ctrl + Tab to move between browser tabs. While these are largely the responsibility of the software companies behind the specific browser or OS, you should still consider them.

Single-button navigation is another vital piece of keyboard navigability. Users may move between clickable items with Tab (and back with Shift + Tab), use the Arrow keys to scroll, press Enter or Space to “click” a link, and exit pop-ups with Esc.

The Washington Post homepage goes further. Pressing Tab highlights clickable elements as it should, but the very first press brings up a link to the site’s accessibility statement. Users can navigate past this, but including it shows that the design treats keyboard navigability as a matter of accessibility.

You should understand how people may use these controls so you can build a site that facilitates them. These navigation options are generally standard, so any deviation or lack of functionality will stand out. Ensuring keyboard navigability, especially in terms of enabling these specific shortcuts and controls, will help you meet such expectations and avoid turning users away.

Why Keyboard Navigation Matters In Web Design

Keyboard navigability is crucial for a few reasons. Most notably, it makes your site more accessible. In the U.S. alone, over one in four people have a disability, and many such conditions affect technology use. For instance, motor impairments make it challenging for someone to use a standard mouse, and users with vision problems typically require keyboard and screen reader use.

Beyond accounting for various usage needs, enabling a wider range of control methods makes a site convenient. Using a keyboard rather than a mouse is faster when it works as it should and may feel more comfortable. Considering how workers spend nearly a third of their workweek looking for information, any obstacles to efficiency can be highly disruptive.

Falling short in these areas may lead to legal complications. Regulations like the Americans with Disabilities Act necessitate tech accessibility. While the ADA has no binding rules for what constitutes an accessible website, it specifically mentions keyboard navigation in its nonbinding guidance. Failing to support such functionality does not necessarily mean you’ll face legal penalties, but courts can use these standards to inform their decision on whether your site is reasonably accessible.

In 2023, Kitchenaid faced a class-action lawsuit for failing to meet such standards. Plaintiffs alleged that the company’s site didn’t support alt text or keyboard navigation, making it inaccessible to users with visual impairments. While the case ultimately settled out of court, it’s a reminder of the potential legal and financial repercussions of overlooking inclusivity.

Outside the law, an inaccessible site presents ethical concerns, as it shows preferential treatment for those who can use a mouse, even if that’s unintentional. Even without legal action, public recognition of this bias may lead to a drop in visitors and a tainted public image.

Elements Of A Keyboard-Navigable Site

Thankfully, ensuring keyboard navigability is a straightforward user experience design practice. Because navigation is standard across OSes and browsers, keyboard-accessible sites employ a few consistent elements.

Focus Indicators

Web Accessibility In Mind states that sites must provide a visual indicator of elements currently in focus when users press Tab. Focus indicators are typically a simple box around the highlighted icon.

Focus indicators come standard in CSS, but some designers hide them; avoid using outline:0 or outline:none, which suppress them entirely. You can also increase the contrast or change the indicator’s color in CSS.
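
For instance, rather than removing the outline, you can strengthen it. A minimal sketch using the :focus-visible pseudo-class (the color values are just examples):

:focus-visible {
  outline: 3px solid #ffbf47; /* a high-contrast color of your choosing */
  outline-offset: 2px;        /* breathing room between outline and element */
}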

The CNN Breaking News homepage is a good example of a strong focus indicator. Pressing Tab immediately brings up the box, which is bold enough to see easily and even uses a white border when necessary to stand out against black or dark-colored site elements.

Logical Tab Order

The order in which the focus indicator moves between elements also matters. Generally speaking, pressing the Tab key should move it from left to right and top to bottom — the same way people read in English.

A few errors can stand in the way. Disabled buttons disrupt keyboard navigation flow by skipping an element with no explanation or highlighting it without making it clickable. Similarly, an interface where icons don’t fall in a predictable left-to-right, top-to-bottom order will make logical tab movement difficult.

The Sutton Maddock Vehicle Rental site is a good example of what not to do. When you press Tab, the focus indicator jumps from “Contact” to the Facebook link before going backward to the Twitter link. It starts at the right and moves left when it goes to the next line — the opposite order of what feels natural.

Skip Navigation Links

Skip links are also essential. These interactive elements let keyboard users jump to specific content without repeated keystrokes. Remember, these skip links must be among the first areas highlighted when you press Tab so they work as intended.

The HSBC Group homepage has a few skip navigation links. Pressing Tab pulls up three options, letting users quickly jump to whichever part of the site interests them.

Keyboard-Accessible Interactive Elements

Finally, all interactive elements on a keyboard-navigable site should be accessible via keystrokes. Anything people can click on or drag with a cursor should also support keyboard navigation and interaction. Enabling this is as simple as letting users select all items with the Tab or Arrow keys and activate them with Space or Enter.
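
Native elements like <button> handle this automatically. For a custom control, the usual pattern looks something like the sketch below, assuming the markup already carries role="button" and tabindex="0":

const customButton = document.querySelector('[role="button"]');

customButton.addEventListener('keydown', (event) => {
  if (event.key === 'Enter' || event.key === ' ') {
    event.preventDefault(); // Stop Space from scrolling the page.
    customButton.click();   // Reuse the existing click behavior.
  }
});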

Appropriately, this Arizona State University page on keyboard accessibility showcases this concept well. All drop-down menus are possible to open by navigating to them via Tab and pressing Enter, so users don’t need a mouse to interact with them.

How To Test For Keyboard Navigability

After designing a keyboard-accessible UX, you should test it to ensure that it works properly. The easiest way to do this is to explore the site solely with your keyboard. The chart below outlines the criteria to look for when determining whether your site is legitimately keyboard navigable.

  • Clickable Elements
    Keyboard navigable: All elements are reachable through the keyboard and open when you press Enter.
    Not keyboard navigable: Only some elements are possible to reach through the keyboard. Some links may be broken or not open when you press Enter.
  • Focus Indicators
    Keyboard navigable: Pressing Tab, Space, or Enter brings up a focus indicator that is easy to see in all browsers.
    Not keyboard navigable: Focus indicators may not appear when pressing all buttons. The box may be hard to see or only appear in some browsers.
  • Skip Navigation Links
    Keyboard navigable: Pressing Tab for the first time pulls up at least one skip link to take users to much-visited content or menus. Continuing to press Tab moves the focus indicator past these links to highlight elements on the page as normal.
    Not keyboard navigable: No skip links appear when pressing Tab for the first time. Alternatively, they appear after moving through all other elements. Skip links may not be functional.
  • Screen Reader Support
    Keyboard navigable: Screen readers can read each element when highlighted with the focus indicator.
    Not keyboard navigable: Some elements may not encourage any action from screen readers when highlighted.

The Web Content Accessibility Guidelines outline two test rules to verify keyboard navigability:

  1. The first ensures all interactive elements are accessible via the Tab key,
  2. The second checks for keyboard scroll functionality.

Employ both standards to review your UX before making a site live.

Typical issues include the inability to highlight elements with the Tab key or elements that don’t fall in a natural order. You can discover both problems by trying to access everything with your keyboard. However, you may prefer to conduct a navigability audit through a third party. Many private companies offer these services, but you can also use the Bureau of Internet Accessibility for a basic WCAG audit.

Make Your Site Keyboard Navigable Today

Keyboard navigability ensures you cater to all needs and preferences for an inclusive, accessible website design. While it’s straightforward to implement, it’s also easy to miss, so remember these principles when designing your UX and testing your site.

WCAG provides several techniques you can employ to meet keyboard accessibility standards and enhance your users’ experience:

Follow these guidelines and use WCAG’s test rules to create an accessible site. Remember to re-check it every time you add elements or change your UX.

Additionally, consider the following recommended reads to learn more about keyboards and their role in accessibility:

User-friendliness is an industry best practice that demonstrates your commitment to inclusivity for all. Even users without disabilities will appreciate intuitive, efficient keyboard navigation.

]]>
hello@smashingmagazine.com (Eleanor Hecks)
<![CDATA[Fostering An Accessibility Culture]]> https://smashingmagazine.com/2025/04/fostering-accessibility-culture/ https://smashingmagazine.com/2025/04/fostering-accessibility-culture/ Thu, 17 Apr 2025 08:00:00 GMT A year ago, I learned that my role as an accessibility engineer was at risk of redundancy. It was a tough moment, both professionally and personally. For quite some time, my mind raced with guilt, self-doubt, plain sadness... But as I sat with these emotions, I found one line of thought that felt productive: reflection. What did I do well? What could I have done better? What did I learn?

Looking back, I realized that as part of a small team in a massive organization, we focused on a long-term goal that we also believed was the most effective and sustainable path: gradually shaping the organization’s culture to embrace accessibility.

Around the same time, I started listening to “Atomic Habits” by James Clear. The connection was immediate. Habits and culture are tightly linked concepts, and fostering an accessibility culture was really about embedding accessibility habits into everyone’s processes. That’s what we focused on. It took us time (and plenty of trial and error) to figure this out, and while there’s no definitive playbook for creating an accessibility program at a large organization, I thought it might help others if I shared my experiences.

Before we dive in, here’s a quick note: This is purely my personal perspective, and you’ll find a bias towards culture and action in big organizations. I’m not speaking on behalf of any employer, past or present. The progress we made was thanks to the incredible efforts of every member of the team and beyond. I hope these reflections resonate with those looking to foster an accessibility culture at their own companies.

Goals Vs. Systems

To effectively shape habits, it’s crucial to focus on systems and processes (who we want to become) rather than obsessing over a final goal (or what we want to achieve). This perspective is especially relevant in accessibility.

Take the goal of making your app accessible. If you focus solely on achieving compliance without changing your systems (embedding accessibility into processes and culture), progress will be temporary.

For example, you might request an accessibility audit and fix the flagged issues to achieve compliance. While this can provide “quick” results, it’s often a short-lived solution.

Software evolves constantly: features are rewritten, old code is removed, and new functionality is added. Without an underlying system in place, accessibility issues can quickly resurface. Worse, this approach may reinforce the idea that accessibility is something external, checked by someone else, and fixed only when flagged. Not to mention that it becomes increasingly expensive the later accessibility issues are addressed in the process. It can also feel demoralizing when accessibility becomes synonymous with a long list of last-minute tickets when you are busiest.

Despite this, companies constantly focus on the goal rather than the systems.

“Accessibility is both a state and a practice.”

— Sommer Panage, SwiftTO talk, “Building Accessibility into Your Company, Team, and Culture”

I’ll take the liberty of tweaking that to an aspirational state. Without recognizing the importance of the practice, any progress made is at risk of regression.

Instead, I encourage organizations to focus on building habits and embedding good accessibility practices into their workflows. A strong system not only ensures lasting progress but also fosters a culture where accessibility becomes second nature.

What Is Your Actual Goal?

That doesn’t mean goals are useless — they’re very effective in setting up direction.

In my team, we often said (only half-jokingly) that our ultimate goal was to put ourselves out of a job. This mindset reflects an important principle: accessibility is a cross-organizational responsibility, not the task of a single person or team.

That’s why, in my opinion, focusing solely on compliance rather than culture transformation (or prioritizing the “state” of accessibility over the “practice”) is a flawed strategy.

The real goal should be to build a user-centric culture where accessibility is embedded in every workflow, decision, and process. By doing so, companies can create products where accessibility is not about checking boxes and closing tickets but delivering meaningful and inclusive experiences to all users.

How Do We Get There?

Different companies (of various sizes, structures, and cultures) will approach accessibility differently, depending on where they are in their journey. I have yet to meet, though, an accessibility team that felt it had enough resources. This makes careful resource allocation a cornerstone of your strategy. And while there’s no one-size-fits-all solution, shifting left (addressing issues earlier in the development process) tends to be the most effective approach in most cases.

Design Systems

If your company has a design system, partnering with the team that owns it can be one of your biggest wins. Fixing a single component used across dozens of places improves the experience everywhere it’s used. This approach scales beautifully.

Involvement in foundational decisions and discussions, like choosing color palettes, typography, and component interactions, and so on, can also be very valuable. Contributing to documentation and guidelines tailored to accessibility can help teams across the organization make informed decisions.

For a deeper dive, I recommend Feli Bernutz’s excellent talk, “Designing APIs: How to Ensure Accessibility in Design Systems.”

Still, I would encourage everyone to strive to change that mindset.

Doing accessibility for economic or legal reasons is valid, but it can lead to perverse incentives, where the bare minimum and compliance become the strategy, or where teams constantly need to prove their return on investment.

It is better to do it for the “wrong” reasons than not to do it at all. But ultimately, those aren’t the reasons we should be doing it.

The “13 Letters” podcast opened with an incredibly interesting two-part episode featuring Mike Shebanek. In it, Mike explains how Apple eventually renewed its commitment to accessibility because, in the state of Maine, schools were providing Macs and needed a screen reader for students who required one. It seems like a somewhat business-driven decision. But years later, Tim Cook famously stated, “When we work on making our devices accessible by the blind, I don’t consider the bloody ROI.” He also remarked, “Accessibility rights are human rights.”

That’s the mindset I wish more CEOs and leaders had. It is a story of how a change of mindset from “we have to do it” to “it is a core part of what we do” leads to a lasting and successful accessibility culture. Going beyond the bare minimum, Apple has become a leader in accessibility. An innovative company that consistently makes products more accessible and pushes the entire industry forward.

The Good News

Once good habits are established, they tend to stick around. When I was let go, some people (I’m sure trying to comfort me) said the accessibility of the app would quickly regress and that the company would soon realize their mistake. Unexpectedly for them, I responded that I actually hoped it wouldn’t regress anytime soon. That, to me, would be the sign that I had done my job well.

And honestly, I felt confident it wouldn’t. Incredible people with deep knowledge and a passion for accessibility and building high-quality products stayed at the company. I knew the app was in good hands.

But it’s important not to fall into complacency. Cultures can be taken for granted, but they need constant nurturing and protection. A company that hires too fast, undergoes a major layoff, gets acquired, experiences high turnover, or sees changes in leadership or priorities… Any of these can pretty quickly destabilize something that took years to build.

Wrapping Up

This might not be your experience, and what we did may not work for you, but I hope you find this insight useful. I have, as they say, strong opinions, but loosely held. So I’m looking forward to knowing what you think and learning about your experiences too.

There’s no easy way or silver bullet! It’s actually very hard! The odds are against you. And we tend to constantly be puzzled about why the world is against us doing something that seems so obviously the right thing to do: to invite and include as many people as possible to use your product, to remove barriers, to avoid exclusion. It is important to talk about exclusion, too, when we talk about accessibility.

“Even though we were all talking about inclusion, we each had a different understanding of that word. Exclusion, on the other hand, is unanimously understood as being left out (...) Once we learn how to recognize exclusion, we can begin to see where a product or experience that works well for some might have barriers for someone else. Recognizing exclusion sparks a new kind of creativity on how a solution can be better.”

— Kat Holmes

Something that might help: always assume goodwill and try to meet people where they are. I need to remind myself of this quite often.

“It is all about understanding where people are, meeting them where they’re at (...) People want to fundamentally do the right thing (...) They might not know what they don’t know (...) It might mean stepping back and going to the fundamentals (...) I know some people get frustrated about having to re-explain accessibility over and over again, but I believe that if we are not willing to do that, then how are we gonna change the hearts and minds of people?”

— Jennison Asuncion

I’d encourage you to:

  • If you haven’t, just start. No matter what.
  • Play the long game, and focus more on systems and processes than just goals.
  • Build a network: rally allies around you and secure buy-in from leadership by showing that accessibility is not extra work; issues that surface after the fact are really missed steps from earlier in the process.
  • Shift left and be strategic: reflect on where your limited resources can have the biggest, most lasting impact.
  • Be persistent. Be resilient.

But honestly, anything you can do is progress. And progress is all we need, just for things to be a little better every day. Your job is incredibly important. Thanks for all you do!

Accessibility: This is the way!

]]>
hello@smashingmagazine.com (Daniel Devesa Derksen-Staats)
<![CDATA[Inclusive Dark Mode: Designing Accessible Dark Themes For All Users]]> https://smashingmagazine.com/2025/04/inclusive-dark-mode-designing-accessible-dark-themes/ https://smashingmagazine.com/2025/04/inclusive-dark-mode-designing-accessible-dark-themes/ Tue, 15 Apr 2025 13:00:00 GMT Dark mode, a beloved feature in modern digital interfaces, offers a visually striking alternative to traditional light themes. Its allure lies in the striking visual contrast it provides, a departure from the light themes that have dominated our screens for decades.

However, its design often misses the mark on an important element — accessibility. For users with visual impairments or sensitivities, dark mode can introduce significant challenges if not thoughtfully implemented.

Hence, designing themes with these users in mind can improve user comfort in low-light settings while creating a more equitable digital experience for everyone. Let’s take a look at exactly how this can be done.

The Pros And Cons Of Dark Modes In Terms Of Accessibility

Dark mode can offer tangible accessibility benefits when implemented with care. For many users, especially those who experience light sensitivity, a well-calibrated dark theme can reduce eye strain and provide a more comfortable reading experience. In low-light settings, the softer background tones and reduced glare may help lessen fatigue and improve visual focus.

However, these benefits are not universal. For some users, particularly those with conditions such as astigmatism or low contrast sensitivity, dark mode can actually compromise readability. Light text on a dark background may lead to blurred edges or halo effects around characters, making it harder to distinguish content.

The Role Of Contrast In Dark Mode Accessibility

When you’re designing, contrast isn’t just another design element; it’s a key player in dark mode’s overall readability and accessibility. A well-designed dark mode, with the right contrast, can also enhance user engagement, creating a more immersive experience and drawing users into the content.

First and foremost, a cleverly executed dark mode can lower your site’s bounce rate (by as much as 70%, according to one case study from Brazil). Greeting visitors with a deep, dark theme can then compound the effect: lower bounce rates send positive engagement signals to Google, reinforcing your rankings in organic search results.

How is this possible? Well, the darker tones can hold attention longer, especially in low-light settings, leading to higher interaction rates while making your design more accessible. The point is, without proper contrast, even the sleekest dark mode design can become difficult to navigate and uncomfortable to use.

Designing For Contrast In Dark Mode

Instead of using pure black backgrounds, which can cause eye strain and make text harder to read, opt for dark grays. These softer tones help reduce harsh contrast and provide a modern look.

However, it’s important to note that color adjustments alone don’t solve technical challenges like anti-aliasing. In dark mode, anti-aliasing can produce halo effects, where the edges of the text appear blurred or overly luminous. To mitigate these issues, designers should test their interfaces on various devices and browsers and consider CSS properties (such as -webkit-font-smoothing) to improve text clarity.

Real-world user testing, especially with individuals who have visual impairments, is essential to fine-tune these details and ensure an accessible experience for all users.

For individuals with low vision or color blindness, the right contrast can mean the difference between a frustrating and a seamless user experience. To keep your dark mode design looking its best, don’t forget to also:

  • Try to choose high-contrast color combinations for improved readability.
  • Make sure you avoid overly saturated colors, as they can strain the eyes in dark mode.
  • Use contrast checker tools like WebAIM to evaluate your design choices and ensure accessibility.

These simple adjustments make a big difference in creating a dark mode that everyone can use comfortably.

The Importance Of Readability In Dark Themes

While dark themes provide a sleek and visually appealing interface, some features still require lighter colors to remain functional and readable.

Certain interactive elements like buttons or form fields need to be easily distinguishable, especially if it involves transactions or providing personal information. Simply put, no one wants to sign documents digitally if they have to look for the right field, nor do they want to make a transaction if there is friction.

In addition to human readability, machine readability is equally important in an age of increased automation. Machine readability refers to how effectively computers and bots can extract and process data from the interface without human intervention. It’s important for pretty much any type of interface that has automation built into its workflows. For example, if the interface utilizes machine learning, machine readability is essential: machine learning relies on accurate, quality data and effective interaction between different modules and systems, which makes machine readability critical to its effectiveness.

You can help ensure your dark mode interface is machine-readable in the following ways:

  • Use clear, semantic markup.
    Write your HTML so that it naturally describes the structure of the page. This means using proper tags (like <header>, <nav>, <main>, and <footer>) and ARIA roles. When your code is organized this way, machines can read and understand your page better, regardless of whether it's in dark or light mode.
  • Keep the structure consistent across themes.
    Whether users choose dark mode or light mode, the underlying structure of your content should remain the same. This consistency ensures that screen readers and other accessibility tools can interpret the page without confusion.
  • Maintain good color contrast.
    In dark mode, use color choices that meet accessibility standards. This not only helps people with low vision but also ensures that automated tools can verify your design’s accessibility.
  • Implement responsive styles with media queries.
    Use CSS media queries like ‘prefers-color-scheme’ to automatically adjust the interface based on the user’s system settings. This makes sure that the switch between dark and light modes happens smoothly and predictably, which helps both users and assistive technologies process the content correctly (see the sketch after this list).
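
To illustrate that last point, here is a minimal sketch: the custom properties swap with the user’s system preference while the DOM structure stays identical in both themes (the specific color values are just examples):

:root {
  --background: #ffffff;
  --text: #1a1a1a;
}

@media (prefers-color-scheme: dark) {
  :root {
    --background: #121212; /* dark gray rather than pure black */
    --text: #e8e8e8;
  }
}

body {
  background-color: var(--background);
  color: var(--text);
}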

Making sure that data, especially in automated systems, is clear and accessible prevents functionality breakdowns and guarantees seamless workflows.

Best Strategies For Designing Accessible Dark Themes

Although we associate visual accessibility with visual impairments, the truth is that it’s actually meant for everyone. Easier access is something we all strive for, right? But more than anything, practicality is what matters. Fortunately, the strategies below fit the description to a tee.

Strengthen Contrast For Usability

Contrast is the backbone of dark mode design. Without proper implementation, elements blend together, creating a frustrating user experience. Instead of looking at contrast as just a relationship between colors, try to view it in the context of other UI elements:

  • Rethink background choices.
    Instead of pure black, which can cause harsh contrast and eye strain, use dark gray shades like #121212. These tones offer a softer, more adaptable visual experience.
  • Prioritize key elements.
    Ensure interactive elements like buttons and links have contrast ratios exceeding 4.5:1. This not only aids readability but also emphasizes functionality. (A quick way to compute this ratio is sketched after this list.)
  • Test in real environments.
    Simulate low-light and high-glare conditions to see how contrast performs in real-life scenarios.
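
If you want to spot-check a color pair programmatically, the WCAG 2.x contrast math is compact enough to inline. A rough sketch, with colors as [r, g, b] arrays of 0–255 channels:

function relativeLuminance([r, g, b]) {
  // Linearize each sRGB channel, then weight per the WCAG formula.
  const linearize = (channel) => {
    const c = channel / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

function contrastRatio(colorA, colorB) {
  const [lighter, darker] = [relativeLuminance(colorA), relativeLuminance(colorB)]
    .sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

// White text on the recommended #121212 background:
console.log(contrastRatio([255, 255, 255], [18, 18, 18]).toFixed(1)); // ≈ 18.7, comfortably above 4.5

Dedicated tools like WebAIM’s checker remain the easiest option; a snippet like this is just handy for quick experiments during development.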

Pay Special Attention To Typography In Dark Themes

The use of effective typography is vital for preserving readability in dark mode. In particular, the right font choice can make your design both visually appealing and functional, while the wrong one can cause strain and confusion for users.

Thus, when designing dark themes, it’s essential to prioritize text clarity without sacrificing aesthetics. You can do this by prioritizing:

  • Sans-serif fonts
    They are often the best option for dark mode, as they offer a clean, modern look and remain highly readable when paired with a well-balanced contrast.
  • Strategic use of light elements
    Consider incorporating subtle, lighter accents to emphasize key elements, such as headings, call-to-action buttons, or critical information, without fully shifting to a light mode. These accents act as visual cues, drawing attention to important content.
  • Proper font metrics and stylization
    It’s important to consider font size and weight — larger, bolder fonts tend to stand out better against dark backgrounds, ensuring that your text is easy to read.

Make Sure Your Color Integration Is Thoughtful

Colors in dark mode require a delicate balance to ensure accessibility. It’s not as simple as looking at a list of complementary color pairs and basing your designs around them. Instead, you must think about how users with visual impairments will experience the dark theme design.

While avoiding color combinations like red and green for the sake of colorblind users is a widely known rule, visual impairment is more than just color blindness. In particular, you have to pay attention to:

  • Low vision: Ensure text is clear with strong contrast and scalable fonts. Avoid thin typefaces and cluttered layouts for better readability.
  • Light sensitivity (photophobia): Minimize bright elements against dark backgrounds to reduce eye strain. Provide brightness and contrast adjustment options for comfort.
  • Glaucoma: Use bold, clear fonts and simplify layouts to minimize visual confusion. Focus on reducing clutter and enhancing readability.
  • Macular degeneration: Provide large text and high-contrast visuals to aid users with central vision loss. Refrain from relying on centrally aligned, intricate elements.
  • Diabetic retinopathy: Keep designs simple, avoiding patterns or textures that obscure content. Use high-contrast and well-spaced elements for clarity.
  • Retinitis pigmentosa: Place essential elements centrally with high contrast for those with peripheral vision loss. Avoid spreading critical information across wide areas.
  • Cataracts: Reduce glare by using dark gray backgrounds instead of pure black. Incorporate soft, muted colors, and avoid sharp contrasts.
  • Night blindness: Provide bright, legible text with balanced contrast against dark themes. Steer clear of overly dim elements that can strain vision.

As you can see, there are a lot of different considerations, and no single solution will address all of them. You can't test an interface against every single individual who uses it. The best you can do is make it as accessible as possible for as many users as possible, and then adjust in later iterations if a segment of users runs into major issues.

Understanding Color Perception And Visual Impairments To Get The Ideal Dark Mode

Even though dark mode doesn't target only users with visual impairments, their input and their ease of use are perhaps the most important considerations.

The role of color perception in dark mode varies significantly among users, especially for those with visual impairments like color blindness or low vision. These conditions can make it challenging to distinguish certain colors on dark backgrounds, which can affect how users navigate and interact with your design.

In particular, some colors that seem vibrant in light mode may appear muted or blend into the background, making it difficult for users to see or interact with key elements. This is exactly why testing your color palette across different displays and lighting conditions is essential to ensure consistency and accessibility. However, you probably won’t be able to test for every single screen type, device, or environmental condition. Once again, make the dark mode interface as accessible as possible, and make adjustments in later iterations based on feedback.

For users with visual impairments, accessible color palettes can make a significant difference in their experience. Interactive elements, such as buttons or links, need to stand out clearly from the rest of the design, using colors that provide strong contrast and clear visual cues.

Slack, for example, does an amazing job of providing users with visual impairments with premade theme options, which can save someone hours of valuable time. Unsurprisingly, apps that do this have much more success attracting (and retaining) customers than those that don't.

Making Dark Mode A User Choice

Dark mode is often celebrated for its ability to reduce screen glare and blue light, making it more comfortable for users who experience certain visual sensitivities, like eye strain or discomfort from bright screens.

For many, this creates a more pleasant browsing experience, particularly in low-light environments. However, dark mode isn’t a perfect solution for everyone.

Users with astigmatism, for instance, may find it difficult to read light text on a dark background. The contrast can cause the text to blur or create halos, making it harder to focus. Likewise, some users prefer dark mode for its reduced eye strain, while others may find it harder to read or simply prefer light mode.

These different factors mean that adaptability is important to better accommodate users who may have certain visual sensitivities. You can allow users to toggle between dark and light modes based on their preferences. For even greater comfort, consider providing options to customize text colors and background shades.

Switching between dark and light modes should also be smooth and unobtrusive. Whether users are working in a bright office or relaxing in a dimly lit room, the transition should never disrupt their workflow.

On top of that, remembering their preference automatically for future sessions creates a consistent and thoughtful user experience. These adjustments turn dark mode into a truly personalized feature, tailored to elevate every interaction users have with the interface.

Conclusion

While dark mode offers benefits like reduced eye strain and energy savings, it still has its limits. Focusing on key elements like contrast, readability, typography, and color perception helps guarantee that your designs are inclusive and user-friendly for all of your users.

Offering dark mode as an optional, customizable feature empowers users to interact with your interface in a way that best suits their needs. Meanwhile, prioritizing accessibility in dark mode design creates a more equitable digital experience for everyone, regardless of their abilities or preferences.

]]>
hello@smashingmagazine.com (Alex Williams)
<![CDATA[Gild Just One Lily]]> https://smashingmagazine.com/2025/04/gild-just-one-lily/ https://smashingmagazine.com/2025/04/gild-just-one-lily/ Thu, 10 Apr 2025 15:00:00 GMT The phrase “gild the lily” implies unnecessary ornamentation, the idea being that adorning a lily with superficial decoration only serves to obscure its natural beauty. Well, I’m here to tell you that a little touch of what might seem like unnecessary ornamentation in design is exactly what you need.

When your design is solid, and you’ve nailed the fundamentals, adding one layer of decoration can help communicate a level of care and attention.

First, You Need A Lily

Let’s break down the “gild the lily” metaphor. First, you need a lily. Lilies are naturally beautiful, and each is unique. They don’t need further decoration. To play in this metaphor, let’s assume your design is already great. If not, you don’t have a lily. Get back to work on the fundamentals and check back in later (or keep reading anyhow).

Now that you've got a lily, let's talk gilding. To “gild” something is to cover it with a thin layer of gold. We're not talking about the inner beauty baked into the very soul of your product (that's the lily part of the metaphor). A touch of metaphorical gold foil on the surface can send a message of delight with a hint of decadence.

This gilding might come in the form of a subtle, animated transition or through a hint of colour and added depth in a drop shadow. Before we get into specifics, let’s make sure our metaphor doesn’t carry us too far.

Gild Sparingly

If we go too far with our gilding, we can communicate indulgence and excess rather than a hint of decadence.

An over-the-top design can be particularly irritating, depending on the state of mind of the person you’re designing for. For example, a flashy animation bragging about your new AI chat feature may not sit so well with a frustrated customer who can’t get their password reset to use it in the first place.

Wink At The Audience (Once)

Not every great product design can be as obviously beautiful as a lily. Even if you have a great design, it may not be noticeable to those enjoying its benefits. Our designs shouldn't always be noticeable, but sometimes it's fun to notice and appreciate a great design.

If you’re Apple, you don’t need to worry about your design going unnoticed. Nobody thinks the background color of the Apple website is white (#FFFFFF) because they forgot to specify one in their stylesheet (though I’m old enough to remember a time when the default background of the web was a battleship gray, #CCCCCC). It’s so clear from the general level of refinement and production quality on the Apple site that the white background is a deliberate choice.

You and I are not Apple. Your client is (probably) not Apple. You don’t have an army of world-class product photographers and motion designers working in a glass spaceship in Cupertino. You’re on a small team pushing up against budget and schedule constraints. Even with these limitations, you’re managing to make great products.

The great design behind your products might be so well done that it is invisible. The door handle is so well-shaped that you don’t notice how well-shaped it is. That button is so well-placed that no one thinks about where it is positioned.

When you’re nailing the fundamentals, it’s ok to wink at the audience once in a while. Not only is it ok, but it can even augment your design.

By calling just a touch of attention to the thoughtfulness of your design, you may make it even more delightful to experience. Take it one inch too far, though, and you’re distracting from the experience and begging for applause. Walk this line carefully.

Digital Lilies

A metaphor — even one with gold and lilies — only takes us so far. Let’s consider some concrete examples of gilding a digital product. When it comes to the web, a few touches of polish to reach for can include the following:

Not-quite black and not-quite white: Instead of solid black (#000000) and solid white (#FFFFFF) colors on the web, find subtle variations. They may look black/white on a first glance, but there’s a subtle implication of care and customization. An off-white background also allows you to have pure white elements, like form inputs, that stand out nicely against the backdrop. Be careful to preserve enough contrast to ensure accessible text.

Layered and color-hinted shadows: Josh Comeau writes about bringing color into shadows, including a tool to help generate shadows that just feel better.

Comfortable lettering: Find a comfortable line height and letter spacing for the font family you’re using. A responsive type system like Utopia can help define spacing that looks and feels comfortable across a variety of device sizes.

A touch of color: When you don’t want your brand colors to overwhelm your design or you would like a complementary color to accent an otherwise monotone site, consider adding a single, simple stripe of solid color along the top of the viewport. Even something a few pixels tall can add a nice splash of color without complicating the rest of the design. The site for the One React web framework does this nicely and goes further with a uniquely shaped yellow accent at the top of the site. It’s even more subtle if you’re seeing their dark-mode design, but it’s still there.

Illustration and photography: It’s easier than ever to find whimsical and fun illustrations for your site, but no stock image can replace a relevant illustration or photo so apt that it must have been crafted just for this case. A List Apart has commissioned a unique illustration in a consistent style for each of their articles for decades. You don’t have to be a gifted illustrator. There may be charm in your amateur scribbles. If not, hire a great artist.

Beware, Cheap Gilding

Symbols of decadence are valued because they are precious in some way. This is why we talk about gilding with gold and not brass. This is also why a business card with rounded corners may feel more premium than a simple rectangle. It feels more expensive because it is.

Printing has gotten pretty cheap, though, even with premium touches. Printing flourishes like rounded corners or a smooth finish don’t convey the same value and care as they did before they became quick up-sell options from your local (or budget online) print shop.

A well-worded and thoughtful cover letter used to be a great way to stand out from a pile of similar resumes. Now, it takes a whole different approach to stand out from a wall of AI-LLM-generated cover letters that say everything an employer might want to hear.

On the web, a landing page where new page sections slide and fade in with animation used to imply that someone spent extra time on the implementation. Now, a page with too much motion feels more like a million other templates enabled by site-building tools like Wix, Squarespace, and Webflow.

Custom fonts have also become so easy and ubiquitous on the web that sticking to system default fonts can be as strong a statement as a stylish typeface.

Does Anyone Care?

Is everyone going to notice that the drop shadows on your website have a hint of color? No. Is anyone going to notice? Maybe not. If you get the details right, though, people will feel it. These levels of polish are cumulative, contributing one percent here and there to the overall experience. They may not notice the hue of your drop shadow, but they may come away with some trust from a sense of the care that went into the design.

Most people aren’t web developers or designers. They don’t know the implementation details of CSS animations and box-shadows. Similarly, I’m not a car expert — far from it. I value reliability and affordability more than performance and luxury in a car. Even so, when I close the door on a high-quality vehicle, I can feel the difference.

On that next project, allow yourself to gild just one lily.

]]>
hello@smashingmagazine.com (Steven Garrity)
<![CDATA[Using Manim For Making UI Animations]]> https://smashingmagazine.com/2025/04/using-manim-making-ui-animations/ https://smashingmagazine.com/2025/04/using-manim-making-ui-animations/ Tue, 08 Apr 2025 15:00:00 GMT Say you are learning to code for the first time, in Python, for example, which is a great starting point for getting into development. You are likely to come across some information like “a variable stores a value.” That sounds straightforward, but if you are a beginner just starting, then it can also be a bit confusing. How does a variable store or hold something? What happens when we assign a new value to it?

To figure things out, you could read a bunch and watch tutorials, but sometimes, resources like these don’t help the concept fully click. That’s where animation helps. It has the power to take complex programming concepts and turn them into something visual, dynamic, and easy to grasp.

Let's break it down with an example: say we have a box labeled X. It starts out empty, gets filled with the value 5, and is then updated to 12, then 8, then 20, and finally 3.
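
Here is a minimal Manim sketch of that exact animation (the class name and timings are my own choices) so you can see where we are headed:

from manim import *

class VariableBox(Scene):
  def construct(self):
    box = Square(side_length=2)  # the "variable"
    name = Text("X").next_to(box, UP)
    self.play(Create(box), Write(name))  # the box starts out empty

    value = Text("5").move_to(box)
    self.play(Write(value))  # the first assignment

    for new in ["12", "8", "20", "3"]:
      # Each reassignment replaces the value currently in the box.
      self.play(Transform(value, Text(new).move_to(box)))
      self.wait(0.5)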

2. Click “Create App”

You’ll see three options:

  1. “Create With Replit Agent”,
  2. “Choose a Template”,
  3. “Import from GitHub”.

3. Select “Choose a Template”

Then, search for Manim and create your app. At this point, you don’t have to do anything else because this sets up everything for you (including the main.py file, a media folder, and all of the required dependencies).

Voilà! Now you can start coding your animations right away!

Using Manim For Math, Code, And UI/UX Visuals

Okay, you know Manim. Whether it’s for math, programming, physics, or even prototyping UI concepts, it’s all about making complex concepts easier to grasp through animation. But how does that work in practice? Let’s go through some ways Manim makes things clearer and more engaging.

1. Math & Geometry Visuals

Sometimes, math can feel a bit like a puzzle with missing pieces. But with Manim, numbers, shapes, and graphs move, making patterns and relationships easier to grasp. Take graphs, for example. When you tweak a parameter, Manim instantly updates the visualization so you can watch how a function changes over time. And that’s a game-changer for understanding concepts like derivatives or transformations.
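
A minimal sketch of that idea, using Manim's ValueTracker with an arbitrary quadratic (the function and ranges are my own choices):

from manim import *

class ParameterSweep(Scene):
  def construct(self):
    k = ValueTracker(0.5)  # the parameter we will tweak
    axes = Axes(x_range=[-3, 3, 1], y_range=[0, 9, 1])
    # Redraw the curve every frame so it follows the tracker's current value.
    curve = always_redraw(lambda: axes.plot(lambda x: k.get_value() * x**2, color=BLUE))
    self.add(axes, curve)
    self.play(k.animate.set_value(3), run_time=3)  # watch the parabola steepen
    self.wait()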

Geometry concepts also become easier and even more fun when you can see shapes move, giving you a clear understanding of rotation or reflection. If you're drawing a triangle with a compass and straightedge, for example, Manim can animate each step, making it easier to follow along and understand the idea.

2. Coding & Algorithms

As you may already know, coding is a process that runs step by step, and Manim makes that easy to see. Whether you are working on the front end or the back end, logic flows in a way that’s not always clear from just reading or writing code. With Manim, you can, for example, watch how a sorting algorithm moves numbers around or simply how a loop runs.

The same goes for data structures like linked lists, trees, and more. A binary tree makes more sense when you can see it grow and balance itself. Even complex algorithms like Dijkstra’s shortest path become clearer when you watch the path being calculated in real time, even if you may not have a background in math.

3. UI/UX Concepts & Motion Design

Although Manim is not a UI/UX design tool, it can be useful for demonstrating designs. Static images can’t always show the full picture, but with Manim, before-and-after comparisons become more dynamic, and of course, it makes it easier to highlight why a new navigation menu, for example, is more intuitive or how a checkout flow reduces friction.

Animated heatmaps can show click patterns over time, helping to spot trends more easily. Conversion funnels become clearer when each stage is animated, revealing exactly where users drop off.
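
As a rough illustration, a conversion funnel could be animated with Manim's BarChart (the stage names and numbers below are made up):

from manim import *

class ConversionFunnel(Scene):
  def construct(self):
    chart = BarChart(
      values=[100, 62, 38, 17],
      bar_names=["Visit", "Sign up", "Cart", "Purchase"],
      y_range=[0, 100, 20],
    )
    self.play(Create(chart))
    # Animate a hypothetical improvement to the checkout step.
    self.play(chart.animate.change_bar_values([100, 62, 38, 29]), run_time=2)
    self.wait()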

Let’s Manim!

Well, that’s a lot we covered! By now, you should have Manim installed in whatever way works best for you. But before we jump into the coding part, let’s quickly go over Manim’s core building blocks. Manim’s animations are made of three main concepts:

  • Mobjects,
  • Animations,
  • Scenes.

1. Mobjects (Mathematical Objects)

Everything you display in Manim is a Mobject (short for “mathematical object”). There are different types:

  • Basic shapes like Circle(), Rectangle(), and Arrow(),
  • Text elements for adding labels, and
  • Advanced structures like graphs, axes, and bar charts.

A mobject is more like a blueprint, and it won’t show up unless you add it to a scene. Here’s a brief example:

from manim import *

class MobjectExample(Scene):
  def construct(self):
    circle = Circle()  # Create a circle
    circle.set_fill(BLUE, opacity=0.5)  # Set color and transparency
    self.add(circle)  # Add to the scene
    self.wait(2)

A blue circle will appear for about two seconds when you run this.

2. Animations

Animations in Manim, on the other hand, are all about changing these objects over time. Rather than just displaying a static shape, we can make it move, rotate, fade, or transform into something else. We get all of that control through the Animation class.

If we use the same circle example from earlier, we can add animations to see how it works and compare the visual differences:

from manim import *

class AnimationExample(Scene):
  def construct(self):
    circle = Circle()
    circle.set_fill(BLUE, opacity=0.5) 

    self.play(FadeIn(circle))
    self.play(circle.animate.shift(RIGHT * 2))
    self.play(circle.animate.scale(1.5)) 
    self.play(Rotate(circle, angle=PI/4))  
    self.wait(2)

Here, we fade the circle in, move it to the right, scale it up, and rotate it. The play() method is what makes animations run. For example, FadeIn(circle) makes the circle gradually appear, and circle.animate.shift(RIGHT * 2) moves it two units to the right. If you want to slow things down, you can add run_time to control the duration, like the following:

self.play(circle.animate.scale(2), run_time=3)

This makes the scaling take three seconds instead of the default duration.

3. Scenes

Scenes are what hold everything together. A scene defines what appears, how it animates, and in what order. Every Manim script has a class that inherits from Scene and contains a construct() method. This is where we write our animation logic. For example:

class SimpleScene(Scene):
  def construct(self):
    text = Text("Hello, Manim!")
    self.play(Write(text))
    self.wait(2)

This creates a simple text animation where the words appear as if being written.

Bringing Manim To Design

As we discussed earlier, Manim is a great tool for UI/UX designers and front-end developers to visualize user interactions or to explain UI concepts. Think about how users navigate through a website or an app: they click buttons, move between pages, and interact with elements. With Manim, we can animate these interactions and see them play out step by step.

With this in mind, let’s create a simple flow where a user clicks a button, leading to a new page:

from manim import *

class UIInteraction(Scene):
  def construct(self):
    # Create a homepage screen
    homepage = Rectangle(width=6, height=3, color=BLUE)
    homepage_label = Text("Home Page").scale(0.8)
    homepage_group = VGroup(homepage, homepage_label)

    # Create a button
    button = RoundedRectangle(width=1.5, height=0.6, color=RED).shift(DOWN * 1)
    button_label = Text("Click Me").scale(0.5).move_to(button)
    button_group = VGroup(button, button_label)

    # Add homepage and button
    self.add(homepage_group, button_group)

    # Simulating a button click
    self.play(button.animate.set_fill(RED, opacity=0.5))  # Button press effect
    self.wait(0.5)  # Pause to simulate user interaction

    # Create a new page (simulating navigation)
    new_page = Rectangle(width=6, height=3, color=GREEN)
    new_page_label = Text("New Page").scale(0.8)
    new_page_group = VGroup(new_page, new_page_label)

    # Animate transition to new page
    self.play(FadeOut(homepage_group, shift=UP),  # Move old page up
      FadeOut(button_group, shift=UP),  # Move button up
      FadeIn(new_page_group, shift=DOWN))  # Bring new page from top
    self.wait(2)

The code creates a simple UI animation for a homepage displaying a button. When the button is clicked, it fades slightly to simulate pressing, and then the homepage and button fade out while a new page fades in, creating a transition effect.

If you think about it, scrolling is one of the most natural interactions in modern web and app design. Whether moving between sections on a landing page or smoothly revealing content, well-designed scroll animations make the experience feel fluid. Let me show you:

from manim import *

class ScrollEffect(Scene):
  def construct(self):
    # Create three sections to simulate a webpage
    section1 = Rectangle(width=6, height=3, color=BLUE).shift(UP*3)
    section2 = Rectangle(width=6, height=3, color=GREEN)
    section3 = Rectangle(width=6, height=3, color=RED).shift(DOWN*3)

    # Add text to each section
    text1 = Text("Welcome", font_size=32).move_to(section1)
    text2 = Text("About Us", font_size=32).move_to(section2)
    text3 = Text("Contact", font_size=32).move_to(section3)

    self.add(section1, section2, section3, text1, text2, text3)
    self.wait(1)

    # Simulate scrolling down
    self.play(
      section1.animate.shift(DOWN*6),
      section2.animate.shift(DOWN*6),
      section3.animate.shift(DOWN*6),
      text1.animate.shift(DOWN*6),
      text2.animate.shift(DOWN*6),
      text3.animate.shift(DOWN*6),
      run_time=3
    )
    self.wait(1)

This animation shows a scrolling effect by moving sections of a webpage upward, simulating how content shifts as a user scrolls. It is a simple way to visualize transitions that make the UI feel smooth and engaging.

Wrapping Up

Manim makes it easier to show how users interact with a design. You can animate navigations, interactions, and user behaviors to understand better how design works in action. Is there more to explore? Definitely! You can take these simple examples and build on them by adding more complex features.

But what I hope you take away from all of this is that subtle animations can help communicate and clarify concepts and that Manim is a library for making those sorts of animations. Traditionally, it's used to help explain mathematical and scientific concepts, but you can see just how useful it can be in front-end development, particularly when it comes to highlighting and visualizing UI changes.

]]>
hello@smashingmagazine.com (Joas Pambou)
<![CDATA[How To Build A Business Case To Promote Accessibility In Your B2B Products]]> https://smashingmagazine.com/2025/04/how-build-business-case-promote-accessibility-b2b-products/ https://smashingmagazine.com/2025/04/how-build-business-case-promote-accessibility-b2b-products/ Fri, 04 Apr 2025 12:00:00 GMT When I started working on promoting accessibility, I was fully convinced of its value and was determined to bring it to the business stakeholders. I thought that the moment I started pushing for it inside the company, my key stakeholders would be convinced, committed, and enlightened, and everyone would start working to make it possible.

I prepared a lovely presentation about the benefits of accessibility. I made sure my presentation reflected that accessibility is the right thing to do: it is good for everyone, including those who don’t have a disability; it improves usability, makes the code more robust, and, of course, promotes inclusivity. I confidently shared it with my stakeholders. I was so excited. Aaaaaand BOOM… I hit a wall. They didn’t show much interest. I repetitively got comments, such as:

  • It doesn’t bring much value to us.
  • It doesn’t impact the revenue.
  • The regulation doesn’t apply to us, so there is no reason.
  • Accessibility is just for a few people with disabilities.
  • It would cost too much.

“People don’t manage to understand the real value. How can they say it has no impact?” I thought. After some time of processing my frustration and thinking about it, I realized that maybe I was not communicating the value correctly. I was not speaking the same language, and I was just approaching it from my perspective. It was just a presentation, not a business case.

If there is something I had to learn on the job that I didn't learn in university, it is that if you want to move things forward in a company, you have to have a business case. I never thought that being a UX Designer would imply building so many of them. The thing with business cases, which I neglected in my first attempts, is that they put the focus on, well, “the business”.

The ultimate goal is to build a powerful response to the question “Why should WE spend money and resources on this and not on something else?” not “Why is it good?” in general.

After some trial and error, I understood a bit better how to tackle the main comments and answer this question to move the conversation forward. Of course, the business case and strategy you build will depend a lot on the specific situation of your company and your product, but here is my contribution, hoping it can help.

In this article, I will focus on two of the most common situations: pushing for accessibility in a new product or feature and starting to bring accessibility to existing products that didn’t consider it before.

Implementing accessibility has a cost. Everything in a project has a cost. If developers are solving accessibility issues, they are not working on new features, so at the very least, you have to consider the opportunity cost. You have to make sure that you transform that cost into an investment and that that investment provides good results. You need to provide some more details on how you do it, so here are the key questions that help me to build my case:

  • Why should we spend money and resources on this and not on something else?
  • What exactly do we want to do?
  • What are the expected results?
  • How much would it cost?
  • How can I make a decision?

Why Should We Spend Money And Resources On This And Not On Something Else?

Risk Prevention

There is a good chance that your stakeholders have heard about accessibility because of the regulations. In the past years, accessibility has become a hot topic, mainly motivated by the European Accessibility Act (EAA) and the Web Accessibility Directive (WAD) in Europe, the Americans with Disabilities Act (ADA) and Section 508 of the Rehabilitation Act in the US, and equivalent regulations in other countries. They should definitely be aware of them. However, unless they are from the legal department, they may not need to know every detail; just having an overview should be enough to understand the landscape. You can simplify it a bit, so no one panics.

One of the most useful slides I use is a summary table of the regulations with some key information:

  • What is the goal of the regulation?
  • Who is it targeting?
  • Relevant deadlines.
  • How does it affect us?
    This is essential information that you have to adapt to your business context. If you have some B2C activity or supply software to the government, you may be affected. Even if you are a purely private B2B company, you will be partly affected, as more and more clients may include accessibility as a requirement for all the software they purchase.
  • If your company operates only in one country, it would be a good idea to include a summary of your country-specific regulations.

In addition, explain how the WCAG relates to the regulation. In the end, it is a third-party international standard used as the baseline for most official laws and directives and comes up in conversations quite often.

Keep in mind that using the regulation to motivate your case can work, but only to some point. We are aware that the regulation about accessibility is getting stronger and the requirements are affecting a good number of companies, especially big companies, but still not everyone. If you only base your case on it, the easy answer is, “Yeah, well, but we are not required to do it”.

If we start working now, we will have time to prepare. If we consider accessibility for all the new features and projects, the cost won't be affected much, and we will be prepared for the future.

However, many companies still don’t see the urgency of working on it if they are not directly required to do so by the regulation yet, and it is not certain that they will need to do it in the future. They prefer not to focus on it until that moment arrives. It is not necessarily a problem to be prioritized now, and there may be more urgent matters.

They should be aware of the regulations and the situation. We should show them how they could be affected, but if we don’t show the real value that accessibility brings to the products and the company, the conversation may end there.

Explore If It Can Be A Competitive Advantage

Big companies are starting to consider accessibility as part of their procurement process, which means that it is a hard requirement to become a provider, a checkbox in the selection process. You can try reaching out to your sales department to see if any clients are asking about your plans regarding accessibility compliance. If so, make sure you document them in the business case. Include some rough background research about those clients:

  • Are they strategic clients?
  • Are they clients who already have one of our products and want to expand?
  • How much revenue can they potentially bring?
  • Are they important companies in the industry that others may use as a reference?
  • Was it a one-time question?
  • Did they try to push for it?

The potential revenue and interest from important clients can be a good motivation.

In addition, try to find out if your competitors care about accessibility or are compliant. You can go to their website and see if they have an accessibility statement, if they have any certification by external parties (normally on the footer), if they include their accessibility level on their sales materials, or just try basic keyboard navigation and run an automatic checker to see what their situation is. If none of them are compliant or their accessibility level is really low, becoming compliant or implementing accessibility may be a competitive advantage for you, a differentiator. On the other hand, if they are compliant and you are not, you may lose some deals because of it.

To sum up, check clients’ interest in the topic, compare the situation of different competitors, and see if accessibility could be a potential revenue generator.

Showcase The Value It Brings To Your Users

Depending on the industries your product focuses on, the assumption may be that you don’t have a big user base of people with disabilities, and therefore, your users won’t benefit much from accessibility.

Accessibility helps everyone, and if you are reading this article, it is probably because you agree with it. But that statement sounds too generic and a bit theoretical, so it is important to provide specific and accurate examples around your users, in particular, that help people visualize it.

Think of your user base. What characteristics do they have? In which situations do they use your software? Maybe most of your users don’t have a disability, or you don’t even have the data about it, but they are office workers who use your software a lot, and having good keyboard navigation would help them to be more efficient. Maybe most of them are over fifty years old and can benefit from adapting the font size. They might have to use the software in the open air and are affected by sun glare, so they need high contrast between elements, or they have to wear gloves and prefer larger target sizes.

And I would say you always have to account for neurodiversity. The idea is to identify the everyday situations your users face in which they can benefit from accessibility, even if they don't have a disability.

Another key thing is to look for specific feedback from your users and customers on accessibility. If you are lucky enough to have an insight repository, look for anything related. Keep in mind that people can be asking about accessibility without knowing that they are asking for accessibility, so don’t expect to find all the insights directly with an “accessibility” tag, but rather search for related keywords in the “user’s vocabulary” (colors, hard to click, mobile devices, zoom, keyboard, error, and so on).

If you don’t have access to a repository, you can contact customer service and try to find out help requests or feedback about it with them. Anything you find is evidence that your users, your specific users, benefit from accessibility.

Highlight The Overlap With Good Practices

Accessibility overlaps heavily with best practices for usability, design, and development. Working on it helps us improve the overall product quality without, in some cases, adding extra effort.

In terms of design, the overlap between accessibility improvements and usability improvements is huge. Things like writing precise error messages, having a clear page structure, relying on consistency, including clear labels and instructions, or keeping the user in control are all examples of the intersection. To visualize it, I like taking the Nielsen Norman Group's 10 usability heuristics and relating them to design-related success criteria from the WCAG.

For the developers, the work on accessibility creates a more structured code that is easier to understand. Some of the key aspects are the use of markup and the proper order of the code. In addition, the use of landmarks is key for managing responsive interfaces and, of course, choosing the most adequate component for the specific functionality needed and identifying it correctly with unique labels prevents the product from having unexpected behaviors.

As for the QA team, the test that they perform can vary a lot based on the product, but testing the responsiveness is normally a must, as well as keyboard navigation since it increases the efficiency of repetitive tasks.

Considering accessibility implies having clear guidelines that help you to work in the correct direction and overlap with things that we should already be doing.

What Exactly Do We Want To Do?

As we said, we are going to focus on two of the most common situations: pushing for accessibility in a new product or feature and starting to incorporate accessibility into existing products that didn’t consider it before.

New Products Or Features

If you are about to build a product from scratch, you have a wonderful opportunity to apply an accessibility-first approach and consider accessibility by default from the very beginning. This approach allows you to minimize the number of accessibility issues that end up reaching the user and reduces the cost of rework when trying to fix them or when looking for compliance.

One of the key things you need to successfully apply this approach is considering accessibility as a shared responsibility. The opposite of an accessibility-first approach is the retroactive consideration of accessibility. When you only care about accessibility after implementation and run an audit on the released product, you will find all the issues that have accumulated. Plenty of them could have been easily solved if you had known about them while designing or coding, but solving them afterward becomes complicated.

For example, if you only considered drag and drop for rearranging a list of items, now you have to rethink the interaction process and make sure it works in all the cases, devices, and so on. If single-point interactions were a requirement from the beginning, you would just implement them naturally and save time.

Applying an accessibility-first approach means that everyone has to contribute.

  • The POs have to make sure that accessibility is included as a requirement and that people have the time and resources to cover it.
  • Designers have to follow best practices and guidelines to make sure the design itself is accessible.
  • The devs should do the same, include markup and proper semantics, and follow the guidelines for accessible code.
  • QAs are the final filter before the product reaches the user. They should try to catch as many issues as possible so they can get fixed.

If everyone shares the ownership and spends a bit more time on including accessibility in their task, the overall result will have a good base. Of course, you may still need to tackle some specific issues with an expert, and when auditing the final product, you will probably still find some issues that escaped the process, but the number will be drastically lower.

In addition, the process of auditing your product can get much lighter. Running an accessibility audit means first defining who will do it: is it internal or external? If it is external, which providers? How long would it take to negotiate the contract?

Afterward, you have to set the scope of the audit. It is impossible to check the full product, so you start by checking the most important workflows and key pages. Then, you will do the analysis. The result is normally a list of issues prioritized based on the user impact and some recommendations for remediating it.

Once you have the issues, you have to plan the remediation and figure out how much of the teams' capacity to allocate based on when you want the fixes ready. You also have to group similar issues together to avoid context switching during remediation, increase efficiency, and eliminate duplicated issues (the auditors may not know the architecture of the product, so you may find several documented issues that, in reality, are just one because they come from the same component).

Considering this full process, for a large product, you can easily spend three months before you even start the actual remediation of the issues. Applying an accessibility-first approach means that far fewer issues make it to the audit of the released product, so the auditing and fixing go much faster.

If you can apply this approach, you should definitely consider the need for educational resources and their impact. You don’t want people just to work on accessibility but to understand the value they are creating when doing it (I am preparing another article that focuses on this). You want them to feel comfortable with the topic and understand what their responsibilities are and which things they have to pay attention to. Check if you already have accessibility resources inside the company that you can use. The important thing for the business is that those resources are going to contribute to reducing the effort.

The implementation of an accessibility-first approach has a very clear learning curve. In the beginning, people will take a bit of extra time to consider accessibility as part of their task, but after they have done it for several tasks, it comes naturally, and the effort needed to implement it really drops.

Think of “not relying on color alone to convey information”: as a designer, the first couple of times, you have to figure something out instead of just changing the color of a text or icon to convey a status, and you spend some time looking for solutions. Afterward, you already have a set of strategies in mind that lets you choose a valid option almost automatically.

Using an accessibility-first approach for new products is a clear strategy, but it is also valid for new features in an existing product. If you include it by default in anything new you create, you are preventing new issues from accumulating.

To sum up, applying an accessibility-first approach is really beneficial.

Considering accessibility from the beginning can help you to largely reduce the number of issues that may appear in audits after the release since it prevents the issues from accumulating, distributes the effort across the full product team, and substantially reduces the cost, as there will be less need for retroactive remediation of the issues that appear.

If you can implement an accessibility-first approach, do it.

Existing Products Or Features

If you try to bring accessibility to legacy products that have been running for many years, an accessibility-first approach may not be enough. In these cases, there are a million topics competing for priority and resources. Accessibility may be perceived as a massive effort that brings reduced value.

You may face a product that can have a big technical debt, that may not have a big user base of people with disabilities, or in which the number of existing accessibility issues is so overwhelming that you would need five years to solve them. You won't be able to move forward if you try to solve all the problems at once. Here are some of the strategies that have worked for me to kick off the work on accessibility.

Start by checking the Design System. If the Design System has accessibility issues, they are going to be inherited by all the products that use them, so it is better to solve them at a higher level than to have each product team solving the exact same issue in all their products. You can begin by taking a quick look at it:

  • Does it consider color contrast?
  • And target size?
  • Does the documentation include any accessibility considerations or guidelines?
  • Are there color-dependent components?

If you have a dedicated team for the Design System, you can also reach out to them. You can find out what is their level of awareness on the topic. If they don’t have much knowledge, you can give them an introduction or help them identify and fix the knowledge gaps they have.

If you notice some issues, you can organize a proper audit of the design system from the design and development perspective and pair up with them to fix as much as you can. It is a good way of getting some extra hands to help you while tackling strategic issues.

When working on the Design System, you can also spot which components or areas are more complex and create guidelines and documentation together with them to help the teams reuse those components and patterns, leveraging accessibility.

If the Design System is in good shape, you don't have one, or you prefer to focus only on the product, you need to start by analyzing and fixing the most relevant parts, and you have to set a manageable scope. I recommend taking the most relevant workflows and the ones users rely on most; two or three of them is a good start. Inside those workflows, try picking pages with different structures so you have a representative sample: for instance, one with a form, a table, plain text, lots of images, and so on. In many cases, pages that share the same structure share the same problems, so more variety in the sample helps you catch more critical issues.

Once you have chosen the workflows and screens, you can audit them, but with a reduced scope. If your product has never considered accessibility, it is likely to have way too many issues. When doing an audit, you normally test compliance with all the success criteria (59 if we consider levels A and AA) and do manual testing with different browsers, screen readers, and devices. Then, document each of the issues, prioritize them, and include the remediation in the planning.

It takes a lot of time, and you may get hundreds of issues, or even thousands, which makes you feel like “I will never get this done” and, if you ever do finish, like “I am finally done with this; I don't want to hear about it for a long time”. If this is the situation you are forecasting for the business, you will most likely not get the green light for the project. It is too much of an investment. So, unless there are hard compliance requirements coming from some really strategic customers, you are going to get stuck.

As we said, ideally, we would do a complete audit and fix everything, but delivering some value is better than delivering nothing, so instead, you can propose a reduced first audit to get you on the move. Rather than doing a detailed audit of all 59 criteria, I normally focus on these three things:

  • Running an automatic check. It is very fast and prepares the report by itself. Though it is only capable of finding around 30% of the issues, it is a good start (see the sketch after this list).
  • Doing basic manual keyboard testing, checking that all the interactive elements are focusable, in the logical order, and following the expected keyboard command interactions.
  • Doing a quick responsive test. Basically, what breaks when I change the viewport? Do I have information on top of each other when I zoom in? Can I still use the functionalities?
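
To give you an idea of how light that first automatic check can be, here is a sketch using Selenium together with the third-party axe-selenium-python package (the URL is a placeholder, and both packages are assumed to be installed):

# pip install selenium axe-selenium-python
from selenium import webdriver
from axe_selenium_python import Axe

driver = webdriver.Firefox()
driver.get("https://example.com/checkout")  # one of your key workflow pages

axe = Axe(driver)
axe.inject()  # add the axe-core script to the page
results = axe.run()  # run the accessibility checks
axe.write_results(results, "a11y-report.json")
print(len(results["violations"]), "violation types found")

driver.quit()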

With these three tests, you will already have a large number of critical issues and blockers to solve while staying close to the overlapping area between accessibility and good design and development practices and not taking too much time.

Remember, the goal of this first audit is to get easy-to-identify critical issues to have a starting point, not to solve all the problems. In this way, you can start delivering value while building the idea that accessibility is not a one-time fix but a continuous process. In addition, it gives you a lot of insights into the aspects in which the teams need guidelines and training, as well as defining the minimum things that the different roles have to consider when working to reduce the number of future accessibility issues. You want to take it as a learning opportunity.

Note: Accessibility Insights is a good tool for auditing by yourself, as it includes explanations and visual helpers and guides you through the process.
Screen reader testing should be added to the audit scope if you can, but it can be hard to do if you have never done it before, and some of the issues will already be highlighted by the automatic check and the keyboard testing.

What Are The Expected Results?

The results you want to achieve are going to have a huge impact on the strategy.

Are you aiming for compliance or bringing value to the users and preparing for the future?

This is a key question you have to ask yourself.

Compliance with the regulation is pretty much a binary option. To be compliant with the WCAG at a certain level, let’s say AA, you should pass all the success criteria for that level and the previous ones. Each success criterion intends to help people with a specific disability. If you try to be compliant only with some of them, you would be leaving people out. Of course, in reality, there are always going to be some minor issues and violations of a success criterion that reach the user. But the idea is that you are either compliant or not. With this in mind, you have to make sure that you consider several audits, ideally by a certified external party that can reassure your compliance.

Trying to become compliant with a product that has never considered accessibility can become quite a large task, so it may not be the best first step. But, in general, if you are aiming for full compliance, it may be because you have strong motivations coming from the risk reduction and competitive advantage categories.

On the other hand, if your goal is to start including accessibility in the product to prepare for the future and help users, you will probably target a lighter result. Rather than looking for perfection, you want to start to have a level that is good enough as soon as possible.

Compliance is binary, but accessibility is a spectrum. You can have a pretty good level of accessibility even if you are not fully compliant.

You can focus on identifying and solving the most critical issues for the users and on applying an accessibility-first approach to new developments. The result is probably not compliant and not perfect, but it eliminates critical barriers without a huge effort. It will have basic accessibility to help users, and you can apply an iterative approach to improve the level.

Keep in mind that it is impossible to have a 100% accessible product. As the product evolves, there are always going to be some issues that escape the test and reach the user. The important thing is to work to ensure that these issues are minor ones and not blockers or critical ones. If you can get the resources to fix the most important problems, you are already bringing value, even if you don’t reach compliance.

How Much Would It Cost?

An accessibility-first approach typically means you have to assign 5 to 10% of the product capacity to apply it (the number drops toward 5% as teams move along the learning curve). The underlying risk, though, is that the business still considers these percentages to be too high. To prevent this from happening, you have to strongly highlight the side value of accessibility and the huge overlap it has with the design and development best practices we mentioned above.

In addition, to help justify the cost, you can look for examples inside your company that allow you to compare it with the cost of retrofitting accessibility. If there aren't any, you can take a basic issue, such as a page lacking heading structure, and use it to illustrate the point: adding the structure after the product is released requires substantial rework. Ask a developer to help you estimate the effort of adding a heading structure to 40 different pages after release.

As for introducing accessibility in existing products, the cost can be quite hard to estimate. Having a rough audit can help you understand how many critical issues you have at the start, and you can ask developers to help you estimate some of the changes to get a rough idea.

The most interesting approach that helps you to reduce the “cost of accessibility” is exploiting the overlap between accessibility and usability or product features.

If you attach accessibility improvements to usability or UX ones, then it doesn’t really need dedicated capacity. For example, if some of the inputs are lacking labels or instructions and your users get confused, it is a usability problem that overlaps with accessibility. Normally, accessibility issues related to the Reflow criteria are quite time-consuming, as they rely on a proper responsive design. But isn’t it just good design?

I recommend checking the list of features in the product backlog and the feedback from the users to find out which accessibility improvements you can combine with them, especially features that have priority according to the product strategy (such as enabling the product on mobile devices or improving efficiency by promoting keyboard navigation).

The bigger the overlap, the more you can reduce the effort. This said, I would say it is better not to make it too ambitious when you are starting. It is better to start moving, even if it is slowly, than to hit a wall. When you manage to start with it, you will spark curiosity in other people, gain allies, and have results that can help you to expand the project and the scope.

You can also consider an alternative approach: define an affordable capacity that you can dedicate based on your product's situation (maybe 10 or 15%), and set the scope to match it.

Finally, it is also important to gather the existing resources you have access to, internal or external: guidelines, an accessible Design System, related company goals, educational sessions, and so on. Whatever is already there is something you can use, and it doesn't add to the total cost of the project. If the Design System is accessible, it would be a waste not to leverage it and implement the components in an accessible way. You can put together an overview to show the support you have.

How Can I Make A Decision?

Business stakeholders are short on time and have many things in mind. If you want them to make a decision and consider all the factors when making it, you have to help them visualize them together in an executive summary.

If there is a single direction that you are trying to promote, for example, implementing an accessibility-first approach for new products and features, you can put on a slide the three key questions we mentioned above and the answers to those questions:

  • What exactly do we want to do?
  • What are the expected results?
  • How much would it cost?

If there are different directions you can take, for example, you want to start to incorporate accessibility into products that meet certain conditions, or you can afford different capacities dedicated to accessibility for different products, you can use a decision-making diagram or a decision-making matrix. The idea is to visualize the different criteria that can affect the strategy and the adapted result for each of them.

For example,

  • Do I have clients inquiring about accessibility?
  • Is the product already using an accessible design system?
  • Are we considering opening part of the product to B2C?
  • Is the product going to take responsiveness and mobile interactions as a priority?
  • Do we want to expand the product target market to governmental institutions?

Mapping out the factors and possible directions can help you and decision-makers understand which products can be a better starting point for accessibility, where it makes sense to allocate more capacity, and which possibilities are open. This becomes especially relevant when you are trying to bring accessibility to several products at the same time.

Whatever representation you choose for your conditions, make sure it visualizes the answers to those questions to facilitate the decision-making process and get approval. I generally include it at the end of the presentation, or even at the beginning and the end.

Keep It Up!

Even if your business case is really good, sometimes you don’t get to have a big impact due to circumstances. It may be that there is a big shift in priorities, that the stakeholders change, that your contract ends (if you are a consultant), or that the company just doesn’t have the resources to work on it at that moment, and it gets postponed.

I know it can be very frustrating, but don't lose motivation. Change can move quite slowly, especially in big companies, but if you have put the topic into people's minds, it will be back on the table. In the meantime, you can try organizing evangelization sessions for the teams to find new allies and share your passion. You may need to wait a bit longer, but there will be more opportunities to push the topic again, and since people already know about it, you will probably get more support. You have initiated the change, and your effort will not be lost.

Key Points
  • Highlight the specific impact of accessibility on your specific products and users.
  • Check if accessibility could be a competitive differentiator.
  • Leverage the overlap between accessibility and good practices or product features to reduce the effort.
  • Include the existing resources and how you can benefit from them.
  • Clarify the expected result based on the effort.
  • Visualize the key points of the strategy to help the decision-making and approval process.
  • It is better to start with a small scope and iterate than not start at all.
]]>
hello@smashingmagazine.com (Gloria Diaz Alonso)
<![CDATA[Building A Drupal To Storyblok Migration Tool: An Engineering Perspective]]> https://smashingmagazine.com/2025/04/building-drupal-storyblok-migration-tool-engineering-perspective/ https://smashingmagazine.com/2025/04/building-drupal-storyblok-migration-tool-engineering-perspective/ Wed, 02 Apr 2025 12:00:00 GMT This article is a sponsored by Storyblok

Content management is evolving. The traditional monolithic CMS approach is giving way to headless architectures, where content management and presentation are decoupled. This shift brings new challenges, particularly when organizations need to migrate from legacy systems to modern headless platforms.

Our team encountered this scenario when creating a migration path from Drupal to Storyblok. These systems handle content architecture quite differently — Drupal uses an entity-field model integrated with PHP, while Storyblok employs a flexible Stories and Blocks structure designed for headless delivery.

If you just need to use a script to do a simple — yet extensible — content migration from Drupal to Storyblok, I already shared step-by-step instructions on how to download and use it. If you’re interested in the process of creating such a script so that you can write your own (possibly) better version, stay here!

We observed that developers sometimes struggle with manual content transfers and custom scripts when migrating between CMSs. This led us to develop and share our migration approach, which we implemented as an open-source tool that others could use as a reference for their migration needs.

Our solution combines two main components: a custom Drush command that handles content mapping and transformation and a new PHP client for Storyblok’s Management API that leverages modern language features for improved developer experience.

We’ll explore the engineering decisions behind this tool’s development, examining our architectural choices and how we addressed real-world migration challenges using modern PHP practices.

Note: You can find the complete source code of the migration tool in the Drupal exporter repo.

Planning The Migration Architecture

The journey from Drupal to Storyblok presents unique architectural challenges. The fundamental difference lies in how these systems conceptualize content: Drupal structures content as entities with fields, while Storyblok uses a component-based approach with Stories and Blocks.

Initial Requirements Analysis

A successful migration tool needs to understand both systems intimately. Drupal’s content model relies heavily on its Entity API, storing content as structured field collections within entities. A typical Drupal article might contain fields for the title, body content, images, and taxonomies. Storyblok, on the other hand, structures content as stories that contain blocks, reusable components that can be nested and arranged in a flexible way. It’s a subtle difference that shaped our technical requirements, particularly around content mapping and data transformation, but ultimately, it’s easy to see the relationships between the two content models.
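
To make that relationship concrete, here is a minimal sketch of the entity-to-story mapping idea. The actual tool is a PHP Drush command, so this is written in JavaScript purely for illustration, and the field names (such as field_image) and the article component are hypothetical placeholders for whatever your Drupal content type and Storyblok blocks define.

// Hypothetical mapping of a Drupal node to a Storyblok story payload
function mapNodeToStory(node) {
  return {
    story: {
      name: node.title,
      slug: node.path,
      content: {
        component: 'article',    // a block type you define in Storyblok
        title: node.title,
        body: node.body,         // rich text transfers largely one-to-one
        image: node.field_image, // later replaced by an uploaded asset ID
      },
    },
  };
}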

Technical Constraints

Early in development, we identified several key constraints. Storyblok’s Management API enforces rate limits that affect how quickly we can transfer content. Media assets must first be uploaded and then linked. Error recovery becomes essential when migrating hundreds of pieces of content.

The brand-new Management API PHP client handles these constraints through built-in retry mechanisms and response validation, so in writing a migration script, we don’t need to worry about them.

Tool Selection

We chose Drush as our command-line interface for several reasons. First, it’s deeply integrated with Drupal’s bootstrap process, providing direct access to the Entity API and field data. Second, Drupal developers are already familiar with its conventions, making our tool more accessible.

The decision to develop a new Management API client came from our experience with the evolution of PHP since we developed the first PHP client, and our goal to provide developers with a dedicated tool for this specific API that offered an improved DX and a tailored set of features.

This groundwork shaped how we approached the migration workflow.

The Building Blocks: A New Management API Client

A content migration tool interacts heavily with Storyblok’s Management API: creating stories, uploading assets, and managing tags. Each operation needs to be reliable and predictable. Our brand-new client simplifies these interactions through intuitive method calls. The client handles authentication, request formatting, and response parsing behind the scenes, letting developers focus on content operations rather than API mechanics.

Design For Reliability

Content migrations often involve hundreds of API calls. Our client includes built-in mechanisms for handling common scenarios like rate limiting and failed requests, and its response handling pattern provides clear feedback about operation success. A logger can be injected into the client class, as we did with the Drush logger in our migration script from Drupal.

Improving The Development Experience

Beyond basic API operations, the client reduces cognitive load through predictable patterns. Data objects provide a structured way to prepare content for Storyblok, which validates data early in the process and catches potential issues before they reach the API.

Designing The Migration Workflow

Moving from Drupal’s entity-based structure to Storyblok’s component model required careful planning of the migration workflow. Our goal was to create a process that would be both reliable and adaptable to different content structures.

Command Structure

The migration leverages Drupal’s entity query system to extract content systematically. By default, access checks were disabled (a reversible business decision) to focus solely on migrating published nodes.

Key Steps And Insights

  • Text Fields

    • Required minimal effort: values like value() mapped directly to Storyblok fields.
    • Rich text posed no encoding challenges, enabling straightforward 1:1 transfers.
  • Handling Images

    1. Upload: Assets were sent to an AWS S3 bucket.
    2. Link: Storyblok’s Asset API upload() method returned an object_id, simplifying field mapping.
    3. Assign: The asset ID and filename were attached to the story.
  • Managing Tags

    • Tags extracted from Drupal were pre-created via Storyblok’s Tag API (optional but ensures consistency).
    • When assigning tags to stories, Storyblok automatically creates missing ones, streamlining the process.

Why Staged Workflows Matter

The migration avoids broken references by prioritizing dependencies (assets first, tags next, content last). While pre-creating tags adds control, teams can adapt this logic; for example, letting Storyblok auto-generate tags saves time.

Flexibility is key: every decision (access checks, tag workflows) can be adjusted to align with project goals.

Real-World Implementation Challenges

Migrating content between Drupal and Storyblok presents challenges that you, as the implementer, may encounter.

For example, when dealing with large datasets, you may find that Drupal sites with thousands of nodes can quickly hit the rate limits enforced by Storyblok’s management API. In such cases, a batching mechanism for your requests is worth considering. Instead of processing every node at once, you can process a subset of records, wait for a short period of time, and then continue.
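
As a rough illustration of that batching pattern, here is a minimal sketch, written in JavaScript for brevity (the actual migration script is PHP). The batch size and delay are arbitrary placeholders you would tune against the documented rate limits.

// Process records in small batches with a pause between them
async function processInBatches(nodes, createStory, batchSize = 5, delayMs = 1000) {
  for (let i = 0; i < nodes.length; i += batchSize) {
    const batch = nodes.slice(i, i + batchSize);
    // Send one batch of requests in parallel...
    await Promise.all(batch.map((node) => createStory(node)));
    // ...then wait briefly before starting the next batch.
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}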

Alternatively, you could use the createBulk method of the Story API in the Management API, which allows you to handle multiple story creations with built-in rate limit handling and retries. Another potential hurdle is the conversion of complex field types, especially when Drupal’s nested structures or Paragraph fields need to be mapped to Storyblok’s more flexible block-based model.

One approach is first to analyze the nesting depth and structure of the Drupal content, then flatten deeply nested elements into reusable Storyblok components while maintaining the correct hierarchy. For example, a paragraph field with embedded media and text can be split into blocks within Storyblok, with each component representing a logical section of content. By structuring data this way before migration, you ensure that content remains editable and properly structured in the new system.

Data consistency is another aspect that you need to manage carefully. When migrating hundreds of records, partial failures are always risky. One approach to managing this is to log detailed information for each migration operation and implement a retry mechanism for failed operations.

For example, wrapping API calls in a try-catch block and logging errors can be a practical way to ensure that no records are silently dropped. When dealing with fields such as taxonomy terms or tags created on the fly in Storyblok, you may run into duplication issues. A good practice is to perform a check before creating a new tag. This could involve maintaining a local cache of previously created tags and checking against them before sending a create request to the API.

The same goes for images; a check could ensure you don’t upload the same asset twice.
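
Here is a minimal sketch of that local-cache idea, again in JavaScript for illustration (the real tool is PHP); createTag and log stand in for your API client and logger.

// Keep a cache of created tags and log failures instead of dropping them
const createdTags = new Set();

async function ensureTag(name, createTag, log) {
  if (createdTags.has(name)) return; // skip duplicates
  try {
    await createTag(name);
    createdTags.add(name);
  } catch (error) {
    // Surface the failure so no record is silently lost
    log(`Failed to create tag "${name}": ${error.message}`);
  }
}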

Lessons Learned And Looking Forward

A dedicated API client for Storyblok streamlined interactions, abstracting backend complexity while improving code maintainability. Early use of structured data objects to prepare content proved critical, enabling pre-emptive error detection and reducing API failures.

We also ran into some challenges and see room for improvement:

  • Encoding issues in rich text (e.g., HTML entities) were resolved with a pre-processing step
  • Performance bottlenecks with large text/images required memory optimization and refined request handling

Enhancements could include support for Drupal Layout Builder, advanced validation layers, or dynamic asset management systems.

💡 For deeper dives into our Management API client or migration strategies, reach out via Discord, explore the PHP Client repo, or connect with me on Mastodon. Feedback and contributions are welcome!

]]>
hello@smashingmagazine.com (Edoardo Dusi)
<![CDATA[Blossoms, Flowers, And The Magic Of Spring (April 2025 Wallpapers Edition)]]> https://smashingmagazine.com/2025/03/desktop-wallpaper-calendars-april-2025/ https://smashingmagazine.com/2025/03/desktop-wallpaper-calendars-april-2025/ Mon, 31 Mar 2025 13:00:00 GMT Starting the new month with a little inspiration boost — that’s the idea behind our monthly wallpapers series which has been going on for more than fourteen years already. Each month, the wallpapers are created by the community for the community, and everyone who has an idea for a design is welcome to join in — experienced designers just like aspiring artists. Of course, it wasn’t any different this time around.

For this edition, creative folks from all across the globe once again got their ideas flowing to bring some good vibes to your screens. You’ll find their wallpapers compiled below, along with a selection of timeless April favorites from our archives that are just too good to be forgotten. A huge thank-you to everyone who shared their designs with us this month — you’re smashing!

If you too would like to get featured in one of our upcoming wallpapers posts, please don’t hesitate to submit your design. We can’t wait to see what you’ll come up with! Happy April!

  • You can click on every image to see a larger preview.
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experiences through their works. This is also why the themes of the wallpapers weren’t influenced by us in any way but rather designed from scratch by the artists themselves.

April Blooms And Easter Joy

“April bursts with color, joy, and the magic of new beginnings. As spring awakens, Easter fills the air with wonder — bunnies paint playful masterpieces on eggs, and laughter weaves through cherished traditions. It’s a season to embrace warmth, kindness, and the simple beauty of blooming days.” — Designed by LibraFire from Serbia.

Walking Among Chimpanzees

“It’s April, and we’re heading to Tanzania with Jane Goodall, her chimpanzees, and her reflection that we are all important: ‘Every individual matters. Every individual has a role to play. Every individual makes a difference.’” — Designed by Veronica Valenzuela from Spain.

Eggcited

Designed by Ricardo Gimenes from Spain.

2001

Designed by Ricardo Gimenes from Spain.

Swing Into Spring

“Our April calendar need not mark any special occasion — April itself is a reason to celebrate. It was a breeze creating this minimal, pastel-colored calendar design with a custom lettering font and plant pattern for the ultimate spring feel.” — Designed by PopArt Studio from Serbia.

Spring Awakens

“We all look forward to the awakening of a life that spreads its wings after a dormant winter and opens its petals to greet us. Long live spring, long live life.” — Designed by LibraFire from Serbia.

Inspiring Blossom

“‘Sweet spring is your time is my time is our time for springtime is lovetime and viva sweet love,’ wrote E. E. Cummings. And we have a question for you: Is there anything more refreshing, reviving, and recharging than nature in blossom? Let it inspire us all to rise up, hold our heads high, and show the world what we are made of.” — Designed by PopArt Studio from Serbia.

Dreaming

“The moment when you just walk and your imagination fills up your mind with thoughts.” — Designed by Gal Shir from Israel.

Clover Field

Designed by Nathalie Ouederni from France.

Rainy Day

Designed by Xenia Latii from Berlin, Germany.

A Time For Reflection

“‘We’re all equal before a wave.’ (Laird Hamilton)” — Designed by Shawna Armstrong from the United States.

Purple Rain

“This month is International Guitar Month! Time to get out your guitar and play. As a graphic designer/illustrator seeing all the variations of guitar shapes begs to be used for a fun design. Search the guitar shapes represented and see if you see one similar to yours, or see if you can identify some of the different styles that some famous guitarists have played (BTW, Prince’s guitar is in there and purple is just a cool color).” — Designed by Karen Frolo from the United States.

Wildest Dreams

“We love the art direction, story, and overall cinematography of the ‘Wildest Dreams’ music video by Taylor Swift. It inspired us to create this illustration. Hope it will look good on your desktops.” — Designed by Kasra Design from Malaysia.

Sakura

“Spring is finally here with its sweet Sakura flowers, which remind me of my trip to Japan.” — Designed by Laurence Vagner from France.

April Fox

Designed by MasterBundles from the United States.

Fairytale

“A tribute to Hans Christian Andersen. Happy Birthday!” — Designed by Roxi Nastase from Romania.

Coffee Morning

Designed by Ricardo Gimenes from Spain.

The Loneliest House In The World

“March 26 was Solitude Day. To celebrate it, here is a picture of the loneliest house in the world. It is a real house; I found it on YouTube.” — Designed by Vlad Gerasimov from Georgia.

The Perpetual Circle

“Inspired by the Black Forest, which is beginning right behind our office windows, so we can watch the perpetual circle of nature when we take a look outside.” — Designed by Nils Kunath from Germany.

Ready For April

“It is very common that it rains in April. This year, I am not sure… But whatever… we are just prepared!” — Designed by Verónica Valenzuela from Spain.

Happy Easter

Designed by Tazi Design from Australia.

In The River

“Spring is here! Crocodiles seek out the heat and stay in the river.” — Designed by Veronica Valenzuela from Spain.

Springtime Sage

“Spring and fresh herbs always feel like they complement each other. Keeping it light and fresh with this wallpaper welcomes a new season!” — Designed by Susan Chiang from the United States.

Citrus Passion

Designed by Nathalie Ouederni from France.

Walking To The Wizard

“We walked to Oz with our friends. The road is long, but we follow the yellow bricks. Are you coming with us?” — Designed by Veronica Valenzuela from Spain.

Hello!

Designed by Rachel from the United States.

Oceanic Wonders

“Celebrate National Dolphin Day on April 14th by acknowledging the captivating beauty and importance of dolphins in our oceans!” — Designed by PopArt Studio from Serbia.

Playful Alien

“Everything would be more fun if a little alien had the controllers.” — Designed by Maria Keller from Mexico.

Good Day

“Some pretty flowers and spring time always make for a good day.” — Designed by Amalia Van Bloom from the United States.

April Showers

Designed by Ricardo Gimenes from Spain.

Fusion

Designed by Rio Creativo from Poland.

Do Doodling

Designed by Design Studio from India.

Ipoh Hor Fun

“Missing my hometown’s delicious ‘Kai See Hor Fun’ (in Cantonese) that literally translates to ‘Shredded Chicken Flat Rice Noodles’. It is served in a clear chicken and prawn soup with chicken shreds, prawns, spring onions, and noodles.” — Designed by Lew Su Ann from Brunei.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[How To Argue Against AI-First Research]]> https://smashingmagazine.com/2025/03/how-to-argue-against-ai-first-research/ https://smashingmagazine.com/2025/03/how-to-argue-against-ai-first-research/ Fri, 28 Mar 2025 09:00:00 GMT With AI upon us, companies have recently been turning their attention to “synthetic” user testing — AI-driven research that replaces UX research. There, questions are answered by AI-generated “customers,” human tasks “performed” by AI agents.

However, it’s not just desk research or discovery that AI is used for; it’s actual usability testing with “AI personas” that mimic the behavior of actual customers within the actual product. It’s like UX research, just… well, without the users.

If this sounds worrying, confusing, and outlandish, it is — but this doesn’t stop companies from adopting AI “research” to drive business decisions. Unsurprisingly, though, the undertaking can be dangerous, risky, and expensive, and it usually diminishes user value.

This article is part of our ongoing series on UX. You can find more details on design patterns and UX strategy in Smart Interface Design Patterns 🍣 — with live UX training coming up soon. Free preview.

Fast, Cheap, Easy… And Imaginary

Erika Hall famously noted that “design is only as ‘human-centered’ as the business model allows.” If a company is heavily driven by hunches, assumptions, and strong opinions, there will be little to no interest in properly-done UX research in the first place.

But unlike UX research, AI research (conveniently called synthetic testing) is fast, cheap, and easy to re-run. It doesn’t raise uncomfortable questions, and it doesn’t flag wrong assumptions. It doesn’t require user recruitment, much time, or long-winded debates.

And: it can manage thousands of AI personas at once. By studying AI-generated output, we can discover common journeys, navigation patterns, and common expectations. We can anticipate how people behave and what they would do.

Well, that’s the big promise. And that’s where we start running into big problems.

LLMs Are People Pleasers

Good UX research has roots in what actually happened, not what might have happened or what might happen in the future.

By nature, LLMs are trained to provide the most “plausible” or most likely output based on patterns captured in their training data. These patterns, however, emerge from expected behaviors of statistically “average” profiles extracted from content on the web. But these people don’t exist; they never have.

By default, user segments are not scoped and not curated. They don’t represent the customer base of any product. So to be useful, we must eloquently prompt AI by explaining who users are, what they do, and how they behave. Otherwise, the output won’t match user needs and won’t apply to our users.

When “producing” user insights, LLMs can’t generate unexpected things beyond what we’re already asking about.

In comparison, researchers are only able to define what’s relevant as the process unfolds. In actual user testing, insights can help shift priorities or radically reimagine the problem we’re trying to solve, as well as potential business outcomes.

Real insights come from unexpected behavior, from reading behavioral clues and emotions, from observing a person doing the opposite of what they said. We can’t replicate it with LLMs.

AI User Research Isn’t “Better Than Nothing”

Pavel Samsonov argues that things that merely sound like something customers might say are worthless. But things that customers actually have said, done, or experienced carry inherent value (although they could be exaggerated). We just need to interpret them correctly.

AI user research isn’t “better than nothing” or “more effective.” It creates an illusion of customer experiences that never happened and are at best good guesses but at worst misleading and non-applicable. Relying on AI-generated “insights” alone isn’t much different than reading tea leaves.

The Cost Of Mechanical Decisions

We often hear about the breakthrough of automation and knowledge generation with AI. Yet we often forget that automation often comes at a cost: the cost of mechanical decisions that are typically indiscriminate, favor uniformity, and erode quality.

As Maria Rosala and Kate Moran write, the problem with AI research is that it most certainly will be misrepresentative, and without real research, you won’t catch and correct those inaccuracies. Making decisions without talking to real customers is dangerous, harmful, and expensive.

Beyond that, synthetic testing assumes that people fit in well-defined boxes, which is rarely true. Human behavior is shaped by our experiences, situations, habits that can’t be replicated by text generation alone. AI strengthens biases, supports hunches, and amplifies stereotypes.

Triangulate Insights Instead Of Verifying Them

Of course AI can provide useful starting points to explore early in the process. But inherently it also invites false impressions and unverified conclusions — presented with an incredible level of confidence and certainty.

Starting with human research conducted with real customers using a real product is just much more reliable. After doing so, we can still apply AI to see if we perhaps missed something critical in user interviews. AI can enhance but not replace UX research.

Also, when we do use AI for desk research, it can be tempting to try to “validate” AI “insights” with actual user testing. However, once we plant a seed of insight in our head, it’s easy to recognize its signs everywhere — even if it really isn’t there.

Instead, we study actual customers, then triangulate data: track clusters or most heavily trafficked parts of the product. It might be that analytics and AI desk research confirm your hypothesis. That would give you a much stronger standing to move forward in the process.

Wrapping Up

I might sound like a broken record, but I keep wondering why we feel the urgency to replace UX work with automated AI tools. Good design requires a good amount of critical thinking, observation, and planning.

To me personally, cleaning up after AI-generated output takes way more time than doing the actual work. There is an incredible value in talking to people who actually use your product.

I would always choose one day with a real customer instead of one hour with 1,000 synthetic users pretending to be humans.

Useful Resources

New: How To Measure UX And Design Impact

Meet Measure UX & Design Impact (8h), a new practical guide for designers and UX leads to measure and show your UX impact on business. Use the code 🎟 IMPACT to save 20% off today. Jump to the details.

Video + UX Training

$ 495.00 $ 799.00 Get Video + UX Training

25 video lessons (8h) + Live UX Training.
100 days money-back-guarantee.

Video only

$ 250.00 $ 395.00
Get the video course

25 video lessons (8h). Updated yearly.
Also available as a UX Bundle with 2 video courses.

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[Adaptive Video Streaming With Dash.js In React]]> https://smashingmagazine.com/2025/03/adaptive-video-streaming-dashjs-react/ https://smashingmagazine.com/2025/03/adaptive-video-streaming-dashjs-react/ Thu, 27 Mar 2025 13:00:00 GMT I was recently tasked with creating video reels that needed to be played smoothly under a slow network or on low-end devices. I started with the native HTML5 <video> tag but quickly hit a wall — it just doesn’t cut it when connections are slow or devices are underpowered.

After some research, I found that adaptive bitrate streaming was the solution I needed. But here’s the frustrating part: finding a comprehensive, beginner-friendly guide was so difficult. The resources on MDN and other websites were helpful but lacked the end-to-end tutorial I was looking for.

That’s why I’m writing this article: to provide you with the step-by-step guide I wish I had found. I’ll bridge the gap between writing FFmpeg scripts, encoding video files, and implementing the DASH-compatible video player (Dash.js) with code examples you can follow.

Going Beyond The Native HTML5 <video> Tag

You might be wondering why you can’t simply rely on the HTML <video> element. There’s a good reason for that. Let’s compare the difference between a native <video> element and adaptive video streaming in browsers.

Progressive Download

With progressive downloading, your browser downloads the video file linearly from the server over HTTP and starts playback once it has buffered enough data. This is the default behavior of the <video> element.

<video src="rabbit320.mp4" />

When you play the video, check your browser’s network tab, and you’ll see multiple requests with the 206 Partial Content status code.

It uses HTTP 206 Range Requests to fetch the video file in chunks. The server sends specific byte ranges of the video to your browser. When you seek, the browser will make more range requests asking for new byte ranges (e.g., “Give me bytes 1,000,000–2,000,000”).

In other words, it doesn’t fetch the entire file all at once. Instead, it delivers partial byte ranges from the single MP4 video file on demand. This is still considered a progressive download because only a single file is fetched over HTTP — there is no bandwidth or quality adaptation.
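
You can reproduce such a range request yourself from the browser console (the URL is a placeholder):

// Ask the server for a specific byte range, as the browser does internally
const response = await fetch('https://example.com/rabbit320.mp4', {
  headers: { Range: 'bytes=1000000-2000000' },
});

console.log(response.status); // 206 if the server honors range requests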

If the server or browser doesn’t support range requests, the entire video file will be downloaded in a single request, returning a 200 OK status code. In that case, the video can only begin playing once the entire file has finished downloading.

The problems? If you’re on a slow connection trying to watch high-resolution video, you’ll be waiting a long time before playback starts.

Adaptive Bitrate Streaming

Instead of serving one single video file, adaptive bitrate (ABR) streaming splits the video into multiple segments at different bitrates and resolutions. During playback, the ABR algorithm will automatically select the highest quality segment that can be downloaded in time for smooth playback based on your network connectivity, bandwidth, and other device capabilities. It continues adjusting throughout to adapt to changing conditions.

This magic happens through two key browser technologies:

  • Media Source Extension (MSE)
    It allows passing a MediaSource object to the src attribute in <video>, enabling sending multiple SourceBuffer objects that represent video segments.
  • Media Capabilities API
    It provides information on your device’s video decoding and encoding abilities, enabling ABR to make informed decisions about which resolution to deliver.

Together, they enable the core functionality of ABR, serving video chunks optimized for your specific device limitations in real time.
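
Here is a minimal feature-detection sketch using both APIs; the codec string and rendition values are examples matching the VP9 WebM files we will generate later:

// Check that MSE is available and can handle VP9 WebM segments
if ('MediaSource' in window && MediaSource.isTypeSupported('video/webm; codecs="vp9"')) {
  // Safe to set up adaptive playback with MSE
}

// Ask the Media Capabilities API about decoding a candidate rendition
navigator.mediaCapabilities
  .decodingInfo({
    type: 'media-source',
    video: {
      contentType: 'video/webm; codecs="vp9"',
      width: 576,
      height: 1024,
      bitrate: 1500000,
      framerate: 30,
    },
  })
  .then((result) => {
    console.log(result.supported, result.smooth, result.powerEfficient);
  });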

Streaming Protocols: MPEG-DASH Vs. HLS

As mentioned above, to stream media adaptively, a video is split into chunks at different quality levels across various time points. We need to facilitate the process of switching between these segments adaptively in real time. To achieve this, ABR streaming relies on specific protocols. The two most common ABR protocols are:

  • MPEG-DASH,
  • HTTP Live Streaming (HLS).

Both of these protocols utilize HTTP to send video files. Hence, they are compatible with HTTP web servers.

This article focuses on MPEG-DASH. However, it’s worth noting that DASH isn’t supported by Apple devices or browsers, as mentioned in Mux’s article.

MPEG-DASH

MPEG-DASH enables adaptive streaming through:

  • A Media Presentation Description (MPD) file
    This XML manifest file contains information on how to select and manage streams based on adaptive rules.
  • Segmented Media Files
    Video and audio files are divided into segments at different resolutions and durations using MPEG-DASH-compliant codecs and formats.

On the client side, a DASH-compliant video player reads the MPD file and continuously monitors network bandwidth. Based on available bandwidth, the player selects the appropriate bitrate and requests the corresponding video chunk. This process repeats throughout playback, ensuring smooth, optimal quality.

Now that you understand the fundamentals, let’s build our adaptive video player!

Steps To Build an Adaptive Bitrate Streaming Video Player

Here’s the plan:

  1. Transcode the MP4 video into audio and video renditions at different resolutions and bitrates with FFmpeg.
  2. Generate an MPD file with FFmpeg.
  3. Serve the output files from the server.
  4. Build the DASH-compatible video player to play the video.

Install FFmpeg

For macOS users, install FFmpeg using Brew by running the following command in your terminal:

brew install ffmpeg

For other operating systems, please refer to FFmpeg’s documentation.

Generate Audio Rendition

Next, run the following script to extract the audio track and encode it in WebM format for DASH compatibility:

ffmpeg -i "input_video.mp4" -vn -acodec libvorbis -ab 128k "audio.webm"
  • -i "input_video.mp4": Specifies the input video file.
  • -vn: Disables the video stream (audio-only output).
  • -acodec libvorbis: Uses the libvorbis codec to encode audio.
  • -ab 128k: Sets the audio bitrate to 128 kbps.
  • "audio.webm": Specifies the output audio file in WebM format.

Generate Video Renditions

Run this script to create three video renditions with varying resolutions and bitrates. The largest rendition should match the resolution of the input video. For example, if the input video is 576×1024 at 30 frames per second (fps), the script generates renditions optimized for vertical video playback.

ffmpeg -i "input_video.mp4" -c:v libvpx-vp9 -keyint_min 150 -g 150 \
-tile-columns 4 -frame-parallel 1 -f webm \
-an -vf scale=576:1024 -b:v 1500k "input_video_576x1024_1500k.webm" \
-an -vf scale=480:854 -b:v 1000k "input_video_480x854_1000k.webm" \
-an -vf scale=360:640 -b:v 750k "input_video_360x640_750k.webm"
  • -c:v libvpx-vp9: Uses the libvpx-vp9 as the VP9 video encoder for WebM.
  • -keyint_min 150 and -g 150: Set a 150-frame keyframe interval (approximately every 5 seconds at 30 fps). This allows bitrate switching every 5 seconds.
  • -tile-columns 4 and -frame-parallel 1: Optimize encoding performance through parallel processing.
  • -f webm: Specifies the output format as WebM.

In each rendition:

  • -an: Excludes audio (video-only output).
  • -vf scale=576:1024: Scales the video to a resolution of 576x1024 pixels.
  • -b:v 1500k: Sets the video bitrate to 1500 kbps.

WebM is chosen as the output format because WebM files are smaller in size and optimized, yet widely compatible with most web browsers.

Generate MPD Manifest File

Combine the video renditions and audio track into a DASH-compliant MPD manifest file by running the following script:

ffmpeg \
  -f webm_dash_manifest -i "input_video_576x1024_1500k.webm" \
  -f webm_dash_manifest -i "input_video_480x854_1000k.webm" \
  -f webm_dash_manifest -i "input_video_360x640_750k.webm" \
  -f webm_dash_manifest -i "audio.webm" \
  -c copy \
  -map 0 -map 1 -map 2 -map 3 \
  -f webm_dash_manifest \
  -adaptation_sets "id=0,streams=0,1,2 id=1,streams=3" \
  "input_video_manifest.mpd"
  • -f webm_dash_manifest -i "…": Specifies the inputs so that the DASH video player will switch between them dynamically based on network conditions.
  • -map 0 -map 1 -map 2 -map 3: Includes all video (0, 1, 2) and audio (3) in the final manifest.
  • -adaptation_sets: Groups streams into adaptation sets:
    • id=0,streams=0,1,2: Groups the video renditions into a single adaptation set.
    • id=1,streams=3: Assigns the audio track to a separate adaptation set.

The resulting MPD file (input_video_manifest.mpd) describes the streams and enables adaptive bitrate streaming in MPEG-DASH.

<?xml version="1.0" encoding="UTF-8"?>
<MPD
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns="urn:mpeg:DASH:schema:MPD:2011"
  xsi:schemaLocation="urn:mpeg:DASH:schema:MPD:2011"
  type="static"
  mediaPresentationDuration="PT81.166S"
  minBufferTime="PT1S"
  profiles="urn:mpeg:dash:profile:webm-on-demand:2012">

  <Period id="0" start="PT0S" duration="PT81.166S">
    <AdaptationSet
      id="0"
      mimeType="video/webm"
      codecs="vp9"
      lang="eng"
      bitstreamSwitching="true"
      subsegmentAlignment="false"
      subsegmentStartsWithSAP="1">

      <Representation id="0" bandwidth="1647920" width="576" height="1024">
        <BaseURL>input_video_576x1024_1500k.webm</BaseURL>
        <SegmentBase indexRange="16931581-16931910">
          <Initialization range="0-645" />
        </SegmentBase>
      </Representation>

      <Representation id="1" bandwidth="1126977" width="480" height="854">
        <BaseURL>input_video_480x854_1000k.webm</BaseURL>
        <SegmentBase indexRange="11583599-11583986">
          <Initialization range="0-645" />
        </SegmentBase>
      </Representation>

      <Representation id="2" bandwidth="843267" width="360" height="640">
        <BaseURL>input_video_360x640_750k.webm</BaseURL>
        <SegmentBase indexRange="8668326-8668713">
          <Initialization range="0-645" />
        </SegmentBase>
      </Representation>

    </AdaptationSet>

    <AdaptationSet
      id="1"
      mimeType="audio/webm"
      codecs="vorbis"
      lang="eng"
      audioSamplingRate="44100"
      bitstreamSwitching="true"
      subsegmentAlignment="true"
      subsegmentStartsWithSAP="1">

      <Representation id="3" bandwidth="89219">
        <BaseURL>audio.webm</BaseURL>
        <SegmentBase indexRange="921727-922055">
          <Initialization range="0-4889" />
        </SegmentBase>
      </Representation>

    </AdaptationSet>
  </Period>
</MPD>

After completing these steps, you’ll have:

  1. Three video renditions (576x1024, 480x854, 360x640),
  2. One audio track, and
  3. An MPD manifest file.
input_video.mp4
audio.webm
input_video_576x1024_1500k.webm
input_video_480x854_1000k.webm
input_video_360x640_750k.webm
input_video_manifest.mpd

The original video input_video.mp4 should also be kept to serve as a fallback video source later.

Serve The Output Files

These output files can now be uploaded to cloud storage (e.g., AWS S3 or Cloudflare R2) for playback. While they can be served directly from a local folder, I highly recommend storing them in cloud storage and leveraging a CDN to cache the assets for better performance. Both AWS and Cloudflare support HTTP range requests out of the box.

Building The DASH-Compatible Video Player In React

There’s nothing like a real-world example to help understand how everything works. There are different ways we can implement a DASH-compatible video player, but I’ll focus on an approach using React.

First, install the Dash.js npm package by running:

npm i dashjs

Next, create a component called <DashVideoPlayer /> and initialize the Dash MediaPlayer instance by pointing it to the MPD file when the component mounts.

The ref callback function runs upon the component mounting, and within the callback function, playerRef will refer to the actual Dash MediaPlayer instance and be bound with event listeners. We also include the original MP4 URL in the <source> element as a fallback if the browser doesn’t support MPEG-DASH.

If you’re using Next.js app router, remember to add the ‘use client’ directive to enable client-side hydration, as the video player is only initialized on the client side.

Here is the full example:

import dashjs from 'dashjs'
import { useCallback, useRef } from 'react'

export const DashVideoPlayer = () => {
  const playerRef = useRef()

  const callbackRef = useCallback((node) => {
    if (node !== null) {
      playerRef.current = dashjs.MediaPlayer().create()
      playerRef.current.initialize(node, "https://example.com/uri/to/input_video_manifest.mpd", false)

      playerRef.current.on('canPlay', () => {
        // upon video is playable
      })

      playerRef.current.on('error', (e) => {
        // handle error
      })

      playerRef.current.on('playbackStarted', () => {
        // handle playback started
      })

      playerRef.current.on('playbackPaused', () => {
        // handle playback paused
      })

      playerRef.current.on('playbackWaiting', () => {
        // handle playback buffering
      })
    }
  }, [])

  return (
    <video ref={callbackRef} width={310} height={548} controls>
      <source src="https://example.com/uri/to/input_video.mp4" type="video/mp4" />
      Your browser does not support the video tag.
    </video>
  )
}

Result

Observe the changes in the video file when the network connectivity is adjusted from Fast 4G to 3G using Chrome DevTools. It switches from 480p to 360p, showing how the experience is optimized for more or less available bandwidth.
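
If you would rather observe these switches programmatically than through DevTools, dash.js emits quality-change events you can subscribe to. Here is a small sketch; the payload field names may vary slightly between dash.js versions, so treat them as an assumption to verify against your installed version.

// Log every rendition switch the player renders
playerRef.current.on(dashjs.MediaPlayer.events.QUALITY_CHANGE_RENDERED, (e) => {
  // e.mediaType is 'video' or 'audio'; e.newQuality is the rendition index
  console.log(`Switched ${e.mediaType} to quality index ${e.newQuality}`);
});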

Conclusion

That’s it! We just implemented a working DASH-compatible video player in React to establish a video with adaptive bitrate streaming. Again, the benefits of this are rooted in performance. When we adopt ABR streaming, we’re requesting the video in smaller chunks, allowing for more immediate playback than we’d get if we needed to fully download the video file first. And we’ve done it in a way that supports multiple versions of the same video, allowing us to serve the best format for the user’s device.

References

]]>
hello@smashingmagazine.com (Teng Wei Herr)
<![CDATA[Previewing Content Changes In Your Work With document.designMode]]> https://smashingmagazine.com/2025/03/previewing-content-changes-work-documentdesignmode/ https://smashingmagazine.com/2025/03/previewing-content-changes-work-documentdesignmode/ Fri, 21 Mar 2025 08:00:00 GMT So, you just deployed a change to your website. Congrats! Everything went according to plan, but now that you look at your work in production, you start questioning your change. Perhaps that change was as simple as a new heading and doesn’t seem to fit the space. Maybe you added an image, but it just doesn’t feel right in that specific context.

What do you do? Do you start deploying more changes? It’s not like you need to crack open Illustrator or Figma to mock up a small change like that, but previewing your changes before deploying them would still be helpful.

Enter document.designMode. It’s not new. In fact, I just recently came across it for the first time and had one of those “Wait, this exists?” moments because it’s a tool we’ve had forever, even in Internet Explorer 6. But for some reason, I’m only now hearing about it, and it turns out that many of my colleagues are also hearing about it for the first time.

What exactly is document.designMode? Perhaps a little video demonstration can help show how it allows you to make direct edits to a page.

At its simplest, document.designMode makes webpages editable, similar to a text editor. I’d say it’s like having an edit mode for the web — one can click anywhere on a webpage to modify existing text, move stuff around, and even delete elements. It’s like having Apple’s “Distraction Control” feature at your beck and call.

I think this is a useful tool for developers, designers, clients, and regular users alike.

You might be wondering if this is just like contentEditable because, at a glance, they both look similar. But no, the two serve different purposes. contentEditable is more focused on making a specific element editable, while document.designMode makes the whole page editable.
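
The difference in scope is easy to see side by side (the #bio selector is a hypothetical element on your page):

// Makes the entire document editable
document.designMode = 'on';

// Makes only this one element editable
document.querySelector('#bio').contentEditable = 'true';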

How To Enable document.designMode In DevTools

Enabling document.designMode can be done in the browser’s developer tools:

  1. Right-click anywhere on a webpage and click Inspect.
  2. Click the Console tab.
  3. Type document.designMode = "on" and press Enter.

To turn it off, refresh the page. That’s it.

Another method is to create a bookmark that activates the mode when clicked:

  1. Create a new bookmark in your browser.
  2. You can name it whatever, e.g., “EDIT_MODE”.
  3. Input this code in the URL field:
javascript:(function(){document.designMode = document.designMode === 'on' ? 'off' : 'on';})();

And now you have a switch that toggles document.designMode on and off.

Use Cases

There are many interesting, creative, and useful ways to use this tool.

Basic Content Editing

I dare say this is the core purpose of document.designMode, which is essentially editing any text element of a webpage for whatever reason. It could be the headings, paragraphs, or even bullet points. Whatever the case, your browser effectively becomes a “What You See Is What You Get” (WYSIWYG) editor, where you can make and preview changes on the spot.

Landing Page A/B Testing

Let’s say we have a product website with an existing copy, but then you check out your competitors, and their copy looks more appealing. Naturally, you’d want to test it out. Instead of editing on the back end or taking notes for later, you can use document.designMode to immediately see how that copy variation would fit into the landing page layout and then easily compare and contrast the two versions.

This could also be useful for copywriters or solo developers.

SEO Title And Meta Description

Everyone wants their website to rank at the top of search results because that means more traffic. However, as broad as SEO is as a practice, the <title> tag and <meta> description is a website’s first impression in search results, both for visitors and search engines, as they can make or break the click-through rate.

The question that arises is: how do you know if certain text gets cut off in search results? I think document.designMode can help you preview that before pushing anything live.

With this tool, I think it’d be a lot easier to see how different title lengths look when truncated, whether the keywords are instantly visible, and how compelling it’d be compared to other competitors on the same search result.

Developer Workflows

To be completely honest, developers probably won’t want to use document.designMode for actual development work. However, it can still be handy for breaking stuff on a website, moving elements around, repositioning images, deleting UI elements, and undoing what was deleted, all in real time.

If you’re skeptical about the position of an element or feel a button might do better at the top than at the bottom, document.designMode sure could help. It beats rearranging elements in the codebase just to determine whether an element positioned differently would look good. But again, most of the time, we’re developing in a local environment where these things can be done just as effectively, so your mileage may vary as far as how useful you find document.designMode in your development work.

Client And Team Collaboration

It is a no-brainer that some clients almost always have last-minute change requests — stuff like “Can we remove this button?” or “Let’s edit the pricing features in the free tier.”

To the client, these are just little tweaks, but to you, it could be a hassle to start up your development environment to make those changes. I believe document.designMode can assist in such cases by making those changes in seconds without touching production and sharing screenshots with the client.

It could also become useful in team meetings when discussing UI changes. Seeing changes in real-time through screen sharing can help facilitate discussion and lead to quicker conclusions.

Live DOM Tutorials

For beginners learning web development, I feel like document.designMode can help provide a first look at how it feels to manipulate a webpage and immediately see the results — sort of like a pre-web development stage, even before touching a code editor.

As learners experiment with moving things around, an instructor can explain how each change works and affects the flow of the page.

Social Media Content Preview

We can use the same idea to preview social media posts before publishing them! For instance, document.designMode can help you gauge the effectiveness of different call-to-action phrases or visualize how ad copy would look when users stumble upon it while scrolling through the platform. This would work on any social media platform.

Memes

I didn’t think it’d be fair not to add this. It might seem out of place, but let’s be frank: creating memes is probably one of the first things that comes to mind when anyone discovers document.designMode.

You can create parody versions of social posts, tweak article headlines, change product prices, and manipulate YouTube views or Reddit comments, just to name a few of the ways you could meme things. Just remember: this shouldn’t be used to spread false information or cause actual harm. Please keep it respectful and ethical!

Conclusion

document.designMode = "on" is one of those delightful browser tricks that can be immediately useful when you discover it for the first time. It’s a raw and primitive tool, but you can’t deny its utility and purpose.

So, give it a try, show it to your colleagues, or even edit this article. You never know when it might be exactly what you need.

Further Reading

]]>
hello@smashingmagazine.com (Victor Ayomipo)
<![CDATA[Web Components Vs. Framework Components: What’s The Difference?]]> https://smashingmagazine.com/2025/03/web-components-vs-framework-components/ https://smashingmagazine.com/2025/03/web-components-vs-framework-components/ Mon, 17 Mar 2025 10:00:00 GMT It might surprise you that a distinction exists regarding the word “component,” especially in front-end development, where “component” is often used and associated with front-end frameworks and libraries. A component is a code that encapsulates a specific functionality and presentation. Components in front-end applications have a similar function: building reusable user interfaces. However, their implementations are different.

Web — or “framework-agnostic” — components are standard web technologies for building reusable, self-sustained HTML elements. They consist of Custom Elements, Shadow DOM, and HTML template elements. On the other hand, framework components are reusable UIs explicitly tailored to the framework in which they are created. Unlike Web Components, which can be used in any framework, framework components are useless outside their frameworks.

Some critics question the agnostic nature of Web Components and even go so far as to state that they are not real components because they do not conform to the agreed-upon nature of components. This article comprehensively compares web and framework components, examines the arguments regarding Web Components agnosticism, and considers the performance aspects of Web and framework components.

What Makes A Component?

Several criteria could be satisfied for a piece of code to be called a component, but only a few are essential:

  • Reusability,
  • Props and data handling,
  • Encapsulation.

Reusability is the primary purpose of a component, as it emphasizes the DRY (don’t repeat yourself) principle. A component should be designed to be reused in different parts of an application or across multiple applications. Also, a component should be able to accept data (in the form of props) from its parent components and optionally pass data back through callbacks or events. Components are regarded as self-contained units; therefore, they should encapsulate their logic, styles, and state.

If there’s one thing we are certain of, framework components capture these criteria well, but what about their counterparts, Web Components?

Understanding Web Components

Web Components are a set of web APIs that allow developers to create custom, reusable HTML tags that serve a specific function. Based on existing web standards, they permit developers to extend HTML with new elements, custom behaviour, and encapsulated styling.

Web Components are built based on three web specifications:

  • Custom Elements,
  • Shadow DOM,
  • HTML templates.

Each specification can exist independently, but when combined, they produce a web component.

Custom Element

The Custom Elements API makes provision for defining and using new types of DOM elements that can be reused.

// Define a Custom Element
class MyCustomElement extends HTMLElement {
  constructor() {
    super();
  }

  connectedCallback() {
    this.innerHTML = `
      <p>Hello from MyCustomElement!</p>
    `;
  }
}

// Register the Custom Element
customElements.define('my-custom-element', MyCustomElement);
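
Once registered, the new tag can be used in markup like any built-in element (the script path here is just a placeholder for wherever the definition lives):

<!-- Load the definition, then use the element anywhere on the page -->
<script src="my-custom-element.js"></script>
<my-custom-element></my-custom-element>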

Shadow DOM

The Shadow DOM has been around since before the concept of Web Components. Browsers have used a nonstandard version for years for default browser controls that are not regular DOM nodes. It is a part of the DOM that is less reachable than typical light DOM elements as far as JavaScript and CSS go, which makes its contents behave as more encapsulated, standalone elements.

// Create a Custom Element with Shadow DOM
class MyShadowElement extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' });
  }

  connectedCallback() {
    this.shadowRoot.innerHTML = `
      <style>
        p {
          color: green;
        }
      </style>
      <p>Content in Shadow DOM</p>
    `;
  }
}

// Register the Custom Element
customElements.define('my-shadow-element', MyShadowElement);

HTML Templates

The HTML Templates API enables developers to write markup templates that are parsed but not rendered when the page loads, and that can be instantiated at runtime with JavaScript. HTML templates define the structure of Custom Elements in Web Components.

<!-- Define the template; its content is parsed but not rendered -->
<template id="my-paragraph">
  <style>
    p {
      color: red;
    }
  </style>
  <p>Hello from an HTML template!</p>
</template>

<script>
  class MyParagraph extends HTMLElement {
    constructor() {
      super();
      this.attachShadow({ mode: 'open' });
    }

    connectedCallback() {
      // Clone the template's content at runtime and render it
      const template = document.getElementById('my-paragraph');
      this.shadowRoot.appendChild(template.content.cloneNode(true));
    }
  }

  // Register the Custom Element
  customElements.define('my-paragraph', MyParagraph);
</script>

Web Components are often described as framework-agnostic because they rely on native browser APIs rather than being tied to any specific JavaScript framework or library. This means that Web Components can be used in any web application, regardless of whether it is built with React, Angular, Vue, or even vanilla JavaScript. Due to their supposed framework-agnostic nature, they can be created and integrated into any modern front-end framework and still function with little to no modifications. But are they actually framework-agnostic?

The Reality Of Framework-Agnosticism In Web Components

Framework-agnosticism is a term describing self-sufficient software — an element in this case — that can be integrated into any framework with minimal or no modifications and still operate efficiently, as expected.

Web Components can be integrated into any framework, but not without changes that can range from minimal to complex, especially around styles and HTML arrangement. Another change Web Components might require during integration is additional configuration or polyfills for full browser support. This drawback is why some developers do not consider Web Components to be framework-agnostic. Nevertheless, beyond these configurations and edits, Web Components can easily fit into any front-end framework, including but not limited to React, Angular, and Vue.
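
As a quick illustration, here is how the earlier custom element might be used inside a React component. React renders unknown lowercase, dashed tags as custom elements, so once the defining script has run, the element just works; the import path is hypothetical.

// Using a Web Component inside React (sketch)
import './my-custom-element.js'; // side-effect import that registers the element

export function Profile() {
  return (
    <section>
      <h2>Profile</h2>
      <my-custom-element></my-custom-element>
    </section>
  );
}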

Framework Components: Strengths And Limitations

Framework components are framework-specific reusable bits of code. They are regarded as the building blocks of the framework on which they are built and possess several benefits over Web Components, including the following:

  • An established ecosystem and community support,
  • Developer-friendly integrations and tools,
  • Comprehensive documentation and resources,
  • Core functionality,
  • Tested code,
  • Fast development,
  • Cross-browser support, and
  • Performance optimizations.

Examples of commonly employed front-end framework elements include React components, Vue components, and Angular directives. React supports a virtual DOM and one-way data binding, which allows for efficient updates and a component-based model. Vue is a lightweight framework with a flexible and easy-to-learn component system. Angular, unlike React, offers a two-way data binding component model with a TypeScript focus. Other front-end framework components include Svelte components, SolidJS components, and more.

Framework-level components are designed to operate within a specific JavaScript framework such as React, Vue, or Angular and, therefore, sit on top of the framework’s architecture, APIs, and conventions. For instance, React components use JSX and React’s state management, while Angular components leverage Angular’s template syntax and dependency injection. In terms of benefits, they offer an excellent developer experience and strong performance; the drawback is that they are not flexible or reusable outside their framework.

In addition, a state known as vendor lock-in is created when developers become so reliant on some framework or library that they are unable to switch to another. This is possible with framework components because they are developed to be operational only in the framework environment.

Comparative Analysis

Framework and Web Components have their respective strengths and weaknesses and are appropriate to different scenarios. However, a comparative analysis based on several criteria can help deduce the distinction between both.

Encapsulation And Styling: Scoped Vs. Isolated

Encapsulation is a trademark of components, but Web Components and framework components handle it differently. Web Components provide isolated encapsulation with the Shadow DOM, which creates a separate DOM tree that shields a component’s styles and structure from external manipulation. That ensures a Web Component will look and behave the same wherever it is used.

However, this isolation can make it difficult for developers who need to customize styles, as external CSS cannot cross the Shadow DOM without explicit workarounds (e.g., CSS custom properties). Most frameworks use scoped styling instead, which limits CSS to a component using class names, CSS-in-JS, or module systems. While this prevents styles from leaking outwards, it does not entirely prevent external styles from leaking in, leaving the possibility of conflicts. Libraries like Vue and Svelte support scoped CSS by default, while React often falls back to libraries like styled-components.
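
Here is a minimal sketch of that custom-property workaround: the component exposes a themable variable with a fallback, and a consumer can set it from outside because custom properties inherit across the shadow boundary. The element and property names are made up for the example.

// A Custom Element that exposes a themable custom property
class FancyButton extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' });
    this.shadowRoot.innerHTML = `
      <style>
        button {
          /* Falls back to green if the consumer sets nothing */
          background: var(--fancy-button-bg, green);
        }
      </style>
      <button><slot></slot></button>
    `;
  }
}

customElements.define('fancy-button', FancyButton);

// In a page stylesheet, a consumer can now theme it from the outside:
// fancy-button { --fancy-button-bg: rebeccapurple; }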

Reusability And Interoperability

Web Components are better for reusable components that are useful for multiple frameworks or vanilla JavaScript applications. In addition, they are useful when the encapsulation and isolation of styles and behavior must be strict or when you want to leverage native browser APIs without too much reliance on other libraries.

Framework components are, however, helpful when you need to leverage some of the features and optimisations provided by the framework (e.g., React reconciliation algorithm, Angular change detection) or take advantage of the mature ecosystem and tools available. You can also use framework components if your team is already familiar with the framework and conventions since it will make your development process easier.

Performance Considerations

Another critical factor in determining web vs. framework components is performance. While both can be extremely performant, there are instances where one will be quicker than the other.

For Web Components, native browser implementation can lead to optimized rendering and reduced overhead, but older browsers may require polyfills, which add to the initial load. While React and Angular provide specific optimizations (e.g., virtual DOM, change detection) that can improve performance for large, dynamic applications, they add overhead due to the framework runtime and additional libraries.

Developer Experience

Developer experience is another fundamental consideration regarding Web Components versus framework components. Ease of use and learning curve can play a large role in determining development time and manageability. Availability of tooling and community support can influence developer experience, too.

Web Components use native browser APIs and are therefore comfortable for developers who know HTML, CSS, and JavaScript, but they carry a steeper learning curve due to additional concepts like the Shadow DOM, custom elements, and templates. Web Components also have a smaller community and less documentation compared to popular frameworks like React, Angular, and Vue.

Side-by-Side Comparison

Web Components Benefits

  • Native browser support can lead to efficient rendering and reduced overhead.
  • Smaller bundle sizes and native browser support can lead to faster load times.
  • Leverage native browser APIs, making them accessible to developers familiar with HTML, CSS, and JavaScript.
  • Native browser support means fewer dependencies and the potential for better performance.

Framework Components Benefits

  • Frameworks like React and Angular provide specific optimizations (e.g., virtual DOM, change detection) that can improve performance for large, dynamic applications.
  • Frameworks often provide tools for optimizing bundle sizes and lazy loading components.
  • Extensive documentation, which makes it easier for developers to get started.
  • Rich ecosystem with extensive tooling, libraries, and community support.

Web Components Drawbacks

  • Older browsers may require polyfills, which can add to the initial load time.
  • Steeper learning curve due to additional concepts like Shadow DOM and Custom Elements.
  • Smaller ecosystem and fewer community resources compared to popular frameworks.

Framework Components Drawbacks

  • Framework-specific components can add overhead due to the framework’s runtime and additional libraries.
  • Requires familiarity with the framework’s conventions and APIs.
  • Tied to the framework, making it harder to switch to a different framework.

To summarize, the choice between Web Components and framework components depends on the specific needs of your project and team, including cross-framework reusability, performance, and developer experience.

Conclusion

Web Components are the new standard for framework-agnostic, interoperable, and reusable components. Although their underlying technologies still need refinement to match the polish of framework components, they have earned the title of “components.” Through a detailed comparative analysis, we’ve explored the strengths and weaknesses of Web Components and framework components, gaining insight into their differences. Along the way, we also uncovered useful workarounds for integrating Web Components into front-end frameworks for those interested in that approach.

]]>
hello@smashingmagazine.com (Gabriel Shoyombo)
<![CDATA[How To Prevent WordPress SQL Injection Attacks]]> https://smashingmagazine.com/2025/03/how-prevent-wordpress-sql-injection-attacks/ https://smashingmagazine.com/2025/03/how-prevent-wordpress-sql-injection-attacks/ Thu, 13 Mar 2025 08:00:00 GMT Did you know that your WordPress site could be a target for hackers right now? That’s right! Today, WordPress powers over 43% of all websites on the internet. That popularity makes WordPress sites a big target for hackers.

One of the most harmful ways they attack is through an SQL injection. An SQL injection can break your website, steal data, and destroy your content. Worse still, it can lock you out of your website! Sounds scary, right? But don’t worry: you can protect your site. That is what this article is about.

What Is SQL?

SQL stands for Structured Query Language. It is a way to talk to databases, which store and organize a lot of data, such as user details, posts, or comments on a website. SQL helps us ask the database for information or give it new data to store.

When writing an SQL query, you ask the database a question or give it a task. For example, if you want to see all users on your site, an SQL query can retrieve that list.

SQL is powerful and vital since all WordPress sites use databases to store content.

What Is An SQL Injection Attack?

WordPress SQL injection attacks try to gain access to your site’s database. An SQL injection (SQLi) lets hackers exploit a vulnerable SQL query to run queries of their own. The attack occurs when a hacker tricks a database into running harmful SQL commands.

Hackers can send these commands via input fields on your site, such as those in login forms or search bars. If the website does not check input carefully, a command can grant access to the database. Imagine a hacker typing an SQL command instead of typing a username. It may fool the database and show private data such as passwords and emails. The attacker could use it to change or delete database data.

Your database holds all your user-generated data and content. It stores pages, posts, links, comments, and users. For the “bad” guys, it is a goldmine of valuable data.

SQL injections are dangerous because they let hackers steal data or take control of a website, and such attacks can compromise a site very fast. A WordPress firewall helps prevent SQL injection attacks.

SQL Injections: Three Main Types

There are three main kinds of SQL injection attacks. Each type works differently, but they all try to fool the database. We’ll look at each one in turn.

In-Band SQLi

This is perhaps the most common type of attack. A hacker sends the command and gets the results using the same communication channel: they make a request and get the answer right away.

There are two types of In-band SQLi injection attacks:

  • Error-based SQLi,
  • Union-based SQLi.

With error-based SQLi, the hacker causes the database to give an error message. This message may reveal crucial data, such as database structure and settings.

What about union-based SQLi attacks? The hacker uses the SQL UNION statement to combine their request with a standard query. It can give them access to other data stored in the database.

Inferential SQLi

With inferential SQLi, the hacker does not see the results at once. Instead, they send queries that yield only “yes” or “no” answers. By observing how the site responds, hackers can reveal the database structure or its data.

They do that in two common ways:

  • Boolean-based SQLi,
  • Time-based SQLi.

Through Boolean-based SQLi, the hacker sends queries that can only be “true” or “false,” for example: is this user ID greater than 100? This allows hackers to gather more data about the site based on how it reacts.

In time-based SQLi, the hacker sends a query that makes the database take longer to reply if the answer is “yes.” The length of the delay tells them what they need to know.

Out-of-band SQLi

Out-of-band SQLi is a less common but equally dangerous type of attack. Hackers use various ways to get results. Usually, they connect the database to a server they control.

The hacker does not see the results all at once. However, they can get the data sent somewhere else via email or a network connection. This method applies when the site blocks ordinary SQL injection methods.

Why Preventing SQL Injection Is Crucial

SQL injections are a giant risk for websites. They can lead to various harms — stolen data, website damage, legal issues, loss of trust, and more.

Hackers can steal data like usernames, passwords, and emails. They may also cause damage by deleting or changing your data, which can wreck your site’s structure and make it unusable.

Is your user data stolen? You might face legal trouble if your site handles sensitive data. People may lose trust in you if they see that your site got hacked. As a result, your site’s reputation can suffer.

That is why it is so vital to prevent SQL injections before they occur.

11 Ways To Prevent WordPress SQL Injection Attacks

OK, so we know what SQL is and that WordPress relies on it. We also know that attackers take advantage of SQL vulnerabilities. I’ve collected 11 tips for keeping your WordPress site free of SQL injections. The tips limit your vulnerability and secure your site from SQL injection attacks.

1. Validate User Input

SQL injection attacks usually occur via forms or input fields on your site: a login form, a search box, a contact form, or a comment section. If a hacker enters malicious SQL commands into one of these fields, they may fool your site into running those commands, granting access to your database.

Hence, always sanitize and validate all input data on your site. Users should not be able to submit data that does not follow a specific format. The easiest way to enforce this is to use a plugin like Formidable Forms, an advanced builder for adding forms. That said, WordPress has many built-in functions to sanitize and validate input on your own, including sanitize_text_field(), sanitize_email(), and sanitize_url().

Sanitizing cleans up user input before it gets sent to your database: these functions strip out unwanted characters and ensure the data is safe to store.
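
As a rough sketch (the form field names here are hypothetical), sanitizing and then validating a submission might look like this:

// Sanitize first: strip out anything unexpected.
$name  = sanitize_text_field( $_POST['name'] ?? '' );
$email = sanitize_email( $_POST['email'] ?? '' );
$site  = sanitize_url( $_POST['website'] ?? '' );

// Then validate: reject the submission if the email is not valid.
if ( ! is_email( $email ) ) {
  wp_die( 'Please enter a valid email address.' );
}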

2. Avoid Dynamic SQL

Dynamic SQL allows you to create SQL statements on the fly at runtime. Compared to static SQL, it lets you build flexible, general-purpose queries adjusted to various conditions. The trade-off is that dynamic SQL is typically slower than static SQL because it demands runtime parsing.

Dynamic SQL can also be more vulnerable to SQL injection attacks. The vulnerability occurs when an attacker alters a query by injecting malicious SQL code, and the database runs it. As a result, the attacker can access data, corrupt it, or even hack your entire database.
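
To make the risk concrete, here is a deliberately insecure sketch (the user lookup itself is hypothetical) in which the raw request value becomes part of the SQL string:

global $wpdb;

// DON'T do this: the request value is concatenated into the query,
// so input like  ' OR '1'='1  changes the meaning of the SQL.
$results = $wpdb->get_results(
  "SELECT ID, user_email FROM {$wpdb->users} WHERE user_login = '" . $_GET['user'] . "'"
);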

How do you keep your WordPress site safe? Use prepared statements, stored procedures, or parameterized queries.
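
Here is the same hypothetical lookup rewritten as a parameterized query with WordPress’s $wpdb class:

global $wpdb;

// The user-supplied value is bound to a %s placeholder, so it can
// never change the structure of the query.
$login = sanitize_text_field( $_GET['user'] ?? '' );

$user = $wpdb->get_row(
  $wpdb->prepare(
    "SELECT ID, user_email FROM {$wpdb->users} WHERE user_login = %s",
    $login
  )
);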

3. Regularly Update WordPress Themes And Plugins

Keeping WordPress and all plugins updated is the first step in keeping your site safe. Hackers often look for old software versions with known security issues.

WordPress, themes, and plugins receive regular security updates that fix known issues. If you ignore these updates, you leave your site open to attacks.

To stay safe, set up automatic updates for minor WordPress versions. Check for theme and plugin updates often. Only use trusted plugins from the official WordPress source or well-known developers.

By updating often, you close many ways hackers could attack.

4. Add A WordPress Firewall

A firewall is one of the best ways to keep your WordPress website safe. It is a shield for your WordPress site and a security guard that checks all incoming traffic. The firewall decides who can enter your site and who gets blocked.

There are five main types of WordPress firewalls:

  • Plugin-based firewalls,
  • Web application firewalls,
  • Cloud-based firewalls,
  • DNS-level firewalls,
  • Application-level firewalls.

Plugin-based firewalls are installed on your WordPress site and work from within your website to block bad traffic. Web application firewalls filter, check, and block traffic to and from a web service, detecting and defending against the risky security flaws most common in web traffic. Cloud-based firewalls work from outside your site and block bad traffic before it even reaches your site. DNS-level firewalls route your site traffic through their cloud proxy servers, letting only real traffic reach your web server. Finally, application-level firewalls check the traffic once it reaches your server, before most WordPress scripts load.

Established security plugins like Sucuri and Wordfence can also act as firewalls.

5. Hide Your WordPress Version

Older WordPress versions display the WordPress version in the admin footer. Showing your WordPress version isn’t always a bad thing, but revealing it hands ammunition to hackers looking to exploit vulnerabilities in outdated WordPress versions.

Are you using an older WordPress version? You can still hide your WordPress version:

  • With a security plugin such as Sucuri or Wordfence to clear the version number or
  • By adding a little bit of code to your functions.php file.
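// Return an empty string so the generator tag carries no version number.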
function hide_wordpress_version() {
  return '';
}
add_filter('the_generator', 'hide_wordpress_version');

This code stops your WordPress version number from showing in the theme’s header.php file and RSS feeds. It adds a small but helpful layer of security, making it more difficult for hackers to detect which version you run.

6. Make Custom Database Error Notices

Attackers can learn how your database is set up from error notices. To stop that, create a custom database error notice for users to see. Hackers will find it harder to detect weak spots on your site when you hide error details, and your site stays much safer when you expose less data on the front end.

To do that, copy and paste the code into a new db-error.php file. Jeff Starr has a classic article on the topic from 2009 with an example:

<?php // Custom WordPress Database Error Page
  header('HTTP/1.1 503 Service Temporarily Unavailable');
  header('Status: 503 Service Temporarily Unavailable');
  header('Retry-After: 600'); // 10 minutes = 600 seconds

// If you want to send an email to yourself upon an error
// mail("your@email.com", "Database Error", "There is a problem with the database!", "From: Db Error Watching");
?>
<!DOCTYPE HTML>
<html>
<head>
  <title>Database Error</title>
  <style>
    body { padding: 50px; background: #04A9EA; color: #fff; font-size: 30px; }
    .box { display: flex; align-items: center; justify-content: center; }
  </style>
</head>
<body>
  <div class="box">
    <h1>Something went wrong</h1>
  </div>
</body>
</html>

Now save the file in the root of your /wp-content/ folder for it to take effect.

7. Set Access And Permission Limits For User Roles

Assign only the permissions each role needs to do its tasks. For example, Editors may not need access to the WordPress database or plugin settings. Improve site security by giving full dashboard access to the admin role only. Limiting feature access to fewer roles reduces the odds of an SQL injection attack.
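
As a hedged sketch, trimming a capability from the Editor role could look like this. Run it once (for example, on theme or plugin activation), since role changes are saved to the database:

$editor = get_role( 'editor' );

if ( $editor ) {
  // Editors rarely need to publish unfiltered HTML.
  $editor->remove_cap( 'unfiltered_html' );
}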

8. Enable Two-factor Authentication

A great way to protect your WordPress site is to apply two-factor authentication (2FA). Why? Because it adds an extra layer of security to your login page. Even if a hacker cracks your password, they still won’t be able to log in without the 2FA code.

Setting up 2FA on WordPress goes like this:

  1. Install a two-factor authentication plugin.
    Google Authenticator by miniOrange, Two-Factor, and WP 2FA by Melapress are good options.
  2. Pick your authentication method.
    The plugins often have three choices: SMS codes, authentication apps, or security keys.
  3. Link your account.
    Are you using Google Authenticator? Open the app and scan the QR code inside the plugin settings to connect it. If you use SMS, enter your phone number and get codes via text.
  4. Test it.
    Log out of WordPress and try to log in again. First, enter your username and password as always. Then complete the 2FA step by typing in the code you receive via SMS or email.
  5. Enable backup codes (optional).
    Some plugins let you generate backup codes. Save these in a safe spot in case you lose access to your phone or email.

9. Delete All Unneeded Database Functions

Delete tables you no longer use, and clear out junk and unapproved comments. A lean database gives hackers fewer opportunities to exploit sensitive data.

10. Monitor Your Site For Unusual Activity

Watch for unusual activity on your site. You can check for actions like many failed login attempts or strange traffic spikes. Security plugins such as Wordfence or Sucuri alert you when something seems odd. That helps to catch issues before they get worse.

11. Backup Your Site Regularly

Running regular backups is crucial. With a backup, you can quickly restore your site to its original state if it gets hacked. Do this before any significant update to your site, including theme and plugin updates.

Create a backup plan that suits your needs. For example, if you publish new content every day, it may be a good idea to back up your database and files daily.

Many security plugins offer automated backups. Of course, you can also use backup plugins like UpdraftPlus or Solid Security. You should store backup copies in various locations, such as Dropbox and Google Drive. It will give you peace of mind.

How To Remove SQL Injection From Your Site

Let’s say you are already under attack and dealing with an active SQL injection on your site. At that point, the preventative measures we’ve covered won’t help much. Here’s what you can do to fight back and defend your site:

  • Check your database for changes. Look for strange entries in user accounts, content, or plugin settings.
  • Erase evil code. Scan your site with a security plugin like Wordfence or Sucuri to find and erase harmful code.
  • Restore a clean backup. Is the damage vast? Restoring your site from an existing backup could be the best option.
  • Change all passwords. Alter your passwords for the WordPress admin, the database, and the hosting account.
  • Harden your site security. After cleaning your site, take the 11 steps we covered earlier to prevent future attacks.

Conclusion

Hackers love weak sites. They look for easy ways to break in, steal data, and cause harm. One of the tricks they often use is SQL injection. If they find a way in, they can steal private data, alter your content, or even take over your site. That’s bad news both for you and your visitors.

But here is the good news: you can stop them! It is possible to block these attacks before they happen by taking the correct steps. And you don’t need to be a tech expert.

Many people ignore website security until it’s too late. They think, “Why would a hacker target my site?” But hackers don’t attack only big sites. They attack any site with weak security. So even small blogs and new websites are in danger. Once a hacker gets in, they can cause a lot of damage. Fixing a hacked site takes time, effort, and money. But stopping an attack before it happens? That’s much easier.

Hackers don’t sit and wait, so why should you? Thousands of sites get attacked daily, so don’t let yours be the next one. Update your site, add a firewall, enable 2FA, and check your security settings. These small steps can help prevent giant issues in the future.

Your site needs protection against the bad guys. You have worked hard to build it, so never neglect to update and protect it. Do that, and your site will stay safe and sound.

]]>
hello@smashingmagazine.com (Anders Johansson)
<![CDATA[How To Build Confidence In Your UX Work]]> https://smashingmagazine.com/2025/03/how-to-build-confidence-in-your-ux-work/ https://smashingmagazine.com/2025/03/how-to-build-confidence-in-your-ux-work/ Tue, 11 Mar 2025 15:00:00 GMT When I start any UX project, typically, there is very little confidence in the successful outcome of my UX initiatives. In fact, there is quite a lot of reluctance and hesitation, especially from teams that have been burnt by empty promises and poor delivery in the past.

Good UX has a huge impact on business. But often, we need to build up confidence in our upcoming UX projects. For me, an effective way to do that is to address critical bottlenecks and uncover hidden deficiencies — the ones that affect the people I’ll be working with.

Let’s take a closer look at what this can look like.

This article is part of our ongoing series on UX. You can find more details on design patterns and UX strategy in Smart Interface Design Patterns 🍣 — with live UX training coming up soon. Free preview.

UX Doesn’t Disrupt, It Solves Problems

Bottlenecks are usually the most disruptive part of any company. Almost every team, every unit, and every department has one. It’s often well-known by employees as they complain about it, but it rarely finds its way to senior management as they are detached from daily operations.

The bottleneck can be the only senior developer on the team, a broken legacy tool, or a confusing flow that throws errors left and right — there’s always a bottleneck, and it’s usually the reason for long waiting times, delayed delivery, and cutting corners in all the wrong places.

We might not be able to fix the bottleneck. But for a smooth flow of work, we need to ensure that non-constraint resources don’t produce more than the constraint can handle. All processes and initiatives must be aligned to support and maximize the efficiency of the constraint.

So before doing any UX work, look out for things that slow down the organization. Show that it’s not UX work that disrupts work, but it’s internal disruptions that UX can help with. And once you’ve delivered even a tiny bit of value, you might be surprised how quickly people will want to see more of what you have in store for them.

The Work Is Never Just “The Work”

Meetings, reviews, experimentation, pitching, deployment, support, updates, fixes — unplanned work blocks other work from being completed. Exposing the root causes of unplanned work and finding critical bottlenecks that slow down delivery is not only the first step we need to take when we want to improve existing workflows, but it is also a good starting point for showing the value of UX.

To learn more about the points that create friction in people’s day-to-day work, set up 1:1s with the team and ask them what slows them down. Find a problem that affects everyone. Perhaps too much work in progress results in late delivery and low quality? Or lengthy meetings stealing precious time?

One frequently overlooked detail is that we can’t manage work that is invisible. That’s why it is so important that we visualize the work first. Once we know the bottleneck, we can suggest ways to improve it. It could be to introduce 20% idle times if the workload is too high, for example, or to make meetings slightly shorter to make room for other work.

The Theory Of Constraints

The idea that the work is never just “the work” is deeply connected to the Theory of Constraints developed by Dr. Eliyahu M. Goldratt. It shows that any improvement made anywhere other than at the bottleneck is an illusion.

Any improvement after the bottleneck is useless because it will always remain starved, waiting for work from the bottleneck. And any improvements made before the bottleneck result in more work piling up at the bottleneck.

Wait Time = Busy ÷ Idle

To improve flow, sometimes we need to freeze the work and bring focus to one single project. Just as important as throttling the release of work is managing the handoffs. The wait time for a given resource is the percentage of time that the resource is busy divided by the percentage of time it’s idle. If a resource is 50% utilized, the wait time is 50/50, or 1 unit.

If the resource is 90% utilized, the wait time is 90/10, or 9 times longer. And if it’s 99% of time utilized, it’s 99/1, so 99 times longer than if that resource is 50% utilized. The critical part is to make wait times visible so you know when your work spends days sitting in someone’s queue.

The exact times don’t matter, but if a resource is busy 99% of the time, the wait time will explode.

Avoid 100% Occupation

Our goal is to maximize flow: that means exploiting the constraint while creating idle time for non-constraints to optimize system performance.

One surprising finding for me was that any attempt to maximize the utilization of all resources — 100% occupation across all departments — can actually be counterproductive. As Goldratt noted, “An hour lost at a bottleneck is an hour out of the entire system. An hour saved at a non-bottleneck is worthless.”

Recommended Read: “The Phoenix Project”

I can only wholeheartedly recommend The Phoenix Project, an absolutely incredible book that goes into all the fine details of the Theory of Constraints described above.

It’s not a design book but a great book for designers who want to be more strategic about their work. It’s a delightful and very real read about the struggles of shipping (albeit on a more technical side).

Wrapping Up

People don’t like sudden changes and uncertainty, and UX work often disrupts their usual ways of working. Unsurprisingly, most people tend to block it by default. So before we introduce big changes, we need to get their support for our UX initiatives.

We need to build confidence and show them the value that UX work can have — for their day-to-day work. To achieve that, we can work together with them. Listening to the pain points they encounter in their workflows, to the things that slow them down.

Once we’ve uncovered internal disruptions, we can tackle these critical bottlenecks and suggest steps to make existing workflows more efficient. That’s the foundation to gaining their trust and showing them that UX work doesn’t disrupt but that it’s here to solve problems.

New: How To Measure UX And Design Impact

Meet Measure UX & Design Impact (8h), a practical guide for designers and UX leads to measure and show your UX impact on business. Watch the free preview or jump to the details.

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[How To Fix Largest Contentful Paint Issues With Subpart Analysis]]> https://smashingmagazine.com/2025/03/how-to-fix-largest-contentful-issues-with-subpart-analysis/ https://smashingmagazine.com/2025/03/how-to-fix-largest-contentful-issues-with-subpart-analysis/ Thu, 06 Mar 2025 10:00:00 GMT This article is a sponsored by DebugBear

The Largest Contentful Paint (LCP) in Core Web Vitals measures how quickly a website loads from a visitor’s perspective. It looks at how long after opening a page the largest content element becomes visible. If your website is loading slowly, that’s bad for user experience and can also cause your site to rank lower in Google.

When trying to fix LCP issues, it’s not always clear what to focus on. Is the server too slow? Are images too big? Is the content not being displayed? Google has been working to address that recently by introducing LCP subparts, which tell you where page load delays are coming from. They’ve also added this data to the Chrome UX Report, allowing you to see what causes delays for real visitors on your website!

Let’s take a look at what the LCP subparts are, what they mean for your website speed, and how you can measure them.

The Four LCP Subparts

LCP subparts split the Largest Contentful Paint metric into four different components:

  1. Time to First Byte (TTFB): How quickly the server responds to the document request.
  2. Resource Load Delay: Time spent before the LCP image starts to download.
  3. Resource Load Time: Time spent downloading the LCP image.
  4. Element Render Delay: Time before the LCP element is displayed.

The resource timings only apply if the largest page element is an image or background image. For text elements, the Load Delay and Load Time components are always zero.

How To Measure LCP Subparts

One way to measure how much each component contributes to the LCP score on your website is to use DebugBear’s website speed test. Expand the Largest Contentful Paint metric to see subparts and other details related to your LCP score.

Here, we can see that TTFB and image Load Duration together account for 78% of the overall LCP score. That tells us that these two components are the most impactful places to start optimizing.

What’s happening during each of these stages? A network request waterfall can help us understand what resources are loading through each stage.

The LCP Image Discovery view filters the waterfall visualization to just the resources that are relevant to displaying the Largest Contentful Paint image. In this case, each of the first three stages contains one request, and the final stage finishes quickly with no new resources loaded. But that depends on your specific website and won’t always be the case.

Time To First Byte

The first step to display the largest page element is fetching the document HTML. We recently published an article about how to improve the TTFB metric.

In this example, we can see that creating the server connection doesn’t take all that long. Most of the time is spent waiting for the server to generate the page HTML. So, to improve the TTFB, we need to speed up that process or cache the HTML so we can skip the HTML generation entirely.

Resource Load Delay

The “resource” we want to load is the LCP image. Ideally, we just have an <img> tag near the top of the HTML, and the browser finds it right away and starts loading it.

But sometimes, we get a Load Delay, as is the case here. Instead of loading the image directly, the page uses lazysize.js, an image lazy loading library that only loads the LCP image once it has detected that it will appear in the viewport.

Part of the Load Delay is caused by having to download that JavaScript library. But the browser also needs to complete the page layout and start rendering content before the library will know that the image is in the viewport. After finishing the request, there’s a CPU task (in orange) that leads up to the First Contentful Paint milestone, when the page starts rendering. Only then does the library trigger the LCP image request.

How do we optimize this? First of all, instead of using a lazy loading library, you can use the native loading="lazy" image attribute. That way, loading images no longer depends on first loading JavaScript code.

But more specifically, the LCP image should not be lazily loaded. That way, the browser can start loading it as soon as the HTML code is ready. According to Google, you should aim to eliminate resource load delay entirely.
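
As a minimal markup sketch (the file names are placeholders), the LCP image loads eagerly with a high fetch priority, while below-the-fold images keep the lazy attribute:

<!-- LCP hero image: never lazy-loaded, hinted as high priority. -->
<img src="hero.jpg" alt="Hero" width="1200" height="600" fetchpriority="high">

<!-- Below-the-fold images can keep native lazy loading. -->
<img src="gallery-1.jpg" alt="Gallery" width="600" height="400" loading="lazy">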

Resource Load Duration

The Load Duration subpart is probably the most straightforward: you need to download the LCP image before you can display it!

In this example, the image is loaded from the same domain as the HTML. That’s good because the browser doesn’t have to connect to a new server.

Other techniques that reduce the load duration include compressing images, serving them in modern formats like WebP or AVIF, and using a CDN to shorten network round trips.

Element Render Delay

The fourth and final LCP component, Render Delay, is often the most confusing. The resource has loaded, but for some reason, the browser isn’t ready to show it to the user yet!

Luckily, in the example we’ve been looking at so far, the LCP image appears quickly after it’s been loaded. One common reason for render delay is that the LCP element is not an image. In that case, the render delay is caused by render-blocking scripts and stylesheets. The text can only appear after these have loaded and the browser has completed the rendering process.

Another reason you might see render delay is when the website preloads the LCP image. Preloading is a good idea, as it practically eliminates any load delay and ensures the image is loaded early.
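
A preload hint is a one-liner in the document head (the image URL here is a placeholder):

<!-- Start fetching the LCP image as soon as the HTML arrives. -->
<link rel="preload" as="image" href="/images/hero.jpg" fetchpriority="high">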

However, if the image finishes downloading before the page is ready to render, you’ll see an increase in render delay on the page. And that’s fine! You’ve improved your website speed overall, but after optimizing your image, you’ve uncovered a new bottleneck to focus on.

LCP Subparts In Real User CrUX Data

Looking at the Largest Contentful Paint subparts in lab-based tests can provide a lot of insight into where you can optimize. But all too often, the LCP in the lab doesn’t match what’s happening for real users!

That’s why, in February 2025, Google started including subpart data in the CrUX data report. It’s not (yet?) included in PageSpeed Insights, but you can see those metrics in DebugBear’s “Web Vitals” tab.

One super useful bit of info here is the LCP resource type: it tells you how many visitors saw the LCP element as a text element or an image.

Even for the same page, different visitors will see slightly different content. For example, different elements are visible based on the device size, or some visitors will see a cookie banner while others see the actual page content.

To make the data easier to interpret, Google only reports subpart data for images.

If the LCP element is usually text on the page, then the subparts info won’t be very helpful, as it won’t apply to most of your visitors.

But breaking down text LCP is relatively easy: everything that’s not part of the TTFB is render delay.

Track Subparts On Your Website With Real User Monitoring

Lab data doesn’t always match what real users experience. CrUX data is superficial, only reported for high-traffic pages, and takes at least 4 weeks to fully update after a change has been rolled out.

That’s why a real-user monitoring tool like DebugBear comes in handy when fixing your LCP scores. You can track scores across all pages on your website over time and get dedicated dashboards for each LCP subpart.

You can also review specific visitor experiences, see what the LCP image was for them, inspect a request waterfall, and check LCP subpart timings. Sign up for a free trial.

Conclusion

Having more granular metric data available for the Largest Contentful Paint gives web developers a big leg up when making their website faster.

Including subparts in CrUX provides new insight into how real visitors experience your website and can tell if the optimizations you’re considering would really be impactful.

]]>
hello@smashingmagazine.com (Matt Zeunert)
<![CDATA[The Case For Minimal WordPress Setups: A Contrarian View On Theme Frameworks]]> https://smashingmagazine.com/2025/03/case-minimal-wordpress-setups-contrarian-view-theme-frameworks/ https://smashingmagazine.com/2025/03/case-minimal-wordpress-setups-contrarian-view-theme-frameworks/ Mon, 03 Mar 2025 08:00:00 GMT When it comes to custom WordPress development, theme frameworks like Sage and Genesis have become a go-to solution, particularly for many agencies that rely on frameworks as an efficient starting point for client projects. They promise modern standards, streamlined workflows, and maintainable codebases. At face value, these frameworks seem to be the answer to building high-end, bespoke WordPress websites. However, my years of inheriting these builds as a freelance developer tell a different story — one rooted in the reality of long-term maintenance, scalability, and developer onboarding.

As someone who specializes in working with professional websites, I’m frequently handed projects originally built by agencies using these frameworks. This experience has given me a unique perspective on the real-world implications of these tools over time. While they may look great in an initial pitch, their complexities often create friction for future developers, maintenance teams, and even the businesses they serve.

This is not to say frameworks like Sage or Genesis are without merit, but they are far from the universal “best practice” they’re often touted to be.

Below, I’ll share the lessons I’ve learned from inheriting and working with these setups, the challenges I’ve faced, and why I believe a minimal WordPress approach often provides a better path forward.

Why Agencies Use Frameworks

Frameworks are designed to make WordPress development faster, cleaner, and optimized for current best practices. Agencies are drawn to these tools for several reasons:

  • Current code standards
    Frameworks like Sage adopt PSR-2 standards, composer-based dependency management, and MVC-like abstractions.
  • Reusable components
    Sage’s Blade templating encourages modularity, while Genesis relies on hooks for extensive customization.
  • Streamlined design tools
    Integration with Tailwind CSS, SCSS, and Webpack (or newer tools like Bud) allows rapid prototyping.
  • Optimized performance
    Frameworks are typically designed with lightweight, bloat-free themes in mind.
  • Team productivity
    By creating a standardized approach, these frameworks promise efficiency for larger teams with multiple contributors.

On paper, these benefits make frameworks an enticing choice for agencies. They simplify the initial build process and cater to developers accustomed to working with modern PHP practices and JavaScript-driven tooling. But whenever I inherit these projects years later, the cracks in the foundation begin to show.

The Reality of Maintaining Framework-Based Builds

While frameworks have their strengths, my firsthand experience reveals recurring issues that arise when it’s time to maintain or extend these builds. These challenges aren’t theoretical — they are issues I’ve encountered repeatedly when stepping into an existing framework-based site.

1. Abstraction Creates Friction

One of the selling points of frameworks is their use of abstractions, such as Blade templating and controller-to-view separation. While these patterns make sense in theory, they often lead to unnecessary complexity in practice.

For instance, Blade templates abstract PHP logic from WordPress’s traditional theme hierarchy. This means errors like syntax issues don’t provide clear stack traces pointing to the actual view file — rather, they reference compiled templates. Debugging becomes a scavenger hunt, especially for developers unfamiliar with Sage’s structure.

One example is a popular news outlet with millions of monthly visitors. When I first inherited their Sage-based theme, I had to bypass their Lando/Docker environment to use my own minimal Nginx localhost setup. The theme was incompatible with standard WordPress workflows, and I had to modify build scripts to support a traditional installation. Once I resolved the environment issues, I realized their build process was incredibly slow, with hot module replacement only partially functional (Blade template changes wouldn’t reload). Each save took 4–5 seconds to compile.

Faced with a decision to either upgrade to Sage 10 or rebuild the critical aspects, I opted for the latter. We drastically improved performance by replacing the Sage build with a simple Laravel Mix process. The new build process was reduced from thousands of lines to 80, significantly improving developer workflow. Any new developer could now understand the setup quickly, and future debugging would be far simpler.

2. Inflexible Patterns

While Sage encourages “best practices,” these patterns can feel rigid and over-engineered for simple tasks. Customizing basic WordPress features — like adding a navigation menu or tweaking a post query — requires following the framework’s prescribed patterns. This introduces a learning curve for developers who aren’t deeply familiar with Sage, and slows down progress for minor adjustments.

Traditional WordPress theme structures, by contrast, are intuitive and widely understood. Any WordPress developer, regardless of background, can jump into a classic theme and immediately know where to look for templates, logic, and customizations. Sage’s abstraction layers, while well-meaning, limit accessibility to a smaller, more niche group of developers.

3. Hosting Compatibility Issues

When working with Sage, issues with hosting environments are inevitable. For example, Sage’s use of Laravel Blade compiles templates into cached PHP files, often stored in directories like /wp-content/cache. Strict file system rules on managed hosting platforms, like WP Engine, can block these writes, leading to white screens or broken templates after deployment.

This was precisely the issue I faced with a custom agency-built theme using Sage on WP Engine. Every Git deployment resulted in a white screen of death due to PHP errors caused by Blade templates failing to save in the intended cache directory. The solution, recommended by WP Engine support, was to use the system’s /tmp directory. While this workaround prevented deployment errors, it undermined the purpose of cached templates, as temporary files are cleared by PHP’s garbage collection. Debugging and implementing this solution consumed significant time — time that could have been avoided had the theme been designed with hosting compatibility in mind.

4. Breaking Changes And Upgrade Woes

Upgrading from Sage 9 to Sage 10 — or even from older versions of Roots — often feels like a complete rebuild. These breaking changes create friction for businesses that want long-term stability. Clients, understandably, are unwilling to pay for what amounts to refactoring without a visible return on investment. As a result, these sites stagnate, locked into outdated versions of the framework, creating problems with dependency management (e.g., Composer packages, Node.js versions) and documentation mismatches.

One agency subcontract I worked on recently gave me insight into Sage 10’s latest approach. Even on small microsites with minimal custom logic, I found the Bud-based build system sluggish, with watch processes taking over three seconds to reload.

For developers accustomed to faster workflows, this is unacceptable. Additionally, Sage 10 introduced new patterns and directives that departed significantly from Sage 9, adding a fresh learning curve. While I understand the appeal of mirroring Laravel’s structure, I couldn’t shake the feeling that this complexity was unnecessary for WordPress. By sticking to simpler approaches, the footprint could be smaller, the performance faster, and the maintenance much easier.

The Cost Of Over-Engineering

The issues above boil down to one central theme: over-engineering.

Frameworks like Sage introduce complexity that, while beneficial in theory, often outweighs the practical benefits for most WordPress projects.

When you factor in real-world constraints — like tight budgets, frequent developer turnover, and the need for intuitive codebases — the case for a minimal approach becomes clear.

Minimal WordPress setups embrace simplicity:

  • No abstraction for abstraction’s sake
    Traditional WordPress theme hierarchy is straightforward, predictable, and accessible to a broad developer audience.
  • Reduced tooling overhead
    Avoiding reliance on tools like Webpack or Blade removes potential points of failure and speeds up workflows.
  • Future-proofing
    A standard theme structure remains compatible with WordPress core updates and developer expectations, even a decade later.

In my experience, minimal setups foster easier collaboration and faster problem-solving. They focus on solving the problem rather than adhering to overly opinionated patterns.

Real World Example

Like many things, this all sounds great and makes sense in theory, but what does it look like in practice? Seeing is believing, so I’ve created a minimal theme that exemplifies some of the concepts I’ve described here. It’s a work in progress with plenty of rough edges, but it provides the top features that custom WordPress developers seem to want most in a theme framework.

Modern Features

Before we dive in, I’ll list out some of the key benefits of what’s going on in this theme. Above all of these, working minimally and keeping things simple and easy to understand is by far the largest benefit, in my opinion.

  • A watch task that compiles and reloads in under 100ms;
  • Sass for CSS preprocessing coupled with CSS written in BEM syntax;
  • Native ES modules;
  • Composer package management;
  • Twig view templating;
  • View-controller pattern;
  • Namespaced PHP for isolation;
  • Built-in support for the Advanced Custom Fields plugin;
  • Global context variables for common WordPress data: site_name, site_description, site_url, theme_dir, theme_url, primary_nav, ACF custom fields, the_title(), the_content().

Templating Language

Twig is included with this theme, and it is used to load a small set of commonly used global context variables, such as the theme URL, theme directory, site name, and site URL. It also includes some core functions, like the_content() and the_title(), that you’d routinely use when creating a custom theme. These global context variables and functions are available for all URLs.

While it could be argued that Twig is an unnecessary additional abstraction layer when we’re trying to establish a minimal WordPress setup, I chose to include it because this type of abstraction is included in Sage. But it’s also for a few other important reasons:

  • Old,
  • Dependable, and
  • Stable.

You won’t need to worry about breaking changes in future versions, and Twig is widely used today. All the features I commonly see used in Sage Blade templates can be handled similarly with Twig. There really isn’t anything you can do with Blade that isn’t possible with Twig.

Blade is a great templating language, but it’s best suited for Laravel, in my opinion. BladeOne does provide a good way to use it as a standalone templating engine, but even then, it’s still not as performant under pressure as Twig. Twig’s added performance, when used with small, efficient contexts, allows us to avoid the complexity that comes with caching view output. Compile-on-the-fly Twig is very close to the same speed as raw PHP in this use case.

Most importantly, Twig was built to be portable. It can be installed with composer and used within the theme with just 55 lines of code.

Now, in a real project, this would probably be more than 55 lines, but either way, it is, without a doubt, much easier to understand and work with than Blade. Blade was built for use in Laravel, and it’s just not nearly as portable. It will be significantly easier to identify issues, track them down with a direct stack trace, and fix them with Twig.
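
To make that concrete, here is a hedged sketch of what such a bootstrap can look like. The theme_render() helper and the views directory are illustrative assumptions, not the theme’s actual code:

<?php
// functions.php: a minimal Twig bootstrap, after `composer require twig/twig`.
require_once get_theme_file_path( 'vendor/autoload.php' );

function theme_render( $template, array $context = array() ) {
  static $twig = null;

  if ( null === $twig ) {
    $loader = new \Twig\Loader\FilesystemLoader( get_theme_file_path( 'views' ) );
    // Compile on the fly; no cache directory to fight with on managed hosts.
    $twig = new \Twig\Environment( $loader, array( 'cache' => false ) );
  }

  // Global context every view can rely on; page-specific keys win.
  $context += array(
    'site_url'  => home_url(),
    'theme_url' => get_template_directory_uri(),
  );

  echo $twig->render( $template, $context );
}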

The view context in this theme is deliberately kept sparse; during a site build, you’ll add what you specifically need for a particular site. A lean context for your views helps with performance and workflow.

Models & Controllers

The template hierarchy follows the patterns of good ol’ WordPress, and while some developers don’t like this, it is undoubtedly the most widely accepted and commonly understood standard. Each standard theme file acts as a model where you define your data structures with PHP and hand them off as the context to a .twig view file.

Developers like the structure of separating server-side logic from a template, and in a classic MVC/MVVC pattern, we have our model, view, and controller. Here, I’m using the standard WordPress theme templates as models.
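
For instance, a single.php model might gather post data and hand it to a view. This sketch assumes the hypothetical theme_render() helper from the previous example:

<?php
// single.php acts as the "model": collect data, pass it to the view.
the_post();

theme_render( 'single.twig', array(
  'title'   => get_the_title(),
  'content' => apply_filters( 'the_content', get_the_content() ),
) );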

Currently, template files include some useful basics. You’re likely familiar with these standard templates, but I’ll list them here for posterity:

  • 404.php: Displays a custom “Page Not Found” message when a visitor tries to access a page that doesn’t exist.
  • archive.php: Displays a list of posts from a particular archive, such as a category, date, or tag archive.
  • author.php: Displays a list of posts by a specific author, along with the author’s information.
  • category.php: Displays a list of posts from a specific category.
  • footer.php: Contains the footer section of the theme, typically including closing HTML tags and widgets or navigation in the footer area.
  • front-page.php: The template used for the site’s front page, either static or a blog, depending on the site settings.
  • functions.php: Adds custom functionality to the theme, such as registering menus and widgets or adding theme support for features like custom logos or post thumbnails.
  • header.php: Contains the header section of the theme, typically including the site’s title, meta tags, and navigation menu.
  • index.php: The fallback template for all WordPress pages, used if no more specific template (like category.php or single.php) is available.
  • page.php: Displays individual static pages, such as “About” or “Contact” pages.
  • screenshot.png: An image of the theme’s design, shown in the WordPress theme selector to give users a preview of the theme’s appearance.
  • search.php: Displays the results of a search query, showing posts or pages that match the search terms entered by the user.
  • single.php: Displays individual posts, often used for blog posts or custom post types.
  • tag.php: Displays a list of posts associated with a specific tag.

Extremely Fast Build Process For SCSS And JavaScript

The build is curiously different in this theme, but out of the box, you can compile SCSS to CSS, work with native JavaScript modules, and have a live reload watch process with a tiny footprint. Look inside the bin/*.js files, and you’ll see everything that’s happening.

There are just two commands here, and all web developers should be familiar with them:

  1. Watch
    While developing, it will reload or inject JavaScript and CSS changes into the browser automatically using Browsersync.
  2. Build
    This task compiles all top-level *.scss files efficiently. There’s room for improvement, but keep in mind this theme serves as a concept.

Now for a curveball: there is no compile process for JavaScript. File changes will still be injected into the browser with hot module replacement during watch mode, but we don’t need to compile anything.

WordPress will load theme JavaScript as native ES modules, using WordPress 6.5’s support for ES modules. My reasoning is that many sites now pass through Cloudflare, so modern compression is handled for JavaScript automatically. Many specialized WordPress hosts do this as well. When comparing minification to GZIP, it’s clear that minification provides trivial gains in file reduction. The vast majority of file reduction is provided by CDN and server compression. Based on this, I believe the benefits of a fast workflow far outweigh the additional overhead of pulling in build steps for webpack, Rollup, or other similar packaging tools.
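
Enqueuing the theme’s entry point as a native module is a small step. This sketch uses the script module API added in WordPress 6.5; the file path is a placeholder:

// functions.php: load js/main.js as a native ES module.
add_action( 'wp_enqueue_scripts', function () {
  wp_enqueue_script_module(
    'theme-main',
    get_theme_file_uri( 'js/main.js' ),
    array(),
    wp_get_theme()->get( 'Version' )
  );
} );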

We’re fortunate that the web fully supports ES modules today, so there is really no reason why we should need to compile JavaScript at all if we’re not using a JavaScript framework like Vue, React, or Svelte.

A Contrarian Approach

My perspective and the ideas I’ve shared here are undoubtedly contrarian. Like anything alternative, this is bound to ruffle some feathers. Frameworks like Sage are celebrated in developer circles, with strong communities behind them. For certain use cases — like large-scale, enterprise-level projects with dedicated development teams — they may indeed be the right fit.

For the vast majority of WordPress projects I encounter, the added complexity creates more problems than it solves. As developers, our goal should be to build solutions that are not only functional and performant but also maintainable and approachable for the next person who inherits them.

Simplicity, in my view, is underrated in modern web development. A minimal WordPress setup, tailored to the specific needs of the project without unnecessary abstraction, is often the leaner, more sustainable choice.

Conclusion

Inheriting framework-based projects has taught me invaluable lessons about the real-world impact of theme frameworks. While they may impress in an initial pitch or during development, the long-term consequences of added complexity often outweigh the benefits. By adopting a minimal WordPress approach, we can build sites that are easier to maintain, faster to onboard new developers, and more resilient to change.

Modern tools have their place, but minimalism never goes out of style. When you choose simplicity, you choose a codebase that works today, tomorrow, and years down the line. Isn’t that what great web development is all about?

]]>
hello@smashingmagazine.com (Kevin Leary)
<![CDATA[Sunshine And March Vibes (2025 Wallpapers Edition)]]> https://smashingmagazine.com/2025/02/desktop-wallpaper-calendars-march-2025/ https://smashingmagazine.com/2025/02/desktop-wallpaper-calendars-march-2025/ Fri, 28 Feb 2025 13:00:00 GMT With the days getting noticeably longer in the northern hemisphere, the sun coming out, and the flowers blooming, March fuels us with fresh energy. And even if spring is far away in your part of the world, you might feel that 2025 has gained full speed by now — the perfect opportunity to put all those plans you’ve made and ideas you’ve been carrying around to action!

To cater for some extra inspiration this March, artists and designers from across the globe once again challenged their creative skills and designed a new batch of desktop wallpapers to accompany you through the month. As every month, you’ll find their artworks compiled below — together with some timeless March favorites from our archives that are just too good to be forgotten.

This post wouldn’t exist without the kind support of our wonderful community who diligently contributes their designs each month anew to keep the steady stream of wallpapers flowing. So, a huge thank-you to everyone who shared their artwork with us this time around! If you, too, would like to get featured in one of our upcoming wallpapers posts, please don’t hesitate to join in. We can’t wait to see what you’ll come up with! Happy March!

  • You can click on every image to see a larger preview.
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t in any way influenced by us but rather designed from scratch by the artists themselves.
  • Submit your wallpaper design! 👩‍🎨
    Feeling inspired? We are always looking for creative talent and would love to feature your desktop wallpaper in one of our upcoming posts. Join in ↬

Bee-utiful Smile

Designed by Doreen Bethge from Germany.

Coffee Break

Designed by Ricardo Gimenes from Spain.

Rosa Parks

“March, the month of transition between winter and spring, is dedicated to Rosa Parks and her great phrase: ‘You must never be fearful about what you are doing when it is right.’” — Designed by Veronica Valenzuela from Spain.

So Tire

Designed by Ricardo Gimenes from Spain.

Time To Wake Up

“Rays of sunlight had cracked into the bear’s cave. He slowly opened one eye and caught a glimpse of nature in blossom. Is it spring already? Oh, but he is so sleepy. He doesn’t want to wake up, not just yet. So he continues dreaming about those sweet sluggish days while everything around him is blooming.” — Designed by PopArt Studio from Serbia.

Music From The Past

Designed by Ricardo Gimenes from Spain.

Northern Lights

“Spring is getting closer, and we are waiting for it with open arms. This month, we want to enjoy discovering the northern lights. To do so, we are going to Alaska, where we have the faithful company of our friend White Fang.” — Designed by Veronica Valenzuela Jimenez from Spain.

Queen Bee

“Spring is coming! Birds are singing, flowers are blooming, bees are flying… Enjoy this month!” — Designed by Melissa Bogemans from Belgium.

Botanica

Designed by Vlad Gerasimov from Georgia.

Let’s Spring

“After some freezing months, it’s time to enjoy the sun and flowers. It’s party time, colours are coming, so let’s spring!” — Designed by Colorsfera from Spain.

Spring Bird

Designed by Nathalie Ouederni from France.

Explore The Forest

“This month, I want to go to the woods and explore my new world in sunny weather.” — Designed by Zi-Cing Hong from Taiwan.

Tacos To The Moon And Back

Designed by Ricardo Gimenes from Spain.

Daydreaming

“Daydreaming of better things, of lovely things, of saddening things.” — Designed by Bhabna Basak from India.

Ballet

“A day, even a whole month, isn’t enough to show how much a woman should be appreciated. Dear ladies, any day or month are yours if you decide so.” — Designed by Ana Masnikosa from Belgrade, Serbia.

Awakening

“I am the kind of person who prefers the cold but I do love spring since it’s the magical time when flowers and trees come back to life and fill the landscape with beautiful colors.” — Designed by Maria Keller from Mexico.

MARCHing Forward

“If all you want is a little orange dinosaur MARCHing (okay, I think you get the pun) across your monitor, this wallpaper was made just for you! This little guy is my design buddy at the office and sits by (and sometimes on top of) my monitor. This is what happens when you have designer’s block and a DSLR.” — Designed by Paul Bupe Jr from Statesboro, GA.

Jingzhe

“Jīngzhé is the third of the 24 solar terms in the traditional East Asian calendars. The word 驚蟄 means ‘the awakening of hibernating insects’. 驚 is ‘to start’ and 蟄 means ‘hibernating insects’. Traditional Chinese folklore says that during Jingzhe, thunderstorms will wake up the hibernating insects, which implies that the weather is getting warmer.” — Designed by Sunny Hong from Taiwan.

Fresh Lemons

Designed by Nathalie Ouederni from France.

Pizza Time

“Who needs an excuse to look at pizza all month?” — Designed by James Mitchell from the United Kingdom.

Questions

“Doodles are slowly becoming my trademark, so I just had to use them to express this phrase I’m fond of recently. A bit enigmatic, philosophical. Inspiring, isn’t it?” — Designed by Marta Paderewska from Poland.

The Unknown

“I made a connection between the dark side and the unknown, lighted and catchy area.” — Designed by Valentin Keleti from Romania.

Waiting For Spring

“As days are getting longer again and the first few flowers start to bloom, we are all waiting for spring to finally arrive.” — Designed by Naioo from Germany.

St. Patrick’s Day

“On the 17th March, raise a glass and toast St. Patrick on St. Patrick’s Day, the Patron Saint of Ireland.” — Designed by Ever Increasing Circles from the United Kingdom.

Spring Is Coming

“This March, our calendar design epitomizes the heralds of spring. Soon enough, you’ll be waking up to the singing of swallows, in a room full of sunshine, filled with the empowering smell of daffodil, the first springtime flowers. Spring is the time of rebirth and new beginnings, creativity and inspiration, self-awareness, and inner reflection. Have a budding, thriving spring!” — Designed by PopArt Studio from Serbia.

Happy Birthday Dr. Seuss!

“March 2nd marks the birthday of the most creative and extraordinary author ever, Dr. Seuss! I have included an inspirational quote about learning to encourage everyone to continue learning new things every day.” — Designed by Safia Begum from the United Kingdom.

Wake Up!

“Early spring in March is, for me, the time when the snow melts and nothing is very colorful yet. This is what I wanted to show. Everything comes to life slowly, like this bear. Flowers are banal, so instead of a purple crocus we have a purple bird as a harbinger.” — Designed by Marek Kedzierski from Poland.

Spring Is Inevitable

“Spring is round the corner. And very soon plants will grow on some other planets too. Let’s be happy about a new cycle of life.” — Designed by Igor Izhik from Canada.

Traveling To Neverland

“This month we become children and we travel with Peter Pan. Let’s go to Neverland!” — Designed by Veronica Valenzuela from Spain.

Let’s Get Outside

Designed by Lívia Lénárt from Hungary.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[The Human Element: Using Research And Psychology To Elevate Data Storytelling]]> https://smashingmagazine.com/2025/02/human-element-using-research-psychology-elevate-data-storytelling/ https://smashingmagazine.com/2025/02/human-element-using-research-psychology-elevate-data-storytelling/ Wed, 26 Feb 2025 10:00:00 GMT Data storytelling is a powerful communication tool that combines data analysis with narrative techniques to create impactful stories. It goes beyond presenting raw numbers by transforming complex data into meaningful insights that can drive decisions, influence behavior, and spark action.

When done right, data storytelling simplifies complex information, engages the audience, and compels them to act. It allows UX professionals to communicate the “why” behind their design choices, advocate for user-centered improvements, and ultimately create more impactful and persuasive presentations. This translates to stronger buy-in for research initiatives, increased alignment across teams, and, ultimately, products and experiences that truly meet user needs.

For instance, The New York Times’ Snow Fall data story (Figure 1) used data to immerse readers in the tale of a deadly avalanche through interactive visuals and text, while The Guardian’s The Counted (Figure 2) powerfully illustrated police violence in the U.S. by humanizing data through storytelling. These examples show that effective data storytelling can leave lasting impressions, prompting readers to think differently, act, or make informed decisions.

The importance of data storytelling lies in its ability to:

  • Simplify complexity
    It makes data understandable and actionable.
  • Engage and persuade
    Emotional and cognitive engagement ensures audiences not only understand but also feel compelled to act.
  • Bridge gaps
    Data storytelling connects the dots between information and human experience, making the data relevant and relatable.

While there are numerous models of data storytelling, here are a few high-level areas of focus UX practitioners should have a grasp of:

Narrative Structures: Traditional storytelling models like the hero’s journey (Vogler, 1992) or the Freytag pyramid (Figure 3) provide a backbone for structuring data stories. These models help create a beginning, rising action, climax, falling action, and resolution, keeping the audience engaged.

Data Visualization: Broadly speaking, these are the tools and techniques for visualizing data in our stories. Interactive charts, maps, and infographics (Cairo, 2016) transform raw data into digestible visuals, making complex information easier to understand and remember.

Narrative Structures For Data

Moving beyond these basic structures, let’s explore how more sophisticated narrative techniques can enhance the impact of data stories:

  • The Three-Act Structure
    This approach divides the data story into setup, confrontation, and resolution. It helps build context, present the problem or insight, and offer a solution or conclusion (Few, 2005).
  • The Hero’s Journey (Data Edition)
    We can frame a data set as a problem that needs a hero to overcome. In this case, the hero is often the audience or the decision-maker who needs to use the data to solve a problem. The data itself becomes the journey, revealing challenges, insights, and, ultimately, a path to resolution.
Example:
Presenting data on declining user engagement could follow the hero’s journey. The “call to adventure” is the declining engagement. The “challenges” are revealed through data points showing where users are dropping off. The “insights” are uncovered through further analysis, revealing the root causes. The “resolution” is the proposed solution, supported by data, that the audience (the hero) can implement.

Problems With Widely Used Data Storytelling Models

Many data storytelling models follow a traditional, linear structure: data selection, audience tailoring, storyboarding with visuals, and a call to action. While these models aim to make data more accessible, they often fail to engage the audience on a deeper level, leading to missed opportunities. This happens because they prioritize the presentation of data over the experience of the audience, neglecting how different individuals perceive and process information.

While existing data storytelling models adhere to a structured and technically correct approach to data creation, they often fall short of fully analyzing and understanding their audience. This gap weakens their overall effectiveness and impact.

  • Cognitive Overload
    Presenting too much data without context or a clear narrative overwhelms the audience. Instead of enlightenment, they experience confusion and disengagement. It’s like trying to drink from a firehose; the sheer volume becomes counterproductive. This overload can be particularly challenging for individuals with cognitive differences who may require information to be presented in smaller, more digestible chunks.
  • Emotional Disconnect
    Data-heavy presentations often fail to establish an emotional connection, which is crucial for driving audience engagement and action. People are more likely to remember and act upon information that resonates with their feelings and values.
  • Lack of Personalization
    Many data stories adopt a one-size-fits-all approach. Without tailoring the narrative to specific audience segments, the impact is diluted. A message that resonates with a CEO might not land with frontline employees.
  • Over-Reliance on Visuals
    While visuals are essential for simplifying data, they are insufficient without a cohesive narrative to provide context and meaning, and they may not be accessible to all audience members.

These shortcomings reveal a critical flaw: while current models successfully follow a structured data creation process, they often neglect the deeper, audience-centered analysis required for actual storytelling effectiveness. To bridge this gap,

Data storytelling must evolve beyond simply presenting information — it should prioritize audience understanding, engagement, and accessibility at every stage.

Improving On Traditional Models

Traditional models can be improved by focusing more on the following two critical components:

Audience understanding: A greater focus can be placed on who the audience is, what they need, and how they perceive information. Traditional models often fail to consider the unique characteristics and needs of specific audiences, and this lack of audience understanding can lead to data stories that are irrelevant, confusing, or even misleading.

Effective data storytelling requires a deep understanding of the audience’s demographics, psychographics, and information needs. This includes understanding their level of knowledge about the topic, their prior beliefs and attitudes, and their motivations for seeking information. By tailoring the data story to a specific audience, storytellers can increase engagement, comprehension, and persuasion.

Psychological principles: These models can be improved with insights from psychology that explain how people process information and make decisions. Without these elements, even the most beautifully designed data story may fall flat.

By incorporating audience understanding and psychological principles into their storytelling process, data storytellers can create more effective and engaging narratives that resonate with their audience and drive desired outcomes.

Persuasion In Data Storytelling

All storytelling involves persuasion. Even if it’s a poorly told story and your audience chooses to ignore your message, you’ve persuaded them to do that. When your audience feels that you understand them, they are more likely to be persuaded by your message. Data-driven stories that speak to their hearts and minds are more likely to drive action. You can frame your message effectively when you have a deeper understanding of your audience.

Applying Psychological Principles To Data Storytelling

Humans process information based on psychological cues such as cognitive ease, social proof, and emotional appeal. By incorporating these principles, data storytellers can make their narratives more engaging, memorable, and persuasive.

Psychological principles help data storytellers tap into how people perceive, interpret, and remember information.

The Theory of Planned Behavior

While there is no single truth when it comes to how human behavior is created or changed, it is important for a data storyteller to use a theoretical framework to ensure they address the appropriate psychological factors of their audience. The Theory of Planned Behavior (TPB) is a commonly cited theory of behavior change in academic psychology research and courses. It’s useful for creating a reasonably effective framework to collect audience data and build a data story around it.

The TPB (Ajzen, 1991) (Figure 5) aims to predict and explain human behavior. It consists of three key components:

  1. Attitude
    This refers to the degree to which a person has a favorable or unfavorable evaluation of the behavior in question. An example of attitudes in the TPB is a person’s belief about the importance of regular exercise for good health. If an individual strongly believes that exercise is beneficial, they are likely to have a favorable attitude toward engaging in regular physical activity.
  2. Subjective Norms
    These are the perceived social pressures to perform or not perform the behavior. Keeping with the exercise example, this would be how a person thinks their family, peers, community, social media, and others perceive the importance of regular exercise for good health.
  3. Perceived Behavioral Control
    This component reflects the perceived ease or difficulty of performing the behavior. For our physical activity example, does the individual believe they have access to exercise in terms of time, equipment, physical capability, and other potential aspects that make them feel more or less capable of engaging in the behavior?

As shown in Figure 5, these three components interact to create behavioral intentions, which are a proxy for actual behaviors that we often don’t have the resources to measure in real-time with research participants (Ajzen, 1991).

UX researchers and data storytellers should develop a working knowledge of the TPB or another suitable psychological theory before moving on to measure the audience’s attitudes, norms, and perceived behavioral control. We have included additional resources to support your learning about the TPB in the references section of this article.

How To Understand Your Audience And Apply Psychological Principles

OK, we’ve covered the importance of audience understanding and psychology. These two principles serve as the foundation of the proposed model of storytelling we’re putting forth. Let’s explore how to integrate them into your storytelling process.

Introducing The Audience Research Informed Data Storytelling Model (ARIDSM)

At the core of successful data storytelling lies a deep understanding of your audience’s psychology. Here’s a five-step process to integrate UX research and psychological principles effectively into your data stories:

Step 1: Define Clear Objectives

Before diving into data, it’s crucial to establish precisely what you aim to achieve with your story. Do you want to inform, persuade, or inspire action? What specific message do you want your audience to take away?

Why it matters: Defining clear objectives provides a roadmap for your storytelling journey. It ensures that your data, narrative, and visuals are all aligned toward a common goal. Without this clarity, your story risks becoming unfocused and losing its impact.

How to execute Step 1: Start by asking yourself:

  • What is the core message I want to convey?
  • What do I want my audience to think, feel, or do after experiencing this story?
  • How will I measure the success of my data story?

Frame your objectives using action verbs and quantifiable outcomes. For example, instead of “raise awareness about climate change,” aim to “persuade 20% of the audience to adopt one sustainable practice.”

Example:
Imagine you’re creating a data story about employee burnout. Your objective might be to convince management to implement new policies that promote work-life balance, with the goal of reducing reported burnout cases by 15% within six months.

Step 2: Conduct UX Research To Understand Your Audience

This step involves gathering insights about your audience: their demographics, needs, motivations, pain points, and how they prefer to consume information.

Why it matters: Understanding your audience is fundamental to crafting a story that resonates. By knowing their preferences and potential biases, you can tailor your narrative and data presentation to capture their attention and ensure the message is clearly understood.

How to execute Step 2: Employ UX research methods like surveys, interviews, persona development, and testing the message with potential audience members.

Example:
If your data story aims to encourage healthy eating habits among college students, your research might include a survey of students to determine their attitudes toward specific types of healthy foods, so you can apply that knowledge in your data story.

Step 3: Analyze and Select Relevant Audience Data

This step bridges the gap between raw data and meaningful insights. It involves exploring your data to identify patterns, trends, and key takeaways that support your objectives and resonate with your audience.

Why it matters: Careful data analysis ensures that your story is grounded in evidence and that you’re using the most impactful data points to support your narrative. This step adds credibility and weight to your story, making it more convincing and persuasive.

How to execute Step 3:

  • Clean and organize your data.
    Ensure accuracy and consistency before analysis.
  • Identify key variables and metrics.
    This will be determined by the psychological principle you used to inform your research. Using the TPB, we might look closely at how we measured social norms to understand, directionally, how the audience perceives the norms around the topic of your data story, allowing you to frame your call to action in ways that resonate with those norms. You might run a variety of statistics at this point, including factor analysis to create groups based on similar traits, t-tests to determine whether averages on your measurements differ significantly between groups, and correlations to see whether scores on various items move together.
Example:
If your objective is to demonstrate the effectiveness of a new teaching method, you might analyze how your audience perceives their peers’ openness to adopting new methods, their belief that the decision to use a new teaching method is within their control, and their attitude toward the effectiveness of their current methods. This lets you create groups with various levels of receptivity to trying new methods and later tailor your data story for each group.

Step 4: Apply The Theory of Planned Behavior Or Your Psychological Principle Of Choice [Done Simultaneously With Step 3]

In this step, you will apply the Theory of Planned Behavior (TPB), which provides a robust framework for understanding the factors that drive human behavior. It posits that our intentions, which are the strongest predictors of our actions, are shaped by three core components: attitudes, subjective norms, and perceived behavioral control. By consciously incorporating these elements into your data story, you can significantly enhance its persuasive power.

Why it matters: The TPB offers valuable insights into how people make decisions. By aligning your narrative with these psychological drivers, you increase the likelihood of influencing your audience’s intentions and, ultimately, their behavior. This step adds a layer of strategic persuasion to your data storytelling, making it more impactful and effective.

How to execute Step 4:

Here’s how to leverage the TPB in your data story:

Influence Attitudes: Present data and evidence that highlight the positive consequences of adopting the desired behavior. Frame the behavior as beneficial, valuable, and aligned with the audience’s values and aspirations.

This is where having a deep knowledge of the audience is helpful. Let’s imagine you are creating a data story on exercise, and your call to action promotes exercising daily. If you know your audience has a highly positive attitude towards exercise, you can capitalize on that and frame your language around the benefits of exercising, increasing exercise, or specific exercises that might be best suited for the audience. It’s about framing exercise not just as a physical benefit but as a holistic improvement to their life. You can also tie it to their identity, positioning exercise as an integral part of living the kind of life they aspire to.

Shape Subjective Norms: Demonstrate that the desired behavior is widely accepted and practiced by others, especially those the audience admires or identifies with. Knowing ahead of time if your audience thinks daily exercise is something their peers approve of or engage in will allow you to shape your messaging accordingly. Highlight testimonials, success stories, or case studies from individuals who mirror the audience’s values.

If you were to find that the audience does not consider exercise to be normative amongst peers, you would look for examples of similar groups of people who do exercise. For example, if your audience is in a certain age group, you might focus on what data you have that supports a large percentage of those in their age group engaging in exercise.

Enhance Perceived Behavioral Control: Address any perceived barriers to adopting the desired behavior and provide practical solutions. For instance, when promoting daily exercise, it’s important to acknowledge the common obstacles people face — lack of time, resources, or physical capability — and demonstrate how these can be overcome.

Step 5: Craft A Balanced And Persuasive Narrative

This is where you synthesize your data, audience insights, psychological principles (including the TPB), and storytelling techniques into a compelling and persuasive narrative. It’s about weaving together the logical and emotional elements of your story to create an experience that resonates with your audience and motivates them to act.

Why it matters: A well-crafted narrative transforms data from dry statistics into a meaningful and memorable experience. It ensures that your audience not only understands the information but also feels connected to it on an emotional level, increasing the likelihood of them internalizing the message and acting upon it.

How to execute Step 5:

Structure your story strategically: Use a clear narrative arc that guides your audience through the information. Begin by establishing the context and introducing the problem, then present your data-driven insights in a way that supports your objectives and addresses the TPB components. Conclude with a compelling call to action that aligns with the attitudes, norms, and perceived control you've cultivated throughout the narrative.

Example:
In a data story about promoting exercise, you could:
  • Determine what stories might be available using the data you have collected or obtained. In this example, let’s say you work for a city planning office and have data suggesting people aren’t currently biking as frequently as they could, even if they are bike owners.
  • Begin with a relatable story about lack of exercise and its impact on people’s lives. Then, present data on the benefits of cycling, highlighting its positive impact on health, socializing, and personal feelings of well-being (attitudes).
  • Integrate TPB elements: Showcase stories of people who have successfully incorporated cycling into their daily commute (subjective norms). Provide practical tips on bike safety, route planning, and finding affordable bikes (perceived behavioral control).
  • Use infographics to compare commute times and costs between driving and cycling. Show maps of bike-friendly routes and visually appealing images of people enjoying cycling.
  • Call to action: Encourage the audience to try cycling for a week and provide links to resources like bike share programs, cycling maps, and local cycling communities.

Evaluating The Method

Our next step is to test our hypothesis that incorporating audience research and psychology into creating a data story will lead to more powerful results. We have conducted preliminary research using messages focused on climate change, and our results suggest some support for our assertion.

We purposely chose a controversial topic because we believe data storytelling can be a powerful tool. If we want to truly realize the benefits of effective data storytelling, we need to focus on topics that matter. We also know that academic research suggests it is more difficult to shift opinions or generate behavior around topics that are polarizing (at least in the US), such as climate change.

We are not ready to share the full results of our study. We will share those in an academic journal and in conference proceedings. Here is a look at how we set up the study and how you might do something similar when either creating a data story using our method or doing your own research to test our model. You will see that it closely aligns with the model itself, with the added steps of testing the message against a control message and taking measurements of the actions the message(s) are likely to generate.

Step 1: We chose our topic and the data set we wanted to explore. As I mentioned, we purposely went with a polarizing topic. My academic background was in messaging around conservation issues, so we explored that. We used data from a publicly available data set that states July 2023 was the hottest month ever recorded.

Step 2: We identified our audience and took basic measurements. We decided our audience would be members of the general public who do not work directly with climate data or in other fields relevant to climate change science.

We wanted a diverse range of ages and backgrounds, so we screened for this in the same survey we used to measure the TPB components. We created the survey to measure the elements of the TPB as they relate to climate change and administered it via a Google Forms link that we shared directly, in social media posts, and in online message boards related to climate change and survey research.

Step 3: We analyzed our data and broke our audience into groups based on key differences. This part required a bit of statistical know-how. Essentially, we entered all of the responses into a spreadsheet and ran a factor analysis to define groups based on shared attributes. In our case, we found two distinct groups for our respondents. We then looked deeper into the individual differences between the groups, e.g., group 1 had a notably higher level of positive attitude towards taking action to remediate climate change.

Step 4 [remember this happens simultaneously with step 3]: We incorporated aspects of the TPB in how we framed our data analysis. As we created our groups and looked at the responses to the survey, we made sure to note how this might impact the story for our various groups. Using our previous example, a group with a higher positive attitude toward taking action might need less convincing to do something about climate change and more information on what exactly they can do.

Table 1 contains examples of the questions we asked related to the TPB. We used the guidance provided here to generate the survey items to measure the TPB related to climate change activism. Note that even the academic who created the TPB states there are no standardized questions (PDF) validated to measure the concepts for each individual topic.

  • Item: “How beneficial do you believe individual actions are compared to systemic changes (e.g., government policies) in tackling climate change?” Measures: Attitude. Scale: 1 to 5, with 1 being “not beneficial” and 5 being “extremely beneficial”.
  • Item: “How much do you think the people you care about (family, friends, community) expect you to take action against climate change?” Measures: Subjective Norms. Scale: 1 to 5, with 1 being “they do not expect me to take action” and 5 being “they expect me to take action”.
  • Item: “How confident are you in your ability to overcome personal barriers when trying to reduce your environmental impact?” Measures: Perceived Behavioral Control. Scale: 1 to 5, with 1 being “not at all confident” and 5 being “extremely confident”.

Table 1: Examples of questions we used to measure the TPB factors. We asked multiple questions for each factor and then generated a combined mean score for each component.
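
For illustration, here is a minimal sketch in TypeScript of how such combined mean scores could be computed and used to split respondents into groups. The field names and the threshold are hypothetical, and the study itself used factor analysis rather than this naive single-dimension split:

```typescript
// A minimal, illustrative sketch: combine Likert answers into mean TPB
// scores and split respondents on the attitude dimension.
type SurveyResponse = {
  attitudeItems: number[]; // 1–5 answers to the attitude questions
  normItems: number[];     // 1–5 answers to the subjective-norm questions
  controlItems: number[];  // 1–5 answers to the perceived-control questions
};

const mean = (xs: number[]): number =>
  xs.reduce((sum, x) => sum + x, 0) / xs.length;

function scoreRespondent(r: SurveyResponse) {
  return {
    attitude: mean(r.attitudeItems),
    subjectiveNorms: mean(r.normItems),
    perceivedControl: mean(r.controlItems),
  };
}

// Naive split on one dimension; factor analysis groups respondents
// across all measured traits at once.
function splitByAttitude(responses: SurveyResponse[], threshold = 3.5) {
  const scored = responses.map(scoreRespondent);
  return {
    highAttitude: scored.filter((s) => s.attitude >= threshold),
    lowAttitude: scored.filter((s) => s.attitude < threshold),
  };
}
```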

Step 5: We created data stories aligned with the groups and a control story. We created multiple stories to align with the groups we identified in our audience. We also created a control message that lacked substantial framing in any direction. See below for an example of the control data story (Figure 7) and one of the customized data stories (Figure 8) we created.

Step 6: We released the stories and measured the likelihood of acting. Specific to our study, we asked the participants how likely they were to “Click here to LEARN MORE.” Our hypothesis was that individuals would express a notably higher likelihood of clicking to learn more on the data story aligned with their group, as compared to the competing group’s story and the control story.

Step 7: We analyzed the differences between the preexisting groups and their stated likelihood of acting. As I mentioned, our findings are still preliminary, and we are looking at ways to increase our response rate so we can present statistically substantiated findings. Our initial findings are that we do see small differences between the responses to the tailored data stories and the control data story. This is directionally what we would expect to see. If you are going to conduct a similar study or test out your messages, you would also be looking for results that suggest your ARIDSM-derived message is more likely to generate the expected outcome than a control message or a non-tailored message.

Overall, we feel this is an exciting possibility and that future research will help us refine exactly what is critical about generating a message that will have a positive impact on your audience. We also expect there are better psychological models to use to frame your measurements and message, depending on the audience and topic.

For example, you might feel Maslow’s hierarchy of needs is more relevant to your data storytelling. You would want to take measurements related to these needs from your audience and then frame the data story using how a decision might help meet their needs.

Elevate Your Data Storytelling

Traditional models of data storytelling, while valuable, often fall short of effectively engaging and persuading audiences. This is primarily due to their neglect of crucial aspects such as audience understanding and the application of psychological principles. By incorporating these elements into the data storytelling process, we can create more impactful and persuasive narratives.

The five-step framework proposed in this article — defining clear objectives, conducting UX research, analyzing data, applying psychological principles, and crafting a balanced narrative — provides a roadmap for creating data stories that resonate with audiences on both a cognitive and emotional level. This approach ensures that data is not merely presented but is transformed into a meaningful experience that drives action and fosters change. As data storytellers, embracing this human-centric approach allows us to unlock the full potential of data and create narratives that truly inspire and inform.

Effective data storytelling isn’t a black box. You can test your data stories for effectiveness using the same research process we are using to test our hypothesis. While this requires additional time, you will make it back in the form of a stronger impact on your audience, provided your data story is shown to outperform a control message or other candidate messages that don’t incorporate the psychological traits of your audience.

Please feel free to use our method and provide any feedback on your experience to the author.

]]>
hello@smashingmagazine.com (Victor Yocco & Angelica Lo Duca)
<![CDATA[Human-Centered Design Through AI-Assisted Usability Testing: Reality Or Fiction?]]> https://smashingmagazine.com/2025/02/human-centered-design-ai-assisted-usability-testing/ https://smashingmagazine.com/2025/02/human-centered-design-ai-assisted-usability-testing/ Wed, 19 Feb 2025 10:00:00 GMT Unmoderated usability testing has been steadily growing more popular with the assistance of online UX research tools. Allowing participants to complete usability testing without a moderator, at their own pace and convenience, can have a number of advantages.

The first is liberation from a strict schedule and from the availability of moderators, meaning that many more participants can be recruited on a more cost-effective and quicker basis. It also lets your team see how users interact with your solution in their natural environment, with the setup of their own devices. Overcoming the challenges of distance and differences in time zones in order to obtain data from all around the globe also becomes much easier.

However, forgoing the use of moderators also has its drawbacks. The moderator brings flexibility, as well as a human touch into usability testing. Since they are in the same (virtual) space as the participants, the moderator usually has a good idea of what’s going on. They can react in real-time depending on what they witness the participant do and say. A moderator can carefully remind the participants to vocalize their thoughts. To the participant, thinking aloud in front of a moderator can also feel more natural than just talking to themselves. When the participant does something interesting, the moderator can prompt them for further comment.

Meanwhile, a traditional unmoderated study lacks such flexibility. In order to complete tasks, participants receive a fixed set of instructions. Once they are done, they can be asked to complete a static questionnaire, and that’s it.

The feedback that the research & design team receives will be completely dependent on what information the participants provide on their own. Because of this, the phrasing of instructions and questions in unmoderated testing is crucial. However, even if everything is planned out perfectly, the lack of adaptive questioning means that a lot of information will remain unsaid, especially with regular people who are not trained in providing user feedback.

If the usability test participant misunderstands a question or doesn’t answer completely, the moderator can always ask for a follow-up to get more information. A question then arises: Could something like that be handled by AI to upgrade unmoderated testing?

Generative AI could present a new, potentially powerful tool for addressing this dilemma once we consider its current capabilities. Large language models (LLMs), in particular, can hold conversations that appear almost humanlike. If LLMs could be incorporated into usability testing to interactively enhance the collection of data by conversing with the participant, they might significantly augment the ability of researchers to obtain detailed personal feedback from great numbers of people. With human participants as the source of the actual feedback, this is an excellent example of human-centered AI, as it keeps humans in the loop.

There are quite a number of gaps in the research on AI in UX. To help fill them, we at UXtweak Research have conducted a case study aimed at investigating whether AI could generate follow-up questions that are meaningful and result in valuable answers from the participants.

Asking participants follow-up questions to extract more in-depth information is just one portion of the moderator’s responsibilities. However, it is a reasonably-scoped subproblem for our evaluation since it encapsulates the ability of the moderator to react to the context of the conversation in real time and to encourage participants to share salient information.

Experiment Spotlight: Testing GPT-4 In Real-Time Feedback

The focus of our study was on the underlying principles rather than any specific commercial AI solution for unmoderated usability testing. After all, AI models and prompts are being tuned constantly, so findings that are too narrow may become irrelevant in a week or two after a new version gets updated. However, since AI models are also a black box based on artificial neural networks, the method by which they generate their specific output is not transparent.

Our results can show what you should be wary of to verify that an AI solution that you use can actually deliver value rather than harm. For our study, we used GPT-4, which at the time of the experiment was the most up-to-date model by OpenAI, also capable of fulfilling complex prompts (and, in our experience, dealing with some prompts better than the more recent GPT-4o).

In our experiment, we conducted a usability test with a prototype of an e-commerce website. The tasks involved the common user flow of purchasing a product.

Note: See our article published in the International Journal of Human-Computer Interaction for more detailed information about the prototype, tasks, questions, and so on.

In this setting, we compared the results with three conditions:

  1. A regular static questionnaire made up of three pre-defined questions (Q1, Q2, Q3), serving as an AI-free baseline. Q1 was open-ended, asking the participants to narrate their experiences during the task. Q2 and Q3 can be considered non-adaptive follow-ups to Q1 since they asked participants more directly about usability issues and to identify things that they did not like.
  2. The question Q1, serving as a seed for up to three GPT-4-generated follow-up questions as the alternative to Q2 and Q3.
  3. All three pre-defined questions, Q1, Q2, and Q3, each used as a seed for its own GPT-4 follow-up.

The following prompt was used to generate the follow-up questions:

To assess the impact of the AI follow-up questions, we then compared the results on both a quantitative and a qualitative basis. One of the measures that we analyzed is informativeness — ratings of the responses based on how useful they are at elucidating new usability issues encountered by the user.

As seen in the figure below, the informativeness dropped significantly between the seed questions and their AI follow-up. The follow-ups rarely helped identify a new issue, although they did help elaborate further details.

The emotional reactions of the participants offer another perspective on AI-generated follow-up questions. Our analysis of the prevailing emotional valence based on the phrasing of answers revealed that, at first, the answers started with a neutral sentiment. Afterward, the sentiment shifted toward the negative.

In the case of the pre-defined questions Q2 and Q3, this could be seen as natural. While the seed question Q1 was open-ended, asking the participants to explain what they did during the task, Q2 and Q3 focused more on the negative — usability issues and other disliked aspects. Curiously, the follow-up chains generally received even more negative receptions than their seed questions, and not for the same reason.

Frustration was common as participants interacted with the GPT-4-driven follow-up questions. This is rather critical, considering that frustration with the testing process can sidetrack participants from taking usability testing seriously, hinder meaningful feedback, and introduce a negative bias.

A major aspect that participants were frustrated with was redundancy. Repetitiveness, such as re-explaining the same usability issue, was quite common. While pre-defined follow-up questions yielded 27-28% of repeated answers (it’s likely that participants already mentioned aspects they disliked during the open-ended Q1), AI-generated questions yielded 21%.

That’s not that much of an improvement, given that the comparison is made to questions that literally could not adapt to prevent repetition at all. Furthermore, when AI follow-up questions were added to obtain more elaborate answers for every pre-defined question, the repetition ratio rose further to 35%. In the variant with AI, participants also rated the questions as significantly less reasonable.

Answers to AI-generated questions contained a lot of statements like “I already said that” and “The obvious AI questions ignored my previous responses.”

The prevalence of repetition within the same group of questions (the seed question, its follow-up questions, and all of their answers) can be seen as particularly problematic since the GPT-4 prompt had been provided with all the information available in this context. This demonstrates that a number of the follow-up questions were not sufficiently distinct and lacked the direction that would warrant them being asked.

Insights From The Study: Successes And Pitfalls

To summarize the usefulness of AI-generated follow-up questions in usability testing, there are both good and bad points.

Successes:

  • Generative AI (GPT-4) excels at refining participant answers with contextual follow-ups.
  • Depth of qualitative insights can be enhanced.

Challenges:

  • Limited capacity to uncover new issues beyond pre-defined questions.
  • Participants can easily grow frustrated with repetitive or generic follow-ups.

While extracting answers that are a bit more elaborate is a benefit, it can be easily overshadowed if the lack of question quality and relevance is too distracting. This can potentially inhibit participants’ natural behavior and the relevance of feedback if they’re focusing on the AI.

Therefore, in the following section, we discuss what to be careful of, whether you are picking an existing AI tool to assist you with unmoderated usability testing or implementing your own AI prompts or even models for a similar purpose.

Recommendations For Practitioners

Context is the end-all and be-all when it comes to the usefulness of follow-up questions. Most of the issues that we identified with the AI follow-up questions in our study can be tied to the ignorance of proper context in one shape or another.

Based on real blunders that GPT-4 made while generating questions in our study, we have meticulously collected and organized a list of the types of context that these questions were missing. Whether you’re looking to use an existing AI tool or are implementing your own system to interact with participants in unmoderated studies, you are strongly encouraged to use this list as a high-level checklist. With it as the guideline, you can assess whether the AI models and prompts at your disposal can ask reasonable, context-sensitive follow-up questions before you entrust them with interacting with real participants.

Without further ado, these are the relevant types of context:

  • General Usability Testing Context.
    The AI should incorporate standard principles of usability testing in its questions. This may appear obvious, and it actually is. But it needs to be said, given that we have encountered issues related to this context in our study. For example, the questions should not be leading, ask participants for design suggestions, or ask them to predict their future behavior in completely hypothetical scenarios (behavioral research is much more accurate for that).
  • Usability Testing Goal Context.
    Different usability tests have different goals depending on the stage of the design, business goals, or features being tested. Each follow-up question and the participant’s time used in answering it are valuable resources. They should not be wasted on going off-topic. For example, in our study, we were evaluating a prototype of a website with placeholder photos of a product. When the AI starts asking participants about their opinion of the displayed fake products, such information is useless to us.
  • User Task Context.
    Whether the tasks in your usability testing are goal-driven or open and exploratory, their nature should be properly reflected in follow-up questions. When the participants have freedom, follow-up questions could be useful for understanding their motivations. By contrast, if your AI tool foolishly asks the participants why they did something closely related to the task (e.g., placing the specific item they were supposed to buy into the cart), you will seem just as foolish by association for using it.
  • Design Context.
    Detailed information about the tested design (e.g., prototype, mockup, website, app) can be indispensable for making sure that follow-up questions are reasonable. Follow-up questions should require input from the participant. They should not be answerable just by looking at the design. Interesting aspects of the design could also be reflected in the topics to focus on. For example, in our study, the AI would occasionally ask participants why they believed a piece of information that was very prominently displayed in the user interface, making the question irrelevant in context.
  • Interaction Context.
    If Design Context tells you what the participant could potentially see and do during the usability test, Interaction Context comprises all their actual actions, including their consequences. This could incorporate the video recording of the usability test, as well as the audio recording of the participant thinking aloud. The inclusion of interaction context would allow follow-up questions to build on the information that the participant already provided and to further clarify their decisions. For example, if a participant does not successfully complete a task, follow-up questions could be directed at investigating the cause, even as the participant continues to believe they have fulfilled their goal.
  • Previous Question Context.
    Even when the questions you ask them are mutually distinct, participants can find logical associations between various aspects of their experience, especially since they don’t know what you will ask them next. A skilled moderator may decide to skip a question that a participant already answered as part of another question, instead focusing on further clarifying the details. AI follow-up questions should be capable of doing the same to avoid the testing from becoming a repetitive slog.
  • Question Intent Context.
    Participants routinely answer questions in a way that misses their original intent, especially if the question is more open-ended. A follow-up can spin the question from another angle to retrieve the intended information. However, if the participant’s answer is technically a valid reply but only to the word rather than the spirit of the question, the AI can miss this fact. Clarifying the intent could help address this.

When assessing a third-party AI tool, a question to ask is whether the tool allows you to provide all of the contextual information explicitly.

If AI does not have an implicit or explicit source of context, the best it can do is make biased and untransparent guesses that can result in irrelevant, repetitive, and frustrating questions.

Even if you can provide the AI tool with the context (or if you are crafting the AI prompt yourself), that does not necessarily mean that the AI will do as you expect, apply the context in practice, and approach its implications correctly. For example, as demonstrated in our study, when a history of the conversation was provided within the scope of a question group, there was still a considerable amount of repetition.

The most straightforward way to test the contextual responsiveness of a specific AI model is simply by conversing with it in a way that relies on context. Fortunately, most natural human conversation already depends on context heavily (saying everything would take too long otherwise), so that should not be too difficult. What is key is focusing on the varied types of context to identify what the AI model can and cannot do.

The seemingly overwhelming number of potential combinations of varied types of context could pose the greatest challenge for AI follow-up questions.

For example, human moderators may decide to go against the general rules by asking less open-ended questions to obtain information that is essential for the goals of their research while also understanding the tradeoffs.

In our study, we have observed that if the AI asked questions that were too generically open-ended as a follow-up to seed questions that were open-ended themselves, without a significant enough shift in perspective, this resulted in repetition, irrelevancy, and — therefore — frustration.

The fine-tuning of the AI models to achieve an ability to resolve various types of contextual conflict appropriately could be seen as a reliable metric by which the quality of the AI generator of follow-up questions could be measured.

Researcher control is also key since tougher decisions that are reliant on the researcher’s vision and understanding should remain firmly in the researcher’s hands. Because of this, a combination of static and AI-driven questions with complementary strengths and weaknesses could be the way to unlock richer insights.

A focus on contextual sensitivity validation can be seen as even more important while considering the broader social aspects. Among certain people, the trend-chasing and the general overhype of AI by the industry have led to a backlash against AI. AI skeptics have a number of valid concerns, including usefulness, ethics, data privacy, and the environment. Some usability testing participants may be unaccepting or even outwardly hostile toward encounters with AI.

Therefore, for the successful incorporation of AI into research, it will be essential to demonstrate it to the users as something that is both reasonable and helpful. Principles of ethical research remain as relevant as ever. Data needs to be collected and processed with the participant’s consent and not breach the participant’s privacy (e.g. so that sensitive data is not used for training AI models without permission).

Conclusion: What’s Next For AI In UX?

So, is AI a game-changer that could break down the barrier between moderated and unmoderated usability research? Maybe one day. The potential is certainly there. When AI follow-up questions work as intended, the results are exciting. Participants can become more talkative and clarify potentially essential details.

To any UX researcher who’s familiar with the feeling of analyzing vaguely phrased feedback and wishing that they could have been there to ask one more question to drive the point home, an automated solution that could do this for them may seem like a dream. However, we should also exercise caution since the blind addition of AI without testing and oversight can introduce a slew of biases. This is because the relevance of follow-up questions is dependent on all sorts of contexts.

Humans need to keep holding the reins in order to ensure that the research is based on actual solid conclusions and intents. The opportunity lies in the synergy that can arise from usability researchers and designers whose ability to conduct unmoderated usability testing could be significantly augmented.

Humans + AI = Better Insights

The best approach to advocate for is likely a balanced one. As UX researchers and designers, humans should continue to learn how to use AI as a partner in uncovering insights. This article can serve as a jumping-off point, providing a list of the AI-driven technique’s potential weak points to be aware of, to monitor, and to improve on.

]]>
hello@smashingmagazine.com (Eduard Kuric)
<![CDATA[How OWASP Helps You Secure Your Full-Stack Web Applications]]> https://smashingmagazine.com/2025/02/how-owasp-helps-secure-full-stack-web-applications/ https://smashingmagazine.com/2025/02/how-owasp-helps-secure-full-stack-web-applications/ Tue, 18 Feb 2025 08:00:00 GMT Security can be an intimidating topic for web developers. The vocabulary is rich and full of acronyms. Trends evolve quickly as hackers and analysts play a perpetual cat-and-mouse game. Vulnerabilities stem from little details we cannot afford to spend too much time on during our day-to-day operations.

JavaScript developers already have a lot to take in with the emergence of a new wave of innovative architectures, such as React Server Components, Next.js App Router, or Astro islands.

So, let’s have a focused approach. What we need is to be able to detect and mitigate the most common security issues. A top ten of the most common vulnerabilities would be ideal.

Meet The OWASP Top 10

Guess what: there happens to be such a top ten of the most common vulnerabilities, curated by experts in the field!

It is provided by the OWASP Foundation, and it’s an extremely valuable resource for getting started with security.

OWASP stands for “Open Worldwide Application Security Project.” It’s a nonprofit foundation whose goal is to make software more secure globally. It supports many open-source projects and produces high-quality education resources, including the OWASP top 10 vulnerabilities list.

We will dive through each item of the OWASP top 10 to understand how to recognize these vulnerabilities in a full-stack application.

Note: I will use Next.js as an example, but this knowledge applies to any similar full-stack architecture, even outside of the JavaScript ecosystem.

Let’s start our countdown towards a safer web!

Number 10: Server-Side Request Forgery (SSRF)

You may have heard about Server-Side Rendering, aka SSR. Well, you can consider SSRF to be its evil twin acronym.

Server-Side Request Forgery can be summed up as letting an attacker fire requests using your backend server. Besides hosting costs that may rise, the main problem is that the attacker will benefit from your server’s level of accreditation. In a complex architecture, this means being able to target your internal private services using your own corrupted server.

Here is an example. Our app lets a user input a URL and summarizes the content of the target page server-side using an AI SDK. A mischievous user passes localhost:3000 as the URL instead of a website they’d like to summarize. Your server will fire a request against itself or any other service running on port 3000 in your backend infrastructure. This is a severe SSRF vulnerability!

You’ll want to be careful when firing requests based on user inputs, especially server-side.
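
To make this concrete, here is a minimal sketch of such a validation step, assuming a Next.js App Router route handler (the endpoint path and checks are illustrative, not a complete defense). Note that blocklists like this one can be bypassed through DNS tricks, so an allowlist of permitted hosts is the more robust approach whenever your use case permits it:

```typescript
// app/api/summarize/route.ts (hypothetical endpoint for the example above)
import { NextResponse } from "next/server";

// Hostnames that should never be reachable from user-supplied URLs.
const BLOCKED_HOSTNAMES = new Set([
  "localhost",
  "127.0.0.1",
  "0.0.0.0",
  "[::1]",
  "169.254.169.254", // common cloud metadata endpoint
]);

function isSafeUrl(raw: string): boolean {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return false; // not a valid absolute URL
  }
  // Only allow plain HTTP(S): no file://, ftp://, and so on.
  if (url.protocol !== "http:" && url.protocol !== "https:") return false;
  if (BLOCKED_HOSTNAMES.has(url.hostname)) return false;
  // Reject private IPv4 ranges (simplified; an allowlist is more robust).
  if (/^(10\.|192\.168\.|172\.(1[6-9]|2\d|3[01])\.)/.test(url.hostname)) {
    return false;
  }
  return true;
}

export async function POST(request: Request) {
  const { url } = await request.json();
  if (typeof url !== "string" || !isSafeUrl(url)) {
    return NextResponse.json({ error: "Invalid URL" }, { status: 400 });
  }
  const page = await fetch(url); // the target has been validated above
  // ...pass the page content to the summarizer here...
  return NextResponse.json({ length: (await page.text()).length });
}
```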

Number 9: Security Logging And Monitoring Failures

I wish we could establish a telepathic connection with our beloved Node.js server running in the backend. Instead, the best thing we have to see what happens in the cloud is a dreadful stream of unstructured pieces of text we name “logs.”

Yet we will have to deal with that, not only for debugging or performance optimization but also because logs are often the only information you’ll get to discover and remediate a security issue.

As a starter, you might want to focus on logging the most important transactions of your application exactly like you would prioritize writing end-to-end tests. In most applications, this means login, signup, payouts, mail sending, and so on. In a bigger company, a more complete telemetry solution is a must-have, such as OpenTelemetry, Sentry, or Datadog.
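
As an illustration, here is a minimal sketch of what logging such a transaction could look like, using pino as one example of a structured logger (the event name and fields are hypothetical):

```typescript
import pino from "pino";

const logger = pino();

// Log security-relevant transactions with enough structure
// to search and alert on them later.
export function logLoginAttempt(userId: string, success: boolean, ip: string) {
  logger.info(
    { event: "auth.login", userId, success, ip },
    success ? "User logged in" : "Failed login attempt"
  );
}
```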

If you are using React Server Components, you may need to set up a proper logging strategy anyway since it’s not possible to debug them directly from the browser as we used to do for Client components.

Number 8: Software And Data Integrity Failures

The OWASP top 10 vulnerabilities tend to have various levels of granularity, and this one is really a big family. I’d like to focus on supply chain attacks, as they have gained a lot of popularity over the years.

You may have heard about the Log4j vulnerability. It was highly publicized, very critical, and heavily exploited by hackers. It was a massive supply chain attack.

In the JavaScript ecosystem, you most probably install your dependencies using NPM. Before picking dependencies, you might want to craft yourself a small list of health indicators.

  • Is the library maintained and tested with proper code?
  • Does it play a critical role in my application?
  • Who is the main contributor?
  • Did I spell it right when installing?

For more serious business, you might want to consider setting up a Software Composition Analysis (SCA) solution; GitHub’s Dependabot is a free one, and Snyk and Datadog are other well-known actors.
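
If you host your code on GitHub, enabling Dependabot can be as simple as committing a small configuration file. A minimal sketch (the weekly schedule is an assumption; tune it to your project):

```yaml
# .github/dependabot.yml
# Minimal configuration: check npm dependencies for updates weekly.
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
```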

Number 7: Identification And Authentication Failures

Here is a stereotypical vulnerability belonging to this category: your admin password is leaked. A hacker finds it. Boom, game over.

Password management procedures are beyond the scope of this article, but in the context of full-stack web development, let’s dive deep into how we can prevent brute force attacks using Next.js edge middlewares.

Middlewares are tiny proxies written in JavaScript. They process requests in a way that is supposed to be very, very fast, faster than a normal Node.js endpoint, for example. They are a good fit for handling low-level processing, like blocking malicious IPs or redirecting users towards the correct translation of a page.

One interesting use case is rate limiting. You can quickly improve the security of your applications by limiting people’s ability to spam your POST endpoints, especially login and signup.
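
Here is a minimal sketch of what such rate limiting could look like in a Next.js edge middleware. The limits and endpoint paths are illustrative, and the in-memory store is only for demonstration; in production, you would back the counters with a shared store such as Redis, since edge instances do not share memory:

```typescript
// middleware.ts (illustrative sketch)
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

// In-memory counters: fine for a demo, but production code
// needs a shared store because edge instances do not share memory.
const hits = new Map<string, { count: number; resetAt: number }>();
const WINDOW_MS = 60_000; // one-minute window
const LIMIT = 10; // max POST requests per window and IP

export function middleware(request: NextRequest) {
  if (request.method !== "POST") return NextResponse.next();

  const ip = request.headers.get("x-forwarded-for") ?? "unknown";
  const now = Date.now();
  const entry = hits.get(ip);

  if (!entry || now > entry.resetAt) {
    hits.set(ip, { count: 1, resetAt: now + WINDOW_MS });
    return NextResponse.next();
  }

  entry.count += 1;
  if (entry.count > LIMIT) {
    return new NextResponse("Too many requests", { status: 429 });
  }
  return NextResponse.next();
}

// Only run this middleware on the authentication endpoints.
export const config = { matcher: ["/api/login", "/api/signup"] };
```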

You may go even further by setting up a Web Application Firewall (WAF). A WAF lets developers implement elaborate security rules. This is not something you would set up directly in your application but rather at the host level. For instance, Vercel released its own WAF in 2024.

Number 6: Vulnerable And Outdated Components

We have discussed supply chain attacks earlier. Outdated components are a variation of this vulnerability, where you actually are the person to blame. Sorry about that.

Security vulnerabilities are often discovered ahead of time by diligent security analysts before a mean attacker can even start thinking about exploiting them. Thanks, analyst friends! When this happens, they file a Common Vulnerabilities and Exposures (CVE) report and store it in a public database.

The remedy is the same as for supply chain attacks: set up an SCA solution like Dependabot that will regularly check for the use of vulnerable packages in your application.

Halfway Break

I just want to mention at this point how much progress we have made since the beginning of this article. To sum it up:

  • We know how to recognize an SSRF. This is a nasty vulnerability, and it is easy to accidentally introduce while crafting a super cool feature.
  • We have identified monitoring and dependency analysis solutions as important pieces of “support” software for securing applications.
  • We have figured out a good use case for Next.js edge middlewares: rate limiting our authentication endpoints to prevent brute force attacks.

It’s a good time to go grab a tea or coffee. But after that, come back with us because we are going to discover the five most common vulnerabilities affecting web applications!

Number 5: Security Misconfiguration

There are so many configurations that we can mismanage. But let’s focus on the most insightful ones for a web developer learning about security: HTTP headers.

You can use HTTP response headers to pass on a lot of information to the user’s browser about what’s possible or not on your website.

For example, by narrowing down the “Permissions-Policy” headers, you can claim that your website will never require access to the user’s camera. This is an extremely powerful protection mechanism in case of a script injection attack (XSS). Even if the hacker manages to run a malicious script in the victim’s browser, the latter will not allow the script to access the camera.
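
In a Next.js app, for instance, such headers can be set globally from the config file. Here is a minimal sketch; the exact policy values are assumptions to adapt to your site’s needs:

// next.config.js: a minimal sketch of hardening response headers
module.exports = {
  async headers() {
    return [
      {
        source: "/(.*)", // apply to every route
        headers: [
          // Deny camera and microphone access, even to injected scripts
          { key: "Permissions-Policy", value: "camera=(), microphone=()" },
          // Forbid embedding the site in iframes (clickjacking)
          { key: "X-Frame-Options", value: "DENY" },
        ],
      },
    ];
  },
};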

I invite you to observe the security configuration of any template or boilerplate that you use to craft your own websites. Do you understand them properly? Can you improve them? Answering these questions will inevitably lead you to vastly increase the safety of your websites!

Number 4: Insecure Design

I find this one funny, although a bit insulting for us developers.

Bad code is literally the fourth most common cause of vulnerabilities in web applications! You can’t just blame your infrastructure team anymore.

Design is actually not just about code but about the way we use our programming tools to produce software artifacts.

In the context of full-stack JavaScript frameworks, I would recommend learning how to use them idiomatically, the same way you’d want to learn a foreign language. It’s not just about translating what you already know word-by-word. You need to get a grasp of how a native speaker would phrase their thoughts.

Learning idiomatic Next.js is really, really hard. Trust me, I teach this framework to web developers. Next is all about client and server logic hybridization, and some patterns may not even transfer to competing frameworks with a different architecture like Astro.js or Remix.

Fortunately, the Next.js core team has produced many free learning resources, including articles and documentation specifically focused on security.

I recommend reading Sebastian Markbåge’s famous article “How to Think About Security in Next.js” as a starting point. If you use Next.js in a professional setting, consider organizing proper training sessions before you start working on high-stakes projects.

Number 3: Injection

Injections are the epitome of vulnerabilities, the quintessence of breaches, and the paragon of security issues. SQL injections are typically very famous, but JavaScript injections are also quite common. Despite being well-known vulnerabilities, injections are still in the top 3 in the OWASP ranking!

Injections are the reason why forcing a React component to render HTML is done through an unwelcoming dangerouslySetInnerHTML prop.

React doesn’t want you to include user input that could contain a malicious script.

Here is a demonstration of an injection using images. It could target a message board, for instance. The attacker misused the image posting system. They passed a URL that points towards an API GET endpoint instead of an actual image. Whenever your website’s users see this post in their browser, an authenticated request is fired against your backend, triggering a payment!

As a bonus, having a GET endpoint that triggers side effects such as payments also constitutes a risk of Cross-Site Request Forgery (CSRF, which happens to be SSRF’s client-side cousin).

Even experienced developers can be caught off-guard. Are you aware that dynamic route parameters are user inputs? For instance, [language]/page.jsx in a Next.js or Astro app. I often see clumsy attack attempts in the logs, such as “language” being replaced by a path traversal attempt like ../../../../passwords.txt.

Zod is a very popular library for running server-side data validation of user inputs. You can add a transform step to sanitize inputs included in database queries, or that could land in places where they end up being executed as code.
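
As a minimal sketch (the schema and helper are hypothetical), validating and sanitizing the [language] parameter from the example above could look like this:

import { z } from "zod";

// Accept only short language codes like "en" or "pt-BR";
// a path traversal attempt will simply fail validation.
const languageSchema = z
  .string()
  .regex(/^[a-zA-Z]{2}(-[a-zA-Z]{2})?$/, "Invalid language code")
  .transform((value) => value.toLowerCase()); // sanitize before use

function parseLanguage(input) {
  const result = languageSchema.safeParse(input);
  if (!result.success) {
    throw new Error("Rejected suspicious route parameter");
  }
  return result.data;
}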

Number 2: Cryptographic Failures

A typical discussion between two developers that are in deep, deep trouble:

— We have leaked our database and encryption key. What algorithm was used to encrypt the password again? AES-128 or SHA-512?
— I don’t know, aren’t they the same thing? They transform passwords into gibberish, right?
— Alright. We are in deep, deep trouble.

This vulnerability mostly concerns backend developers who have to deal with sensitive personally identifiable information (PII) or passwords.

To be honest, I don’t know much about these algorithms; I studied computer science way too long ago.

The only thing I remember is that you need non-reversible algorithms to protect passwords, aka hashing algorithms. The point is that even if the hashed passwords are leaked, it will still be super hard to hack an account (you can’t just reverse a hash).
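
For the curious, here is a minimal sketch of such a non-reversible setup using Node’s built-in scrypt (the function names are illustrative):

import { randomBytes, scryptSync, timingSafeEqual } from "node:crypto";

// Hash with a random salt; the result cannot be reversed
// back into the original password.
function hashPassword(password) {
  const salt = randomBytes(16).toString("hex");
  const hash = scryptSync(password, salt, 64).toString("hex");
  return `${salt}:${hash}`;
}

// To verify, re-hash the candidate with the same salt and compare
// in constant time to avoid leaking information via timing.
function verifyPassword(password, stored) {
  const [salt, hash] = stored.split(":");
  const candidate = scryptSync(password, salt, 64);
  return timingSafeEqual(candidate, Buffer.from(hash, "hex"));
}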

In the State of JavaScript survey, we use passwordless authentication with an email magic link and store one-way hashes of emails, so even as admins, we cannot recover a user’s email address from our database.

And number 1 is...

Such suspense! We are about to discover that the number one vulnerability in the world of web development is...

Broken Access Control! Tada.

Yeah, the name is not super insightful, so let me rephrase it. It’s about people being able to access other people’s accounts or people being able to access resources they are not allowed to. That’s more impressive when put this way.

A while ago, I wrote an article about the fact that checking authorization within a layout may leave page content unprotected in Next.js. It’s not a flaw in the framework’s design but a consequence of how React Server Components have a different model than their client counterparts, which then affects how the layout works in Next.

Here is a demo of how you can implement a paywall in Next.js that doesn’t protect anything.

// app/layout.jsx
import { cookies } from "next/headers";
import { redirect } from "next/navigation";
import { db } from "./lib/db"; // hypothetical data-access layer

// Using cookie-based authentication as usual
async function checkPaid() {
  const token = cookies().get("auth_token")?.value;
  return await db.hasPayments(token);
}

// Running the payment check in a layout to apply it to all pages
// Sadly, this is not how Next.js works!
export default async function Layout({ children }) {
  // ❌ this won't work as expected!!
  const hasPaid = await checkPaid();
  if (!hasPaid) redirect("/subscribe");
  // then render the underlying page
  return <div>{children}</div>;
}

// app/page.jsx
// ❌ this can be accessed directly
// by adding “RSC=1” to the request that fetches it!
export default function Page() {
  return <div>PAID CONTENT</div>;
}
What We Have Learned From The Top 5 Vulnerabilities

Most common vulnerabilities are tightly related to application design issues:

  • Copy-pasting configuration without really understanding it.
  • Having an improper understanding of the inner workings of the framework we use. Next.js is a complex beast and doesn’t make our life easier on this point!
  • Picking an algorithm that is not suited for a given task.

These vulnerabilities are tough ones because they confront us with our own limits as web developers. Nobody is perfect, and even the most experienced developers will inevitably write vulnerable code at some point in their lives without even noticing.

How do you prevent that? By not staying alone! When in doubt, ask fellow developers; chances are that someone has faced the same issue and can lead you to the right solution.

Where To Head Now?

First, I must insist that you have already done a great job of improving the security of your applications by reading this article. Congratulations!

Most hackers rely on a volume strategy and are not particularly skilled, so they struggle when confronted with educated developers who can spot and fix the most common vulnerabilities.

From there, I can suggest a few directions to get even better at securing your web applications:

  • Try to apply the OWASP top 10 to an application you know well, either a personal project, your company’s codebase, or an open-source solution.
  • Give some third-party security tools a shot. They tend to flood developers with too much information, but keep in mind that most actors in the security field are aware of this issue and work actively to provide more focused vulnerability alerts.
  • I’ve added my favorite security-related resources at the end of the article, so you’ll have plenty to read!

Thanks for reading, and stay secure!

Resources For Further Learning

This article is inspired by my talk at React Advanced London 2024, “Securing Server-Rendered Applications: Next.js case,” which is available to watch as a replay online.

]]>
hello@smashingmagazine.com (Eric Burel)
<![CDATA[How To Test And Measure Content In UX]]> https://smashingmagazine.com/2025/02/how-to-test-and-measure-content-in-ux/ https://smashingmagazine.com/2025/02/how-to-test-and-measure-content-in-ux/ Thu, 13 Feb 2025 08:00:00 GMT Content testing is a simple way to test the clarity and understanding of the content on a page — be it a paragraph of text, a user flow, a dashboard, or anything in between. Our goal is to understand how well users actually perceive the content that we present to them.

It’s not only about finding pain points and things that cause confusion or hinder users from finding the right answer on a page but also about whether our content clearly and precisely articulates what we actually want to communicate.

This article is part of our ongoing series on UX. You can find more details on design patterns and UX strategy in Smart Interface Design Patterns 🍣 — with live UX training coming up soon. Free preview.

Banana Testing

A great way to test how well your design matches a user’s mental model is Banana Testing. We replace all key actions with the word “Banana,” then ask users to suggest what each action could prompt.

Not only does it tell you whether key actions are understood immediately and placed in the right spot, but also whether your icons are helpful and whether interactive elements such as links or buttons are perceived as such.

Content Heatmapping

One reliable technique to assess content is content heatmapping. The way we would use it is by giving participants a task, then asking them to highlight things that are clear or confusing. We could define any other dimensions or lenses as well: e.g., phrases that inspire more or less confidence.

Then we map all highlights into a heatmap to identify patterns and trends. You could run it with print-outs in person, but it could also happen in FigJam or Miro remotely — as long as your tool of choice has a highlighter feature.

Run Moderated Testing Sessions

These little techniques above help you discover content issues, but they don’t tell you what is missing in the content and what doubts, concerns, and issues users have with it. For that, we need to uncover user needs in more detail.

Too often, users say that a page is “clear and well-organized,” but when you ask them specific questions, you notice that their understanding is vastly different from what you were trying to bring into the spotlight.

Such insights rarely surface in unmoderated sessions — it’s much more effective to observe behavior and ask questions on the spot, be it in person or remote.

Test Concepts, Not Words

Before testing, we need to know what we want to learn. First, write up a plan with goals, customers, questions, and a script. Don’t test tweaked wording alone — testing broader concepts is better. In the session, avoid asking participants to read or think aloud, as that’s usually not how people consume content. Ask questions and wait silently.

After the task is completed, ask users to explain the product, flow, and concepts to you. But: don’t ask them what they like, prefer, feel, or think. And whenever possible, avoid the word “content” in testing as users often perceive it differently.

Choosing The Right Way To Test

There are plenty of different tests that you could use:

  • Banana test 🍌
    Replace key actions with “bananas,” ask to explain.
  • Cloze test 🕳️
    Remove words from your copy, ask users to fill in the blanks.
  • Reaction cards 🤔
    Write up emotions on 25 cards, ask users to choose.
  • Card sorting 🃏
    Ask users to group topics into meaningful categories.
  • Highlighting 🖍️
    Ask users to highlight helpful or confusing words.
  • Competitive testing 🥊
    Ask users to explain competitors’ pages.

When choosing the right way to test, consider the following guidelines:

  • Do users understand?
    Interviews, highlighting, Cloze test
  • Do we match the mental model?
    Banana testing, Cloze test
  • What word works best?
    Card sorting, A/B testing, tree testing
  • Why doesn’t it work?
    Interviews, highlighting, walkthroughs
  • Do we know user needs?
    Competitive testing, process mapping
Wrapping Up

In many tasks, there is rarely anything more impactful than the careful selection of words on a page. However, it’s not only the words themselves that matter but also the voice and tone that you choose to communicate with customers.

Use the techniques above to test and measure how well people perceive content but also check how they perceive the end-to-end experience on the site.

Quite often, even the right words, used incorrectly on a key page, can convey the wrong message or provide a suboptimal experience. Even though the rest of the product might perform remarkably well, if a user is blocked on a critical page, they will be gone before you even blink.

Useful Resources

New: How To Measure UX And Design Impact

Meet Measure UX & Design Impact (8h), a new practical guide for designers and UX leads to measure and show your UX impact on business. Use the code 🎟 IMPACT to save 20% off today. Jump to the details.

Video + UX Training

$495.00 $799.00 Get Video + UX Training

25 video lessons (8h) + Live UX Training.
100-day money-back guarantee.

Video only

$250.00 $395.00
Get the video course

25 video lessons (8h). Updated yearly.
Also available as a UX Bundle with 2 video courses.

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[Time To First Byte: Beyond Server Response Time]]> https://smashingmagazine.com/2025/02/time-to-first-byte-beyond-server-response-time/ https://smashingmagazine.com/2025/02/time-to-first-byte-beyond-server-response-time/ Wed, 12 Feb 2025 17:00:00 GMT This article is a sponsored by DebugBear

Loading your website HTML quickly has a big impact on visitor experience. After all, no page content can be displayed until after the first chunk of the HTML has been loaded. That’s why the Time to First Byte (TTFB) metric is important: it measures how soon after navigation the browser starts receiving the HTML response.

Generating the HTML document quickly plays a big part in minimizing TTFB delays. But actually, there’s a lot more to optimizing this metric. In this article, we’ll take a look at what else can cause poor TTFB and what you can do to fix it.

What Components Make Up The Time To First Byte Metric?

TTFB stands for Time to First Byte. But where does it measure from?

Different tools handle this differently. Some only count the time spent sending the HTTP request and getting a response, ignoring everything else that needs to happen first before the resource can be loaded. However, when looking at Google’s Core Web Vitals, TTFB starts from the time when the users start navigating to a new page. That means TTFB includes:

  • Cross-origin redirects,
  • Time spent connecting to the server,
  • Same-origin redirects, and
  • The actual request for the HTML document.
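
You can inspect these components for the current page yourself via the Navigation Timing API. A minimal sketch (values are in milliseconds, relative to the start of the navigation):

const [nav] = performance.getEntriesByType("navigation");
if (nav) {
  console.log("TTFB:", nav.responseStart);
  // The pieces that add up to it:
  console.log("Redirects:", nav.redirectEnd - nav.redirectStart);
  console.log("DNS:", nav.domainLookupEnd - nav.domainLookupStart);
  console.log("Connect (TCP + TLS):", nav.connectEnd - nav.connectStart);
}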

We can see an example of this in this request waterfall visualization.

The server response time here is only 183 milliseconds, or about 12% of the overall TTFB metric. Half of the time is instead spent on a cross-origin redirect — a separate HTTP request that returns a redirect response before we can even make the request that returns the website’s HTML code. And when we make that request, most of the time is spent on establishing the server connection.

Connecting to a server on the web typically takes three round trips on the network:

  1. DNS: Looking up the server IP address.
  2. TCP: Establishing a reliable connection to the server.
  3. TLS: Creating a secure encrypted connection.
What Network Latency Means For Time To First Byte

Let’s add up all the network round trips in the example above:

  • 2 server connections: 6 round trips.
  • 2 HTTP requests: 2 round trips.

That means that before we even get the first response byte for our page we actually have to send data back and forth between the browser and a server eight times!

That’s where network latency comes in, or network round trip time (RTT) if we look at the time it takes to send data to a server and receive a response in the browser. On a high-latency connection with a 150 millisecond RTT, making those eight round trips will take 1.2 seconds. So, even if the server always responds instantly, we can’t get a TTFB lower than that number.

Network latency depends a lot on the geographic distances between the visitor’s device and the server the browser is connecting to. You can see the impact of that in practice by running a global TTFB test on a website. Here, I’ve tested a website that’s hosted in Brazil. We get good TTFB scores when testing from Brazil and the US East Coast. However, visitors from Europe, Asia, or Australia wait a while for the website to load.

What Content Delivery Networks Mean For Time To First Byte

One way to speed up your website is by using a Content Delivery Network (CDN). These services provide a network of globally distributed server locations. Instead of each round trip going all the way to where your web application is hosted, browsers instead connect to a nearby CDN server (called an edge node). That greatly reduces the time spent on establishing the server connection, improving your overall TTFB metric.

By default, the actual HTML request still has to be sent to your web app. However, if your content isn’t dynamic, you can also cache responses at the CDN edge node. That way, the request can be served entirely through the CDN instead of data traveling all across the world.

If we run a TTFB test on a website that uses a CDN, we can see that each server response comes from a regional data center close to where the request was made. In many cases, we get a TTFB of under 200 milliseconds, thanks to the response already being cached at the edge node.

How To Improve Time To First Byte

What you need to do to improve your website’s TTFB score depends on what its biggest contributing component is.

  • A lot of time is spent establishing the connection: Use a global CDN.
  • The server response is slow: Optimize your application code or cache the response.
  • Redirects delay TTFB: Avoid chaining redirects and optimize the server returning the redirect response.

Keep in mind that TTFB depends on how visitors are accessing your website. For example, if they are logged into your application, the page content probably can’t be served from the cache. You may also see a spike in TTFB when running an ad campaign, as visitors are redirected through a click-tracking server.

Monitor Real User Time To First Byte

If you want to get a breakdown of what TTFB looks like for different visitors on your website, you need real user monitoring. That way, you can break down how visitor location, login status, or the referrer domain impact real user experience.

DebugBear can help you collect real user metrics for Time to First Byte, Google Core Web Vitals, and other page speed metrics. You can track individual TTFB components like TCP duration or redirect time and break down website performance by country, ad campaign, and more.

Conclusion

By looking at everything that’s involved in serving the first byte of a website to a visitor, we’ve seen that just reducing server response time isn’t enough and often won’t even be the most impactful change you can make on your website.

Just because your website is fast in one location doesn’t mean it’s fast for everyone, as website speed varies based on where the visitor is accessing your site from.

Content Delivery Networks are an incredibly powerful way to improve TTFB. Even if you don’t use any of their advanced features, just using their global server network saves a lot of time when establishing a server connection.

]]>
hello@smashingmagazine.com (Matt Zeunert)
<![CDATA[Taking RWD To The Extreme]]> https://smashingmagazine.com/2025/02/taking-rwd-to-the-extreme/ https://smashingmagazine.com/2025/02/taking-rwd-to-the-extreme/ Fri, 07 Feb 2025 13:00:00 GMT When Ethan Marcotte conceived RWD, web technologies were far less mature than today. As web developers, we started to grasp how to do things with floats after years of stuffing everything inside table cells. There weren’t many possible ways to achieve a responsive site. There were two of them: fluid grids (based on percentages) and media queries, which were a hot new thing back then.

What was lacking was a real layout system that would allow us to lay things out on a page instead of improvising with floating content. We had to wait several years for Flexbox to appear. And CSS Grid followed that.

Undoubtedly, new layout systems native to the browser were groundbreaking 10 years ago. They were revolutionary enough to usher in a new era. In her talk “Everything You Know About Web Design Just Changed” at the An Event Apart conference in 2019, Jen Simmons proposed a name for it: Intrinsic Web Design (IWD). Let’s disarm that fancy word first. According to the Merriam-Webster dictionary, intrinsic means “belonging to the essential nature or constitution of a thing.” In other words, IWD is a natural way of doing design for the web. And that boils down to using CSS layout systems for… laying out things. That’s it.

It does not sound that groundbreaking on its own. But it opens a lot of possibilities that weren’t earlier available with float-based layouts or table ones. We got the best things from both worlds: two-dimensional layouts (like tables with their rows and columns) with wrapping abilities (like floating content when there is not enough space for it). And there are even more goodies, like mixing fixed-sized content with fluid-sized content or intentionally overlapping elements:

Native layout systems are here to make the browser work for you — don’t hesitate to use that to your advantage.

Start With Semantic HTML

HTML is the backbone of the web. It’s the language that structures and formats the content for the user. And it comes with a huge bonus: it loads and displays to the user even if CSS and JavaScript fail to load for whatever reason. In other words, the website should still make sense to the user even if the CSS that provides the layout and the JavaScript that provides the interactivity are no-shows. A website is a text document, not so different from the one you can create in a text processor, like Word or LibreOffice Writer.

Semantic HTML also provides important accessibility features, like headings that are often used by screen-reader users for navigating pages. This is why starting not just with any markup but semantic markup for meaningful structure is a crucial step to embracing native web features.

Use Fluid Type With Fluid Space

We often need to adjust the font size of our content when the screen size changes. Smaller screens mean being able to display less content, and larger screens provide more affordance for additional content. This is why we ought to make content as fluid as possible, by which I mean the content should automatically adjust based on the screen’s size. A fluid typographic system optimizes the content’s legibility when it’s being viewed in different contexts.

Nowadays, we can achieve truly fluid type with one line of CSS, thanks to the clamp() function:

font-size: clamp(1rem, calc(1rem + 2.5vw), 6rem);

The maths involved in it goes quite above my head. Thankfully, there is a detailed article on fluid type by Adrian Bece here on Smashing Magazine and Utopia, a handy tool for doing the maths for us. But beware — there be dragons! Or at least possible accessibility issues. By limiting the maximum font size, we could break the ability to zoom the text content, violating one of the WCAG’s requirements (though there are ways to address that).

Fortunately, fluid space is much easier to grasp: if gaps (margins) between elements are defined in font-dependent units (like rem or em), they will scale alongside the font size. Be aware, though: there are caveats here as well.
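
A minimal sketch of the idea (the class name is illustrative):

/* Spacing defined in em scales together with the fluid font size */
.card {
  font-size: clamp(1rem, calc(1rem + 0.5vw), 1.5rem);
  padding: 1em;        /* grows and shrinks with the text */
  margin-block: 1.5em; /* so do the gaps between cards */
}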

Always Bet On Progressive Enhancement

Yes, that’s this over-20-year-old technique for creating web pages. And it’s still relevant today in 2025. Many interesting features have limited availability — like cross-page view transitions. They won’t work for every user, but enabling them is as simple as adding one line of CSS:

@view-transition { navigation: auto; }

It won’t work in some browsers, but it also won’t break anything. And if some browser catches up with the standard, the code is already there, and view transitions start to work in that browser on your website. It’s sort of like opting into the feature when it’s ready.

That’s progressive enhancement at its best: allowing you to make your stairs into an escalator whenever it’s possible.

It applies to many more things in CSS (unsupported grid is just a flow layout, unsupported masonry layout is just a grid, and so on) and other web technologies.

Trust The Browser

Trust it because it knows much more about how safe it is for users to surf the web. Besides, it’s a computer program, and computer programs are pretty good at calculating things. So instead of calculating all these breakpoints ourselves, take their helping hand and allow them to do it for you. Just give them some constraints. Make that <main> element no wider than 60 characters and no narrower than 20 characters — and then relax, watching the browser make it 37 characters on some super rare viewport you’ve never encountered before. It Just Works™.
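
In CSS terms, that constraint is a one-liner. A minimal sketch (the ch unit roughly approximates a character width):

/* Let the browser pick any width between the two bounds */
main {
  inline-size: clamp(20ch, 100%, 60ch);
}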

But trusting the browser also means trusting the open web. After all, these algorithms responsible for laying things out are all parts of the standards.

Ditch The “Physical” CSS

That’s a bonus point from me. Layout systems introduced the concept of logical CSS. Flexbox does not have a notion of a left or right side — it has a start and an end. And that way of thinking crept into other areas of CSS, creating the whole CSS Logical Properties and Values module. After working more with layout systems, logical CSS seems much more intuitive than the old “physical” one. It also has at least one advantage over the old way of doing things: it works far better with internationalized content.

And I know that sounds crazy, but it forces a change in thinking about websites. If you don’t know the most basic information about your content (the font size), you can’t really apply any concrete numbers to your layout. You can only think in ratios. If the font size equals ✕, your heading could equal 2✕, the main column 60✕, some text input 10✕, and so on. This way, everything should work out with any font size and, by extension, scale up with any font size.

We’ve already been doing that with layout systems — we allow them to work on ratios and figure out how big each part of the layout should be. And we’ve also been doing that with rem and em units for scaling things up depending on font size. The only thing left is to completely forget the “1rem = 16px” equation and fully embrace the exciting shores of unknown dimensions.

But that sort of mental shift comes with one not-so-straightforward consequence. Not setting the font size and working with the user-provided one instead fully moves the power from the web developer to the browser and, effectively, the user. And the browser can provide us with far more information about user preferences.

Thanks to modern CSS, we can respond to these things. For example, we can switch to dark mode if the user prefers one, we can limit motion if the user requests it, we can make clickable areas bigger if the device has a touch screen, and so on. By having this kind of dialogue with the browser, exchanging information (it gives us data on the user, and we give it hints on how to display our content), we ultimately empower the user. The content will be displayed in the way they want. That makes our website far more inclusive and accessible.
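
Each of those examples maps to a media query. A minimal sketch (the property values are placeholders):

/* Switch to dark mode if the user prefers it */
@media (prefers-color-scheme: dark) {
  :root { background: #111; color: #eee; }
}

/* Limit motion if the user requests it */
@media (prefers-reduced-motion: reduce) {
  * { animation: none; transition: none; }
}

/* Make clickable areas bigger on touch screens */
@media (pointer: coarse) {
  button { min-block-size: 44px; }
}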

After all, the users know what they need best. If they set the default font size to 64 pixels, they would be grateful if we respected that value. We don’t know why they did it (maybe they have some kind of vision impairment, or maybe they simply have a screen far away from them); we only know they did it — and we respect that.

And that’s responsive design for me.

]]>
hello@smashingmagazine.com (Tomasz Jakut)
<![CDATA[Integrations: From Simple Data Transfer To Modern Composable Architectures]]> https://smashingmagazine.com/2025/02/integrations-from-simple-data-transfer-to-composable-architectures/ https://smashingmagazine.com/2025/02/integrations-from-simple-data-transfer-to-composable-architectures/ Tue, 04 Feb 2025 08:00:00 GMT This article is a sponsored by Storyblok

When computers first started talking to each other, the methods were remarkably simple. In the early days of the Internet, systems exchanged files via FTP or communicated via raw TCP/IP sockets. This direct approach worked well for simple use cases but quickly showed its limitations as applications grew more complex.

# Basic socket server example
import socket

server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind(('localhost', 12345))
server_socket.listen(1)

while True:
    connection, address = server_socket.accept()
    data = connection.recv(1024)
    response = data  # process the request; here we simply echo it back
    connection.send(response)
    connection.close()

The real breakthrough in enabling complex communication between computers on a network came with the introduction of Remote Procedure Calls (RPC) in the 1980s. RPC allowed developers to call procedures on remote systems as if they were local functions, abstracting away the complexity of network communication. This pattern laid the foundation for many of the modern integration approaches we use today.

At its core, RPC implements a client-server model where the client prepares and serializes a procedure call with parameters, sends the message to a remote server, the server deserializes and executes the procedure, and then sends the response back to the client.

Here’s a simplified example using Python’s XML-RPC.

# Server
from xmlrpc.server import SimpleXMLRPCServer

def calculate_total(items):
    return sum(items)

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(calculate_total)
server.serve_forever()

# Client
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
try:
    result = proxy.calculate_total([1, 2, 3, 4, 5])
except ConnectionError:
    print("Network error occurred")

RPC can operate in both synchronous (blocking) and asynchronous modes.

Modern implementations such as gRPC support streaming and bi-directional communication. In the example below, we define a gRPC service called Calculator with two RPC methods, Calculate, which takes a Numbers message and returns a Result message, and CalculateStream, which sends a stream of Result messages in response.

// protobuf
service Calculator {
  rpc Calculate(Numbers) returns (Result);
  rpc CalculateStream(Numbers) returns (stream Result);
}
Modern Integrations: The Rise Of Web Services And SOA

The late 1990s and early 2000s saw the emergence of Web Services and Service-Oriented Architecture (SOA). SOAP (Simple Object Access Protocol) became the standard for enterprise integration, introducing a more structured approach to system communication.

<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header>
  </soap:Header>
  <soap:Body>
    <m:GetStockPrice xmlns:m="http://www.example.org/stock">
      <m:StockName>IBM</m:StockName>
    </m:GetStockPrice>
  </soap:Body>
</soap:Envelope>

While SOAP provided robust enterprise features, its complexity and verbosity led to the development of simpler alternatives, especially the REST APIs that dominate Web services communication today.

But REST is not alone. Let’s have a look at some modern integration patterns.

RESTful APIs

REST (Representational State Transfer) has become the de facto standard for Web APIs, providing a simple, stateless approach to manipulating resources. Its simplicity and HTTP-based nature make it ideal for web applications.

REST was first defined by Roy Fielding in 2000 as an architectural style on top of the Web’s standard protocols. Its constraints align perfectly with the goals of the modern Web, such as performance, scalability, reliability, and visibility: client and server separated by an interface and loosely coupled, stateless communication, and cacheable responses.

In modern applications, the most common implementations of the REST protocol are based on the JSON format, which is used to encode messages for requests and responses.

// Request
async function fetchUserData() {
  const response = await fetch('https://api.example.com/users/123');
  const userData = await response.json();
  return userData;
}

// Response
{
  "id": "123",
  "name": "John Doe",
  "_links": {
    "self": { "href": "/users/123" },
    "orders": { "href": "/users/123/orders" },
    "preferences": { "href": "/users/123/preferences" }
  }
}

GraphQL

GraphQL emerged from Facebook’s internal development needs in 2012 before being open-sourced in 2015. Born out of the challenges of building complex mobile applications, it addressed limitations in traditional REST APIs, particularly the issues of over-fetching and under-fetching data.

At its core, GraphQL is a query language and runtime that provides a type system and declarative data fetching, allowing the client to specify exactly what it wants to fetch from the server.

// graphql
type User {
  id: ID!
  name: String!
  email: String!
  posts: [Post!]!
}

type Post {
  id: ID!
  title: String!
  content: String!
  author: User!
  publishDate: String!
}

query GetUserWithPosts {
  user(id: "123") {
    name
    posts(last: 3) {
      title
      publishDate
    }
  }
}

Often used to build complex UIs with nested data structures, mobile applications, or microservices architectures, it has proven effective at handling complex data requirements at scale and offers a growing ecosystem of tools.

Webhooks

Modern applications often require real-time updates. For example, e-commerce apps need to update inventory levels when a purchase is made, or content management apps need to refresh cached content when a document is edited. Traditional request-response models can struggle to meet these demands because they rely on clients polling servers for updates, which is inefficient and resource-intensive.

Webhooks and event-driven architectures address these needs more effectively. Webhooks let servers send real-time notifications to clients or other systems when specific events happen. This reduces the need for continuous polling. Event-driven architectures go further by decoupling application components. Services can publish and subscribe to events asynchronously, and this makes the system more scalable, responsive, and simpler.

import fastify from 'fastify';

const server = fastify();

server.post('/webhook', async (request, reply) => {
  const event = request.body;

  if (event.type === 'content.published') {
    await refreshCache(); // hypothetical cache-invalidation helper
  }

  return reply.code(200).send();
});

server.listen({ port: 3000 });

This is a simple Node.js function that uses Fastify to set up a web server. It responds to the endpoint /webhook, checks the type field of the JSON request, and refreshes a cache if the event is of type content.published.

With all this background information and technical knowledge, it’s easier to picture the current state of web application development, where a single, monolithic app is no longer the answer to business needs, but a new paradigm has emerged: Composable Architecture.

Composable Architecture And Headless CMSs

This evolution has led us to the concept of composable architecture, where applications are built by combining specialized services. This is where headless CMS solutions have a clear advantage, serving as the perfect example of how modern integration patterns come together.

Headless CMS platforms separate content management from content presentation, allowing you to build specialized frontends relying on a fully-featured content backend. This decoupling facilitates content reuse, independent scaling, and the flexibility to use a dedicated technology or service for each part of the system.

Take Storyblok as an example. Storyblok is a headless CMS designed to help developers build flexible, scalable, and composable applications. Content is exposed via API, REST, or GraphQL; it offers a long list of events that can trigger a webhook. Editors are happy with a great Visual Editor, where they can see changes in real time, and many integrations are available out-of-the-box via a marketplace.

Imagine this ContentDeliveryService in your app, where you can interact with Storyblok’s REST API using the open source JS Client:

import StoryblokClient from "storyblok-js-client";

class ContentDeliveryService {
  constructor(private storyblok: StoryblokClient) {}

  async getPageContent(slug: string) {
    const { data } = await this.storyblok.get(`cdn/stories/${slug}`, {
      version: 'published',
      resolve_relations: 'featured-products.products'
    });

    return data.story;
  }

  async getRelatedContent(tags: string[]) {
    const { data } = await this.storyblok.get('cdn/stories', {
      version: 'published',
      with_tag: tags.join(',')
    });

    return data.stories;
  }
}

The last piece of the puzzle is a real example of integration.

Again, many are already available in the Storyblok marketplace, and you can easily control them from the dashboard. However, to fully leverage the Composable Architecture, we can use the most powerful tool in the developer’s hand: code.

Let’s imagine a modern e-commerce platform that uses Storyblok as its content hub, Shopify for inventory and orders, Algolia for product search, and Stripe for payments.

Once each account is set up and we have our access tokens, we could quickly build a front-end page for our store. This isn’t production-ready code, but just to get a quick idea, let’s use React to build the page for a single product that integrates our services.

First, we should initialize our clients:

import StoryblokClient from "storyblok-js-client";
import { algoliasearch } from "algoliasearch";
import Client from "shopify-buy";


const storyblok = new StoryblokClient({
  accessToken: "your_storyblok_token",
});
const algoliaClient = algoliasearch(
  "your_algolia_app_id",
  "your_algolia_api_key",
);
const shopifyClient = Client.buildClient({
  domain: "your-shopify-store.myshopify.com",
  storefrontAccessToken: "your_storefront_access_token",
});

Given that we created a blok in Storyblok that holds product information such as the product_id, we could write a component that takes the productSlug, fetches the product content from Storyblok, the inventory data from Shopify, and some related products from the Algolia index:

async function fetchProduct() {
  // get product from Storyblok
  const { data } = await storyblok.get(`cdn/stories/${productSlug}`);

  // fetch inventory from Shopify
  const shopifyInventory = await shopifyClient.product.fetch(
    data.story.content.product_id
  );

  // fetch related products using Algolia
  const { hits } = await algoliaClient.searchSingleIndex({
    indexName: "products",
    searchParams: { filters: `category:${data.story.content.category}` },
  });

  return { story: data.story, inventory: shopifyInventory, related: hits };
}

We could then set a simple component state:

const [productData, setProductData] = useState(null);
const [inventory, setInventory] = useState(null);
const [relatedProducts, setRelatedProducts] = useState([]);

useEffect(() => {
  // combine fetchProduct() with the state setters to update the UI
  fetchProduct().then((result) => {
    setProductData(result.story);
    setInventory(result.inventory);
    setRelatedProducts(result.related);
  });
}, [productSlug]);

And return a template with all our data:

<h1>{productData.content.title}</h1>
<p>{productData.content.description}</p>
<h2>Price: ${inventory.variants[0].price}</h2>
<h3>Related Products</h3>
<ul>
  {relatedProducts.map((product) => (
    <li key={product.objectID}>{product.name}</li>
  ))}
</ul>

We could then use an event-driven approach and create a server that listens to our shop events and processes the checkout with Stripe (credits to Manuel Spigolon for this tutorial):

const stripe = require('stripe')

module.exports = async function plugin (app, opts) {
  const stripeClient = stripe(app.config.STRIPE_PRIVATE_KEY)

  app.post('/create-checkout-session', async (request, reply) => {
    const session = await stripeClient.checkout.sessions.create({
      line_items: [...], // from request.body
      mode: 'payment',
      success_url: "https://your-site.com/success",
      cancel_url: "https://your-site.com/cancel",
    })

    return reply.redirect(303, session.url)
  })
  // ...
}

And with this approach, each service is independent of the others, which helps us achieve our business goals (performance, scalability, flexibility) with a good developer experience and a smaller and simpler application that’s easier to maintain.

Conclusion

The integration between headless CMSs and modern web services represents the current and future state of high-performance web applications. By using specialized, decoupled services, developers can focus on business logic and user experience. A composable ecosystem is not only modular but also resilient to the evolving needs of the modern enterprise.

These integrations highlight the importance of mastering API-driven architectures and understanding how different tools can harmoniously fit into a larger tech stack.

In today’s digital landscape, success lies in choosing tools that offer flexibility and efficiency, adapt to evolving demands, and create applications that are future-proof against the challenges of tomorrow.

If you want to dive deeper into the integrations you can build with Storyblok and other services, check out Storyblok’s integrations page. You can also take your projects further by creating your own plugins with Storyblok’s plugin development resources.

]]>
hello@smashingmagazine.com (Edoardo Dusi)
<![CDATA[Look Closer, Inspiration Lies Everywhere (February 2025 Wallpapers Edition)]]> https://smashingmagazine.com/2025/01/desktop-wallpaper-calendars-february-2025/ https://smashingmagazine.com/2025/01/desktop-wallpaper-calendars-february-2025/ Fri, 31 Jan 2025 09:30:00 GMT As designers, we are always on the lookout for some fresh inspiration, and well, sometimes, the best inspiration lies right in front of us. With that in mind, we embarked on our wallpapers adventure more than thirteen years ago. The idea: to provide you with a new batch of beautiful and inspiring desktop wallpapers every month. This February is no exception, of course.

The wallpapers in this post were designed by artists and designers from across the globe and come in versions with and without a calendar for February 2025. And since so many unique wallpaper designs have seen the light of day since we first started this monthly series, we also added some February “oldies but goodies” from our archives to the collection — so maybe you’ll spot one of your almost-forgotten favorites in here, too?

This wallpapers post wouldn’t have been possible without the kind support of our wonderful community who tickles their creativity each month anew to keep the steady stream of wallpapers flowing. So, a huge thank-you to everyone who shared their designs with us this time around! If you too would like to get featured in one of our next wallpapers posts, please don’t hesitate to submit your design. We can’t wait to see what you’ll come up with! Happy February!

  • You can click on every image to see a larger preview.
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t influenced by us in any way but rather designed from scratch by the artists themselves.
  • Submit your wallpaper design! 👩‍🎨
    Feeling inspired? We are always looking for creative talent and would love to feature your desktop wallpaper in one of our upcoming posts. Join in ↬

Fall In Love With Yourself

“We dedicate February to Frida Kahlo to illuminate the world with color. Fall in love with yourself, with life and then with whoever you want.” — Designed by Veronica Valenzuela from Spain.

Sweet Valentine

“Everyone deserves a sweet Valentine’s Day, no matter their relationship status. It’s a day to celebrate love in all its forms — self-love, friendship, and the love we share with others. A little kindness or just a little chocolate can make anyone feel special, reminding us that everyone is worthy of love and joy.” — Designed by LibraFire from Serbia.

Mochi

Designed by Ricardo Gimenes from Spain.

Cyber Voodoo

Designed by Ricardo Gimenes from Spain.

Pop Into Fun

“Blow the biggest bubbles, chew on the sweetest memories, and let your inner kid shine! Celebrate Bubble Gum Day with us and share the joy of every POP!” — Designed by PopArt Studio from Serbia.

Believe

“‘Believe’ reminds us to trust ourselves and our potential. It fuels faith, even in challenges, and drives us to pursue our dreams. Belief unlocks strength to overcome obstacles and creates possibilities. It’s the foundation of success, starting with the courage to believe.” — Designed by Hitesh Puri from Delhi, India.

Plants

“I wanted to draw some very cozy place, both realistic and cartoonish, filled with little details. A space with a slightly unreal atmosphere that some great shops or cafes have. A mix of plants, books, bottles, and shelves seemed like a perfect fit. I must admit, it took longer to draw than most of my other pictures! But it was totally worth it. Watch the making-of.” — Designed by Vlad Gerasimov from Georgia.

Love Is In The Play

“Forget Lady and the Tramp and their spaghetti kiss, ’cause Snowflake and Cloudy are enjoying their bliss. The cold and chilly February weather made our kitties knit themselves a sweater. Knitting and playing, the kitties tangled in the yarn and fell in love in your neighbor’s barn.” — Designed by PopArt Studio from Serbia.

Farewell, Winter

“Although I love winter (mostly because of the fun winter sports), there are other great activities ahead. Thanks, winter, and see you next year!” — Designed by Igor Izhik from Canada.

True Love

Designed by Ricardo Gimenes from Spain.

Balloons

Designed by Xenia Latii from Germany.

Magic Of Music

Designed by Vlad Gerasimov from Georgia.

Febpurrary

“I was doodling pictures of my cat one day and decided I could turn it into a fun wallpaper — because a cold, winter night in February is the perfect time for staying in and cuddling with your cat, your significant other, or both!” — Designed by Angelia DiAntonio from Ohio, USA.

Dog Year Ahead

Designed by PopArt Studio from Serbia.

Good Times Ahead

Designed by Ricardo Gimenes from Spain.

Romance Beneath The Waves

“The 14th of February is just around the corner. And love is in the air, water, and everywhere!” — Designed by Teodora Vasileva from Bulgaria.

February Ferns

Designed by Nathalie Ouederni from France.

The Great Beyond

Designed by Lars Pauwels from Belgium.

It’s A Cupcake Kind Of Day

“Sprinkles are fun, festive, and filled with love… especially when topped on a cupcake! Everyone is creative in their own unique way, so why not try baking some cupcakes and decorating them for your sweetie this month? Something homemade, like a cupcake or DIY craft, is always a sweet gesture.” — Designed by Artsy Cupcake from the United States.

Snow

Designed by Elise Vanoorbeek from Belgium.

Share The Same Orbit!

“I prepared a simple and chill layout design for February called ‘Share The Same Orbit!’ which suggests to share the love orbit.” — Designed by Valentin Keleti from Romania.

Dark Temptation

“A dark romantic feel, walking through the city on a dark and rainy night.” — Designed by Matthew Talebi from the United States.

Ice Cream Love

“My inspiration for this wallpaper is the biggest love someone can have in life: the love for ice cream!” — Designed by Zlatina Petrova from Bulgaria.

Lovely Day

Designed by Ricardo Gimenes from Spain.

Time Thief

“Who has stolen our time? Maybe the time thief, so be sure to enjoy the other 28 days of February.” — Designed by Colorsfera from Spain.

In Another Place At The Same Time

“February is the month of love par excellence, but also a different month. Perhaps because it is shorter than the rest or because it is the one that makes way for spring, but we consider it a special month. It is a perfect month to make plans because we have already finished the post-Christmas crunch and we notice that spring and summer are coming closer. That is why I like to imagine that maybe in another place someone is also making plans to travel to unknown lands.” — Designed by Verónica Valenzuela from Spain.

French Fries

Designed by Doreen Bethge from Germany.

Frozen Worlds

“A view of two frozen planets, lots of blue tints.” — Designed by Rutger Berghmans from Belgium.

Out There, There’s Someone Like You

“I am a true believer that out there in this world there is another person who is just like us, the problem is to find her/him.” — Designed by Maria Keller from Mexico.

“Greben” Icebreaker

“Danube is Europe’s second largest river, connecting ten different countries. In these cold days, when ice paralyzes rivers and closes waterways, a small but brave icebreaker called Greben (Serbian word for ‘reef’) seems stronger than winter. It cuts through the ice on Đerdap gorge (Iron Gate) — the longest and biggest gorge in Europe — thus helping the production of electricity in the power plant. This is our way to give thanks to Greben!” — Designed by PopArt Studio from Serbia.

Sharp

“I was sick recently and squinting through my blinds made a neat effect with shapes and colors.” — Designed by Dylan Baumann from Omaha, NE.

On The Light Side

Designed by Ricardo Gimenes from Spain.

Febrewery

“I live in Madison, WI, which is famous for its breweries. Wisconsin even named their baseball team “The Brewers.” If you like beer, brats, and lots of cheese, it’s the place for you!” — Designed by Danny Gugger from the United States.

Love Angel Vader

“Valentine’s Day is coming? Noooooooooooo!” — Designed by Ricardo Gimenes from Spain.

Made In Japan

“See the beautiful colors, precision, and the nature of Japan in one picture.” — Designed by Fatih Yilmaz from the Netherlands.

Groundhog

“The Groundhog emerged from its burrow on February 2. If it is cloudy, then spring will come early, but if it is sunny, the groundhog will see its shadow, will retreat back into its burrow, and the winter weather will continue for six more weeks.” — Designed by Oscar Marcelo from Portugal.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[The Digital Playbook: A Crucial Counterpart To Your Design System]]> https://smashingmagazine.com/2025/01/digital-playbook-crucial-counterpart-design-system/ https://smashingmagazine.com/2025/01/digital-playbook-crucial-counterpart-design-system/ Thu, 30 Jan 2025 08:00:00 GMT I recently wrote for Smashing Magazine about how UX leaders face increasing pressure to deliver more with limited resources. Let me show you how a digital playbook can help meet this challenge by enhancing our work’s visibility while boosting efficiency.

While a design system ensures visual coherence, a digital playbook lays out the strategic and operational framework for how digital projects should be executed and managed. Here’s why a digital playbook deserves a place in your organization’s toolbox and what it should include to drive meaningful impact.

What Is A Digital Playbook?

A digital playbook is essentially your organization’s handbook for navigating the complexities of digital work. As a user experience consultant, I often help organizations create tools like this to streamline their processes and improve outcomes. It’s a collection of strategies, principles, and processes that provide clarity on how to handle everything from website creation to content management and beyond. Think of it as a how-to guide for all things digital.

Unlike rigid rulebooks that feel constraining, you’ll find that a playbook evolves with your organization’s unique culture and challenges. You can use it to help stakeholders learn, standardize your work, and help everybody be more effective. Let me show you how a playbook can transform the way your team works.

Why You Need A Digital Playbook

Have you ever faced challenges like these?

  • Stakeholders with conflicting expectations of what the digital team should deliver.
  • Endless debates over project priorities and workflows that stall progress.
  • A patchwork of tools and inconsistent policies that create confusion.
  • Uncertainty about best practices, leading to inefficiencies and missed opportunities.

Let me show you how a playbook can help you and your team in four key ways:

  • It helps you educate your stakeholders by making digital processes transparent and building trust. I’ve found that when you explain best practices clearly, everyone gets on the same page quickly.
  • You’ll streamline your processes with clear, standardized workflows. This means less confusion and faster progress on your projects.
  • Your digital team gains more credibility as you step into a leadership role. You’ll be able to show your real value to the organization.
  • Best of all, you’ll reduce friction in your daily work. When everyone understands the policies, you’ll face fewer misunderstandings and conflicts.

A digital playbook isn’t just a tool; it’s a way to transform challenges into opportunities for greater impact.

But, no doubt you are wondering, what exactly goes into a digital playbook?

Key Components Of A Digital Playbook

Every digital playbook is unique, but if you’ve ever wondered where to start, here are some key areas to consider. Let’s walk through them together.

Engaging With The Digital Team

Have you ever had people come to you too late in the process or approach you with solutions rather than explaining the underlying problems? A playbook can help mitigate these issues by providing clear guidance on:

  • How to request a new website or content update at the right time;
  • What information you require to do your job;
  • What stakeholders need to consider before requesting your help.

By addressing these common challenges, you’re not just reducing your frustrations — you’re educating stakeholders and encouraging better collaboration.

Digital Project Lifecycle

Most digital projects can feel overwhelming without a clear structure, especially for stakeholders who may not understand the intricacies of the process. That’s why it’s essential to communicate the key phases clearly to those requesting your team’s help. For example:

  • Discovery: Explain how your team will research goals, user needs, and requirements to ensure the project starts on solid ground.
  • Prototyping: Highlight the importance of testing initial concepts to validate ideas before full development.
  • Build: Detail the process of developing the final product and incorporating feedback.
  • Launch: Set clear expectations for rolling out the project with a structured plan.
  • Management: Clarify how the team will optimize and maintain the product over time.
  • Retirement: Help stakeholders understand when and how to phase out outdated tools or content effectively.

I’ve structured the lifecycle this way to help stakeholders understand what to expect. When they know what’s happening at each stage, it builds trust and helps the working relationship. Stakeholders will see exactly what role you play and how your team adds value throughout the process.

Publishing Best Practices

Writing for the web isn’t the same as traditional writing, and it’s critical for your team to help stakeholders understand the differences. Your playbook can include practical advice to guide them, such as:

  • Planning and organizing content to align with user needs and business goals.
  • Crafting content that’s user-friendly, SEO-optimized, and designed for clarity.
  • Maintaining accessible and high-quality standards to ensure inclusivity.

By providing this guidance, you empower stakeholders to create content that’s not only effective but also reflects your team’s standards.

Understanding Your Users

Helping stakeholders understand your audience is essential for creating user-centered experiences. Your digital playbook can support this by including:

  • Detailed user personas that highlight specific needs and behaviors.
  • Recommendations for tools and methods to gather and analyze user data.
  • Practical tips for ensuring digital experiences are inclusive and accessible to all.

By sharing this knowledge, your team helps stakeholders make decisions that prioritize users, ultimately leading to more successful outcomes.

Recommended Resources

Stakeholders are often unaware of the wealth of resources that can help them improve their digital deliverables. Your playbook can help by recommending trusted solutions, such as:

  • Tools that enable stakeholders to carry out their own user research and testing.
  • Analytics tools that allow stakeholders to track the performance of their websites.
  • A list of preferred suppliers in case stakeholders need to bring in external experts.

These recommendations ensure stakeholders are equipped with reliable resources that align with your team’s processes.

Policies And Governance

Uncertainty about organizational policies can lead to confusion and missteps. Your playbook should provide clarity by outlining:

  • Accessibility and inclusivity standards to ensure compliance and user satisfaction.
  • Data privacy and security protocols to safeguard user information.
  • Clear processes for prioritizing and governing projects to maintain focus and consistency.

By setting these expectations, your team establishes a foundation of trust and accountability that stakeholders can rely on.

Of course, you can have the best digital playbook in the world, but if people don’t reference it, then it is a wasted opportunity.

Making Your Digital Playbook Stick

It falls to you and your team to ensure as many stakeholders as possible engage with your playbook. Try the following:

  • Make It Easy to Find
    How often do stakeholders struggle to find important resources? Avoid hosting the playbook in a forgotten corner of your intranet. Instead, place it front and center on a well-maintained, user-friendly site that’s accessible to everyone.
  • Keep It Engaging
    Let’s face it — nobody wants to sift through walls of text. Use visuals like infographics, short explainer videos, and clear headings to make your playbook not only digestible but also enjoyable to use. Think of it as creating a resource your stakeholders will actually want to refer back to.
  • Frame It as a Resource
    A common pitfall is presenting the playbook as a rigid set of rules. Instead, position it as a helpful guide designed to make everyone’s work easier. Highlight how it can simplify workflows, improve outcomes, and solve real-world problems your stakeholders face daily.
  • Share at Relevant Moments
    Don’t wait for stakeholders to find the playbook themselves. Instead, proactively share relevant sections when they’re most needed. For example, send the discovery phase documentation when starting a new project or share content guidelines when someone is preparing to write for the website. This just-in-time approach ensures the playbook’s guidance is applied when it matters most.

Start Small, Then Scale

Creating a digital playbook might sound like a daunting task, but it doesn’t have to be. Begin with a few core sections and expand over time. Assign ownership to a specific team or individual to ensure it remains updated and relevant.

In the end, a digital playbook is an investment. It saves time, reduces conflicts, and elevates your organization’s digital maturity.

Just as a design system is critical for visual harmony, a digital playbook is essential for operational excellence.

]]>
hello@smashingmagazine.com (Paul Boag)
<![CDATA[Transitioning Top-Layer Entries And The Display Property In CSS]]> https://smashingmagazine.com/2025/01/transitioning-top-layer-entries-display-property-css/ https://smashingmagazine.com/2025/01/transitioning-top-layer-entries-display-property-css/ Wed, 29 Jan 2025 10:00:00 GMT Animating from and to display: none was something we could only achieve with JavaScript, by toggling classes or resorting to other hacks. The reason we couldn’t do this in CSS is explained in the new CSS Transitions Level 2 specification:

“In Level 1 of this specification, transitions can only start during a style change event for elements that have a defined before-change style established by the previous style change event. That means a transition could not be started on an element that was not being rendered for the previous style change event.”

In simple terms, this means that we couldn’t start a transition on an element that is hidden or that has just been created.

What Does transition-behavior: allow-discrete Do?

allow-discrete is a bit of a strange name for a CSS property value, right? We are going on about transitioning display: none, so why isn’t this named transition-behavior: allow-display instead? The reason is that this does a bit more than handling the CSS display property, as there are other “discrete” properties in CSS. A simple rule of thumb is that discrete properties do not transition but usually flip right away between two states. Other examples of discrete properties are visibility and mix-blend-mode. I’ll include an example of these at the end of this article.

To summarise, setting the transition-behavior property to allow-discrete allows us to tell the browser it can swap the values of a discrete property (e.g., display, visibility, and mix-blend-mode) at the 50% mark instead of the 0% mark of a transition.
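
As a quick, hedged sketch of the shape this takes, using a hypothetical .panel class (simplified from the demos below):

/* Sketch: with allow-discrete, the 0.3s exit transition can finish
   before `display` flips to `none`. (The entry direction also needs
   @starting-style, covered next.) */
.panel {
  display: block;
  opacity: 1;
  transition: opacity 0.3s, display 0.3s;
  transition-behavior: allow-discrete;
}

.panel.closed {
  display: none;
  opacity: 0;
}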

What Does @starting-style Do?

The @starting-style rule defines the styles of an element right before it is rendered to the page. It is essential in combination with transition-behavior, and here is why:

When an item is added to the DOM or is initially set to display: none, it needs some sort of “starting style” from which it can transition. To take the example further, popovers and dialog elements are added to the top layer, a layer outside of your document flow that you can think of as a sibling of the <html> element in your page’s structure. When opening a dialog or popover, it gets created inside that top layer, so it has no styles to start transitioning from, which is why we set @starting-style. Don’t worry if all of this sounds a bit confusing; the demos should make it clearer. The important thing to know is that we can give the browser something to start the animation with since it otherwise has nothing to animate from.
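
Here is a bare-bones sketch before we get to the full demos (assuming a hypothetical .card element):

/* Sketch: any newly rendered .card fades in, because @starting-style
   supplies the "before first render" opacity to transition from. */
.card {
  opacity: 1;
  transition: opacity 0.5s;

  @starting-style {
    opacity: 0;
  }
}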

A Note On Browser Support

At the time of writing, transition-behavior is available in Chrome, Edge, Safari, and Firefox. It’s the same for @starting-style, but Firefox currently does not support animating from display: none. Remember, though, that everything in this article can be used perfectly well as a progressive enhancement.

Now that we have the theory of all this behind us, let’s get practical. I’ll be covering three use cases in this article:

  • Animating from and to display: none in the DOM.
  • Animating dialogs and popovers entering and exiting the top layer.
  • More “discrete properties” we can handle.

Animating From And To display: none In The DOM

For the first example, let’s take a look at @starting-style alone. I created this demo purely to explain the magic. Imagine you want two buttons on a page to add or remove list items inside of an unordered list.

This could be your starting HTML:

<button type="button" class="btn-add">
  Add item
</button>
<button type="button" class="btn-remove">
  Remove item
</button>
<ul role="list"></ul>

Next, we add actions that add or remove those list items. This can be any method of your choosing, but for demo purposes, I quickly wrote a bit of JavaScript for it:

document.addEventListener("DOMContentLoaded", () => {
  const addButton = document.querySelector(".btn-add");
  const removeButton = document.querySelector(".btn-remove");
  const list = document.querySelector('ul[role="list"]');

  addButton.addEventListener("click", () => {
    const newItem = document.createElement("li");
    list.appendChild(newItem);
  });

  removeButton.addEventListener("click", () => {
    if (list.lastElementChild) {
      list.lastElementChild.classList.add("removing");
      setTimeout(() => {
        list.removeChild(list.lastElementChild);
      }, 200);
    }
  });
});

When clicking the addButton, an empty list item gets created inside of the unordered list. When clicking the removeButton, the last item gets a new .removing class and finally gets taken out of the DOM after 200ms.

With this in place, we can write some CSS for our items to animate the removing part:

ul {
  li {
    transition: opacity 0.2s, transform 0.2s;

    &.removing {
      opacity: 0;
      transform: translate(0, 50%);
    }
  }
}

This is great! Our .removing animation is already looking perfect, but what we were looking for here was a way to animate the entry of items coming inside of our DOM. For this, we will need to define those starting styles, as well as the final state of our list items.

First, let’s update the CSS to have the final state inside of that list item:

ul {
  li {
    opacity: 1;
    transform: translate(0, 0);
    transition: opacity 0.2s, transform 0.2s;

    &.removing {
      opacity: 0;
      transform: translate(0, 50%);
    }
  }
}

Not much has changed, but now it’s up to us to let the browser know what the starting styles should be. We could set this the same way we did the .removing styles like so:

ul {
  li {
    opacity: 1;
    transform: translate(0, 0);
    transition: opacity 0.2s, transform 0.2s;

    @starting-style {
      opacity: 0;
      transform: translate(0, 50%);
    }

    &.removing {
      opacity: 0;
      transform: translate(0, 50%);
    }
  }
}

Now we’ve let the browser know that the @starting-style should include zero opacity and be slightly nudged to the bottom using a transform. The final result is something like this:

But we don’t need to stop there! We could use different animations for entering and exiting. We could, for example, update our starting style to the following:

@starting-style {
  opacity: 0;
  transform: translate(0, -50%);
}

Doing this, the items will enter from the top and exit to the bottom. See the full example in this CodePen:

See the Pen @starting-style demo - up-in, down-out [forked] by utilitybend.

When To Use transition-behavior: allow-discrete

In the previous example, we added and removed items from our DOM. In the next demo, we will show and hide items using the CSS display property. The basic setup is pretty much the same, except we will add eight list items to our DOM with the .hidden class attached to them:

<button type="button" class="btn-add">
  Show item
</button>
<button type="button" class="btn-remove">
  Hide item
</button>

<ul role="list">
  <li class="hidden"></li>
  <li class="hidden"></li>
  <li class="hidden"></li>
  <li class="hidden"></li>
  <li class="hidden"></li>
  <li class="hidden"></li>
  <li class="hidden"></li>
  <li class="hidden"></li>
</ul>

Once again, for demo purposes, I added a bit of JavaScript that, this time, removes the .hidden class of the next item when clicking the addButton and adds the hidden class back when clicking the removeButton:

document.addEventListener("DOMContentLoaded", () => {
  const addButton = document.querySelector(".btn-add");
  const removeButton = document.querySelector(".btn-remove");
  const listItems = document.querySelectorAll('ul[role="list"] li');

  let activeCount = 0;

  addButton.addEventListener("click", () => {
    if (activeCount < listItems.length) {
      listItems[activeCount].classList.remove("hidden");
      activeCount++;
    }
  });

  removeButton.addEventListener("click", () => {
    if (activeCount > 0) {
      activeCount--;
      listItems[activeCount].classList.add("hidden");
    }
  });
});

Let’s put together everything we learned so far, add a @starting-style to our items, and do the basic setup in CSS:

ul {
  li {
    display: block;
    opacity: 1;
    transform: translate(0, 0);
    transition: opacity 0.2s, transform 0.2s;

    @starting-style {
      opacity: 0;
      transform: translate(0, -50%);
    }

    &.hidden {
      display: none;
      opacity: 0;
      transform: translate(0, 50%);
    }
  }
}

This time, we have added the .hidden class, set it to display: none, and added the same opacity and transform declarations as we previously did with the .removing class in the last example. As you might expect, we get a nice fade-in for our items, but removing them is still very abrupt as we set our items directly to display: none.

This is where the transition-behavior property comes into play. To break it down a bit more, let’s remove the transition shorthand from our previous CSS and open it up a bit:

ul {
  li {
    display: block;
    opacity: 1;
    transform: translate(0, 0);
    transition-property: opacity, transform;
    transition-duration: 0.2s;
  }
}

All that is left to do is transition the display property and set the transition-behavior property to allow-discrete:

ul {
  li {
    display: block;
    opacity: 1;
    transform: translate(0, 0);
    transition-property: opacity, transform, display;
    transition-duration: 0.2s;
    transition-behavior: allow-discrete;
    /* etc. */
  }
}

We are now animating the element from display: none, and the result is exactly as we wanted it:

We can use the transition shorthand property to make our code a little less verbose:

transition: opacity 0.2s, transform 0.2s, display 0.2s allow-discrete;

You can add allow-discrete in there. But if you do, take note that if you declare a shorthand transition after transition-behavior, it will be overruled. So, instead of this:

transition-behavior: allow-discrete;
transition: opacity 0.2s, transform 0.2s, display 0.2s;

…we want to declare transition-behavior after the transition shorthand:

transition: opacity 0.2s, transform 0.2s, display 0.2s;
transition-behavior: allow-discrete;

Otherwise, the transition shorthand property overrides transition-behavior.

See the Pen @starting-style and transition-behavior: allow-discrete [forked] by utilitybend.

Animating Dialogs And Popovers Entering And Exiting The Top Layer

Let’s add a few use cases with dialogs and popovers. Dialogs and popovers are good examples because they get added to the top layer when opened.

What Is That Top Layer?

We’ve already likened the “top layer” to a sibling of the <html> element, but you might also think of it as a special layer that sits above everything else on a web page. It's like a transparent sheet that you can place over a drawing. Anything you draw on that sheet will be visible on top of the original drawing.

The original drawing, in this example, is the DOM. This means that the top layer is out of the document flow, which provides us with a few benefits. For example, as I stated before, dialogs and popovers are added to this top layer, and that makes perfect sense because they should always be on top of everything else. No more z-index: 9999!

But it’s more than that:

  • z-index is irrelevant: Elements on the top layer are always on top, regardless of their z-index value.
  • DOM hierarchy doesn’t matter: An element’s position in the DOM doesn’t affect its stacking order on the top layer.
  • Backdrops: We get access to a new ::backdrop pseudo-element that lets us style the area between the top layer and the DOM beneath it.

Hopefully, you are starting to understand the importance of the top layer and how we can transition elements in and out of it, as we would with popovers and dialogs.

Transitioning The Dialog Element In The Top Layer

The following HTML contains a button that opens a <dialog> element, and that <dialog> contains another button that closes it.

<button class="open-dialog" data-target="my-modal">Show dialog</button>

<dialog id="my-modal">
  <p>Hi, there!</p>
  <button class="outline close-dialog" data-target="my-modal">
    close
  </button>
</dialog>

There’s work happening in HTML, like invoker commands, that will make the following step easier in the future, but for now, let’s add a bit of JavaScript to make this modal actually work:

// Get all open dialog buttons.
const openButtons = document.querySelectorAll(".open-dialog");
// Get all close dialog buttons.
const closeButtons = document.querySelectorAll(".close-dialog");

// Add click event listeners to open buttons.
openButtons.forEach((button) => {
  button.addEventListener("click", () => {
    const targetId = button.getAttribute("data-target");
    const dialog = document.getElementById(targetId);
    if (dialog) {
      dialog.showModal();
    }
  });
});

// Add click event listeners to close buttons.
closeButtons.forEach((button) => {
  button.addEventListener("click", () => {
    const targetId = button.getAttribute("data-target");
    const dialog = document.getElementById(targetId);
    if (dialog) {
      dialog.close();
    }
  });
});

I’m using the following styles as a starting point. Notice how I’m styling the ::backdrop as an added bonus!

dialog {
  padding: 30px;
  width: 100%;
  max-width: 600px;
  background: #fff;
  border-radius: 8px;
  border: 0;
  box-shadow: 
    rgba(0, 0, 0, 0.3) 0px 19px 38px,
    rgba(0, 0, 0, 0.22) 0px 15px 12px;

  &::backdrop {
    background-image: linear-gradient(
      45deg in oklab,
      oklch(80% 0.4 222) 0%,
      oklch(35% 0.5 313) 100%
    );
  }
}

This results in a pretty abrupt transition for the entry, meaning it’s not very smooth:

Let’s add transitions to this dialog element and the backdrop. I’m going a bit faster this time because by now, you likely see the pattern and know what’s happening:

dialog {
  opacity: 0;
  translate: 0 30%;
  transition-property: opacity, translate, display;
  transition-duration: 0.8s;

  transition-behavior: allow-discrete;

  &[open] {
    opacity: 1;
    translate: 0 0;

    @starting-style {
      opacity: 0;
      translate: 0 -30%;
    }
  }
}

When a dialog is open, the browser slaps an open attribute on it:

<dialog open> ... </dialog>

And that’s something else we can target with CSS, like dialog[open]. So, in this case, we need to set a @starting-style for when the dialog is in an open state.

Let’s add a transition for our backdrop while we’re at it:

dialog {
  /* etc. */
  &::backdrop {
    opacity: 0;
    transition-property: opacity;
    transition-duration: 1s;
  }

  &[open] {
    /* etc. */
    &::backdrop {
      opacity: 0.8;

      @starting-style {
        opacity: 0;
      }
    }
  }
}

Now you’re probably thinking: A-ha! But you should have added the display property and the transition-behavior: allow-discrete on the backdrop!

But no, that is not the case. Even if I changed my backdrop pseudo-element to the following CSS, the result would stay the same:

&::backdrop {
  opacity: 0;
  transition-property: opacity, display;
  transition-duration: 1s;
  transition-behavior: allow-discrete;
}

It turns out that when we work with a ::backdrop, we’re implicitly also working with the CSS overlay property, which specifies whether an element appearing in the top layer is currently rendered in the top layer.

And overlay just so happens to be another discrete property that we need to include in the transition-property declaration:

dialog {
  /* etc. */

  &::backdrop {
    transition-property: opacity, display, overlay;
    /* etc. */
  }
}

Unfortunately, this is currently only supported in Chromium browsers, but it can be perfectly used as a progressive enhancement.

And, yes, we need to add it to the dialog styles as well:

dialog {
  transition-property: opacity, translate, display, overlay;
  /* etc. */

  &::backdrop {
    transition-property: opacity, display, overlay;
    /* etc. */
  }
}

See the Pen Dialog: starting-style, transition-behavior, overlay [forked] by utilitybend.

It’s pretty much the same thing for a popover instead of a dialog. I’m using the same technique, only working with popovers this time:

See the Pen Popover transition with @starting-style [forked] by utilitybend.

Other Discrete Properties

There are a few other discrete properties besides the ones we covered here. If you remember the second demo, where we transitioned some items from and to display: none, the same can be achieved with the visibility property instead. This can be handy when you want the element’s box to keep taking up space even though it is invisible.

So, here’s the same example, only using visibility instead of display.
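
A sketch of what that variant could look like, assuming the same markup as the earlier demo:

/* Sketch: hidden items keep their box (no reflow), but fade and
   slide while visibility flips as a discrete property. */
ul li {
  visibility: visible;
  opacity: 1;
  transition-property: opacity, transform, visibility;
  transition-duration: 0.2s;
  transition-behavior: allow-discrete;

  &.hidden {
    visibility: hidden;
    opacity: 0;
    transform: translate(0, 50%);
  }
}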

See the Pen Transitioning the visibility property [forked] by utilitybend.

The CSS mix-blend-mode property is another one that is considered discrete. To be completely honest, I can’t find a good use case for a demo. But I went ahead and created a somewhat trite example where two mix-blend-modes switch right in the middle of the transition instead of right away.
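
For reference, a sketch of the idea (a hypothetical .layer class):

/* Sketch: mix-blend-mode is discrete, so with allow-discrete it swaps
   from multiply to screen at the 50% mark while opacity animates
   continuously. */
.layer {
  mix-blend-mode: multiply;
  opacity: 1;
  transition: opacity 1s, mix-blend-mode 1s;
  transition-behavior: allow-discrete;
}

.layer:hover {
  mix-blend-mode: screen;
  opacity: 0.6;
}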

See the Pen Transitioning mix-blend-mode [forked] by utilitybend.

Wrapping Up

That’s an overview of how we can transition elements in and out of the top layer! In an ideal world, we could get away without needing a completely new property like transition-behavior just to transition otherwise “un-transitionable” properties, but here we are, and I’m glad we have it.

But we also got to learn about @starting-style and how it provides browsers with a set of styles that we can apply to the start of a transition for an element that’s in the top layer. Otherwise, the element has nothing to transition from at first render, and we’d have no way to transition it smoothly in and out of the top layer.

]]>
hello@smashingmagazine.com (Brecht De Ruyte)
<![CDATA[Svelte 5 And The Future Of Frameworks: A Chat With Rich Harris]]> https://smashingmagazine.com/2025/01/svelte-5-future-frameworks-chat-rich-harris/ https://smashingmagazine.com/2025/01/svelte-5-future-frameworks-chat-rich-harris/ Tue, 28 Jan 2025 15:00:00 GMT Svelte occupies a curious space within the web development world. It’s been around in one form or another for eight years now, and despite being used by the likes of Apple, Spotify, IKEA, and the New York Times, it still feels like something of an upstart, maybe even a black sheep. As creator Rich Harris recently put it,

“If React is Taylor Swift, we’re more of a Phoebe Bridgers. She’s critically acclaimed, and you’ve heard of her, but you probably can’t name that many of her songs.”

— Rich Harris

This may be why the release of Svelte 5 in October this year felt like such a big deal. It tries to square the circle of convention and innovation. Can it remain one of the best-loved frameworks on the web while shaking off suspicions that it can’t quite rub shoulders with React, Vue, and others when it comes to scalability? Whisper it, but they might just have pulled it off. The post-launch reaction has been largely glowing, with weekly npm downloads doubling compared to six months ago.

Still, I’m not in the predictions game. The coming months and years will be the ultimate measure of Svelte 5. And why speculate on the most pressing questions when I can just ask Rich Harris myself? He kindly took some time to chat with me about Svelte and the future of web development.

Not Magic, But Magical

Svelte 5 is a ground-up rewrite. I don’t want to get into the weeds here — key changes are covered nicely in the migration guide — but suffice it to say the big one where day-to-day users are concerned is runes. The at-times-magical $ has given way to the more explicit $state, $derived, and $effect.

A lot of the talk around Svelte 5 included the sentiment that it marks the ‘maturation’ of the framework. To Harris and the Svelte team, it feels like a culmination: lessons learned, combined with aspirations, forming something fresh yet familiar.

“This does sort of feel like a new chapter. I’m trying to build something that you don’t feel like you need to get a degree in it before you can be productive in it. And that seems to have been carried through with Svelte 5.”

— Rich Harris

Although raw usage numbers aren’t everything, seeing the uptick in installations has been a welcome signal for Harris and the Svelte team.

“For us, success is definitely not based around adoption, though seeing the number go up and to the right gives us reassurance that we’re doing the right thing and we’re on the right track. Even if it’s not the goal, it is a useful indication. But success is really people building their apps with this framework and building higher quality, more resilient, more accessible apps.”

— Rich Harris

The tenets of a Svelte philosophy outlined by Harris earlier this year reinforce the point:

  1. The web matters.
  2. Optimise for vibes.
  3. Don’t optimise for adoption.
  4. HTML, The Mother Language.
  5. Embrace progress.
  6. Numbers lie.
  7. Magical, not magic.
  8. Dream big.
  9. No one cares.
  10. Design by consensus.

Click the link above to hear these expounded upon, but you get the crux. Svelte is very much a qualitative project. Although Svelte performs well in a fair few performance metrics itself, Harris has long been a critic of metrics like Lighthouse being treated as ends in themselves. Fastest doesn’t necessarily mean best. At the end of the day, we are all in the business of making quality websites.

Frameworks are a means to that end, and Harris sees plenty of work to be done there.

Software Is Broken

Every milestone is a cause for celebration. It’s also a natural pause in which to ask, “Now what?” For the Svelte team, the sights seem firmly set on shoring up the quality of the web.

“A conclusion that we reached over the course of a recent discussion is that most software in the world is kind of terrible. Things are not good. Half the stuff on my phone just doesn’t work. It fails at basic tasks. And the same is true for a lot of websites. The number of times I’ve had to open DevTools to remove the disabled attribute from a button so that I can submit a form, or been unclear on whether a payment went through or not.”

— Rich Harris

This certainly meshes with my experience and, doubtless, countless others. Between enshittification, manipulative algorithms, and the seemingly endless influx of AI-generated slop, it’s hard to shake the feeling that the web is becoming increasingly decadent and depraved.

“So many pieces of software that we use are just terrible. They’re just bad software. And it’s not because software engineers are idiots. Our main priority as toolmakers should be to enable people to build software that isn’t broken. As a baseline, people should be able to build software that works.”

— Rich Harris

This sense of responsibility for the creation and maintenance of good software speaks to the Svelte team’s holistic outlook and also looks to influence priorities going forward.

Brave New World

Part of Svelte 5 feels like a new chapter in the sense of fresh foundations. Anyone who’s worked in software development or web design will tell you how much of a headache ground-up rewrites are. Rebuilding the foundations is something to celebrate when you pull it off, but it also raises the question: What are the foundations for?

Harris has his eyes on the wider ecosystem around frameworks.

“I don’t think there’s a lot more to do to solve the problem of taking some changing application state and turning it into DOM, but I think there’s a huge amount to be done around the ancillary problems. How do we load the data that we put in those components? Where does that data live? How do we deploy our applications?”

— Rich Harris

In the short to medium term, this will likely translate into some love for SvelteKit, the web application framework built around Svelte. The framework might start having opinions about authentication and databases, gain an official component library, and ship dev tools in the spirit of the Astro dev toolbar. All of these could be precursors to even bigger explorations.

“I want there to be a Rails or a Laravel for JavaScript. In fact, I want there to be multiple such things. And I think that at least part of Svelte’s long-term goal is to be part of that. There are too many things that you need to learn in order to build a full stack application today using JavaScript.”

— Rich Harris

Onward

Although Svelte has been ticking along happily for years, the release of version 5 has felt like a new lease of life for the ecosystem around it. Every day brings new and exciting projects to the front page of the /r/sveltejs subreddit, while this year’s Advent of Svelte has kept up a sense of momentum following the stable release.

Below are just a handful of the Svelte-based projects that have caught my eye:

Despite the turbulence and inescapable sense of existential dread surrounding much tech, this feels like an exciting time for web development. The conditions are ripe for lovely new things to emerge.

And as for Svelte 5 itself, what does Rich Harris say to those who might be on the fence?

“I would say you have nothing to lose but an afternoon if you try it. We have a tutorial that will take you from knowing nothing about Svelte or even existing frameworks. You can go from that to being able to build applications using Svelte in three or four hours. If you just want to learn Svelte basics, then that’s an hour. Try it.”

— Rich Harris

]]>
hello@smashingmagazine.com (Frederick O’Brien)
<![CDATA[Navigating The Challenges Of Modern Open-Source Authoring: Lessons Learned]]> https://smashingmagazine.com/2025/01/navigating-challenges-modern-open-source-authoring/ https://smashingmagazine.com/2025/01/navigating-challenges-modern-open-source-authoring/ Tue, 21 Jan 2025 08:00:00 GMT This article is a sponsored by Storyblok

Open source is the backbone of modern software development. As someone deeply involved in both community-driven and company-driven open source, I’ve had the privilege of experiencing its diverse approaches firsthand. This article dives into what modern OSS (Open Source) authoring looks like, focusing on front-end JavaScript libraries such as TresJS and tools I’ve contributed to at Storyblok.

But let me be clear:

There’s no universal playbook for OSS. Every language, framework, and project has its own workflows, rules, and culture — and that’s okay. These variations are what make open source so adaptable and diverse.

The Art Of OSS Authoring

Authoring an open-source project often begins with scratching your own itch — solving a problem you face as a developer. But as your “experiment” gains traction, the challenge shifts to addressing diverse use cases while maintaining the simplicity and focus of the original idea.

Take TresJS as an example. All I wanted was to add 3D to my personal Nuxt portfolio, but at that time, there wasn’t a maintained, feature-rich alternative to React Three Fiber in VueJS. So, I decided to create one. Funnily enough, two years after the library’s launch, my portfolio remains unfinished.

Community-driven OSS Authoring: Lessons From TresJS

Continuing with TresJS as an example of a community-driven OSS project, the community has been an integral part of its growth, offering ideas, filing issues (around 531 in total), and submitting pull requests (around 936), of which 90% eventually made it to production. As an author, this is the best thing that can happen — it’s probably one of the biggest reasons I fell in love with open source. The continuous collaboration creates an environment where new ideas can evolve into meaningful contributions.

However, it also comes with its own challenges. The more ideas come in, the harder it becomes to maintain the project’s focus on its original purpose.

As authors, it’s our responsibility to keep the vision of the library clear — even if that means saying no to great ideas from the community.

Over time, some of the most consistent collaborators became part of a core team, helping to share the responsibility of maintaining the library and ensuring it stays aligned with its original goals.

Another crucial aspect of scaling a project, especially one like TresJS, which has grown into an ecosystem of packages, is the ability to delegate. The more the project expands, the more essential it becomes to distribute responsibilities among contributors. Delegation helps in reducing the burden of the massive workload and empowers contributors to take ownership of specific areas. As a core author, it’s equally important to provide the necessary tools, CI workflows, and clear conventions to make the process of contributing as simple and efficient as possible. A well-prepared foundation ensures that new and existing collaborators can focus on what truly matters — pushing the project forward.

Company-driven OSS Authoring: The Storyblok Perspective

Now that we’ve explored the bright spots and challenges of community-driven OSS let’s jump into a different realm: company-driven OSS.

I had experience with inner-source and open-source in previous companies, so I already had a grasp of how OSS works in a company environment. However, my most meaningful experience came earlier this year, when I switched roles from DevRel to full-time Developer Experience Engineer. I say “full-time” because, before taking the role, I was already contributing to Storyblok’s SDK ecosystem.

At Storyblok, open source plays a crucial role in how we engage with developers and how they seamlessly use our product with their favorite framework. Our goal is to provide the same developer experience regardless of the flavor, making the experience of using Storyblok as simple, effective, and enjoyable as possible.

To achieve this, it’s crucial to balance the needs of the developer community — which often reflect the needs of the clients they work for — with the company’s broader goals. One of the things I find most challenging is managing expectations. For instance, while the community may want feature requests and bug fixes to be implemented quickly, the company’s priorities might dictate focusing on stability, scalability, and often strategic integrations. Clear communication and prioritization are key to maintaining healthy alignment and trust between both sides.

One of the unique advantages of company-driven open source is the availability of resources:

  • Dedicated engineering time,
  • Infrastructure (which many OSS authors often cannot afford),
  • Access to knowledge from internal teams like design, QA, and product management.

However, this setup often comes with the challenge of dealing with legacy codebases — typically written by developers who may not be familiar with OSS principles. This can lead to inconsistencies in structure, testing, and documentation that require significant refactoring before the project can align with open-source best practices.

Navigating The Spectrum: Community vs. Company

I like to think of community-driven OSS as being like jazz music — freeform, improvised, and deeply collaborative. In contrast, company-driven OSS resembles an orchestra, with a conductor guiding the performance and ensuring all the pieces fit together seamlessly.

The truth is that most OSS projects — if not the vast majority — exist somewhere along this spectrum. For example, TresJS began as a purely community-driven project, but as it matured and gained traction, elements of structured decision-making — more typical of company-driven projects — became necessary to maintain focus and scalability. Together with the core team, we defined a vision and goals for the project to ensure it continued to grow without losing sight of its original purpose.

Interestingly, the reverse is also true: Company-driven OSS can benefit significantly from the fast-paced innovation seen in community-driven projects.

Many of the improvements I’ve introduced to the Storyblok ecosystem since joining were inspired by ideas first explored in TresJS. For instance, migrating the TresJS ecosystem to pnpm workspaces demonstrated how streamlined dependency management could improve development workflows like playgrounds and e2e testing — an approach we gradually adapted later for Storyblok’s ecosystem.

Similarly, transitioning Storyblok testing from Jest to Vitest, with its improved performance and developer experience, was influenced by how testing is approached in community-driven projects. Likewise, our switch from Prettier to ESLint’s v9 flat configuration with auto-fix helped consolidate linting and formatting into a single workflow, streamlining developer productivity.

Even more granular processes, such as modernizing CI workflows, found their way into Storyblok. TresJS’s evolution from a single monolithic release action to granular steps for linting, testing, and building provided a blueprint for enhancing our pipelines at Storyblok. We also adopted continuous release practices inspired by pkg.pr.new, enabling faster delivery of incremental changes and testing package releases in real client projects to gather immediate feedback before merging the PRs.

That said, TresJS also benefited from my experiences at Storyblok, which had a more mature and battle-tested ecosystem, particularly in adopting automated processes. For example, we integrated Dependabot to keep dependencies up to date and used auto-merge to reduce manual intervention for minor updates, freeing up contributors’ time for more meaningful work. We also implemented an automatic release pipeline using GitHub Actions, inspired by Storyblok’s workflows, ensuring smoother and more reliable releases for the TresJS ecosystem.

The Challenges of Modern OSS Authoring

Throughout this article, we’ve touched on several modern OSS challenges, but if one deserves the crown, it’s managing breaking changes and maintaining compatibility. We know how fast the pace of technology is, especially on the web, and users expect libraries and tools to keep up with the latest trends. I’m not the first person to say that hype-driven development can be fun, but it is inherently risky and not your best ally when building reliable, high-performance software — especially in enterprise contexts.

Breaking changes exist. That’s why semantic versioning comes into play to make our lives easier. However, it is equally important to balance innovation with stability. This becomes even more crucial when introducing new features or refactoring for better performance means breaking existing APIs. One key lesson I’ve learned — particularly during my time at Storyblok — is the importance of clear communication. Changelogs, migration guides, and deprecation warnings are invaluable tools to smooth the transition for users.

A practical example:

My first project as a Developer Experience Engineer was introducing @storyblok/richtext, a library for rich-text processing that (at the time of writing) sees around 172k downloads per month. The library was crafted during my time as a DevRel, but transitioning users to it from the previous rich-text implementation across the ecosystem required careful planning. Since the library would become a dependency of the fundamental JS SDK — and from there propagate to all the framework SDKs — together with my manager, we planned a multi-month transition with a retro-compatible period before the major release. This included communication campaigns, thorough documentation, and gradual adoption to minimize disruption.

Despite these efforts, mistakes happened — and that’s okay. During the rich-text transition, there were instances where updates didn’t arrive on time or where communication and documentation were temporarily out of sync. This led to confusion within the community, which we addressed by providing timely support on GitHub issues and Discord. These moments served as reminders that even with semantic versioning, modular architectures, and meticulous planning, OSS authoring is never perfect. Mistakes are part of the process.

And that takes us to the following point.

Conclusion

Open-source authoring is a journey of continuous learning. Each misstep offers a chance to improve, and each success reinforces the value of collaboration and experimentation.

There’s no “perfect” way to do OSS, and that’s the beauty of it. Every project has its own set of workflows, challenges, and quirks shaped by the community and its contributors. These differences make open source adaptable, dynamic, fun, and, above all, impactful. No matter if you’re building something entirely new or contributing to an existing project, remember that progress, not perfection, is the goal.

So, keep contributing, experimenting, and sharing your work. Every pull request, issue, and idea you put forward brings value — not just to your project but to the broader ecosystem.

Happy coding!

]]>
hello@smashingmagazine.com (Alvaro Saburido)