Smashing Podcast Episode 64 With Alvin Bryan: What Is A Headless CMS?

https://smashingmagazine.com/2023/08/smashing-podcast-episode-64/
Tue, 01 Aug 2023 18:00:00 GMT

We’re talking about headless content management systems. What are they, and how do they differ from more traditional systems? Drew McLellan talks to Alvin Bryan to find out.

Transcript

Drew: He’s a developer advocate with the content management platform company, Contentful. Before that, he was a lead engineer at Dow Jones and The Wall Street Journal and has had various front-end roles. He’s very UX driven and happiest when collaborating with designers and pushing boundaries as a team. And these days, he’s learning a lot about DevRel and loving it. So we know he’s an experienced developer, but did you know he once taught Catherine Zeta-Jones to do a cartwheel? My Smashing friends, please welcome Alvin Bryan. Hi Alvin, how are you?

Alvin: I’m smashing, thank you so much for having me here. It’s an honor.

Drew: Thanks for joining us. I wanted to talk to you today about one of the key technologies that’s really at the center of so many projects, but perhaps these days doesn’t get the spotlight shone on it so often because maybe it’s not so glamorous as front-end frameworks or any of these other things. It’s content management systems. We’re all using them, but I think sometimes the discussion isn’t there about it when it’s so important. I just — before we start — want to address the elephant in the room, which is that you’re a developer advocate for Contentful, and I know we have a really savvy audience here at Smashing, and they’d see right through anything that was a thinly veiled ad for your employer. So I just wanted to reassure the audience at this point that this is not that, rather it’s the fact that your work leads you to have some really great up-to-date knowledge of the space and that’s why you’re the perfect guest for this episode. That’s right, isn’t it?

Alvin: Oh yeah. I think that’s the difference between a developer advocate and a salesman. I’m not here to sell you anything, I’m here to help developers, whatever that looks like. At least this is how we approach DevRel at Contentful. It varies, and this could be a podcast episode on its own.

Drew: It could be, couldn’t it? What is developer relations? Is it a function of sales? Is it a function of marketing? Is it support? What is it?

Alvin: Yeah.

Drew: So yes, that’s a whole can of worms. Just to give a bit of background on me in this context, I’ve got a lot of history with the content management space from years of building bespoke systems for clients and then distilling all that experience down into a CMS product, which I founded in 2009 and then sold in 2021. All the CMS solutions that I’ve developed have followed this traditional model of the CMS being the entire platform that delivered your website. So it’d be taking content and taking templates and merging all that together to create HTML pages essentially. Is that approach to content management still a valid thing in 2023, do you think?

Alvin: I think it’s valid. Well, it’s valid depending on what you’re trying to build. Squarespace, I’m pretty sure, is doing great. I’ve not looked at any numbers, but they’ve been doing great for years and I’m sure they’ll continue. So yeah, it’s definitely a valid thing, but I think for the sort of place that would employ a developer, that may not be the case anymore.

Drew: That market, from a development point of view, is almost like a solved problem, isn’t it? There are so many good CMSs for rolling out, for example, small websites. I don’t know what the latest stats are on how much of the web is powered by WordPress, but it’s approaching half, isn’t it?

Alvin: Yeah. I believe it kept increasing as well, right?

Drew: Right.

Alvin: Yeah, so definitely, it’s a thing for sure.

Drew: I think we’re here today to talk about headless CMS, which of course is a different approach to the same problem. I think most of us will have heard of Contentful in some capacity over the last few years as one of the rising stars in the headless CMS space. And you really can’t have a content management discussion these days without headless being a factor in it. We mentioned WordPress, but even WordPress has a headless mode. Drupal has what they call a decoupled mode, which I think is just the same thing. So getting down to brass tacks, what do we mean when we say a headless CMS? What sort of problem is it solving for us?

Alvin: The problem it’s solving is, it’s making a distinction between what the CMS manages and what you get out of it. With the traditional CMS, you’re tied to a website or a page, where you made a page in WordPress and it ended up being a page on your website. And as we said before, this is what Squarespace does, this is what they all do. With headless, you manage your content and you retrieve that content with an API call, so the way that looks on your website is completely decoupled. And this solves a lot of problems, especially with bigger companies. So you can imagine, with Contentful, one of our biggest clients is Ikea, and you can imagine that they don’t just have content on their website, they have physical catalogs, they have ads on the side of the road, all of that. You really have to break away from this old idea that one page in the CMS equals one page on the website.

Drew: So you end up more with a multipurpose repository of content with an API that you can then use to access it. So if you’re Ikea, you can pull the same product description into your mobile app and on your website and into, what, any number of places. So yeah, I guess it is decoupling, isn’t it? Rather than saying, this content is being produced on this HTML page, it’s saying, this is a system for managing content and here is an API for getting at that content and using it however you want. So it sounds like it makes a whole bunch of problems, especially around reusing content, a lot easier. Are there any things, do you think, that using this approach makes more difficult?

Alvin: Well, it’s the time to iteration, because depending on how well your system is set up, you can get around this. But the beauty of it, what we developers love about it, is that we have control. What other people in the organization tend to hate is that we have control. And as a result, if you want a brand new section on your website, you need a designer to design it and a developer to make it work. You can work around it with templates and other things that we used to do. But in general, this can be a thing where people think, I can just spin up a completely new section from scratch. It might be possible, but there would need to be something that’s been set up previously.

Drew: Right. So you can put the content into the system, but you need something then to consume it in a targeted way to make use of it. Yeah, so, as you say, the iteration speed could be slower. One thing that I sometimes see online is, people say, "Oh, if you use a headless CMS, it’s terrible for SEO." But with my software engineering hat on, that sounds like a symptom of one possible implementation of using a headless CMS, and it’s not inherent to the overall solution, is it? You could be merging this content into a static website offline and then publishing it. Or when people take a purely client-side single page app approach to using that content, that might have SEO implications, and that may be the sort of naive initial implementation that someone might go with.

Drew: But yes, it’s funny how often that crops up. It’s almost one of these myths that drifts around, and people repeat it without maybe fully understanding the implications. One thing that Contentful talks about in a lot of their materials is composable content. What does that mean? What are we talking about with composable content?

Alvin: It’s the fancy new 2023 thing, isn’t it? Yeah, just to come back on the SEO bit, I think, yeah, as you said, anything that Google consumes is, at the end of the day, an HTML tag. It’s no different to the P tag, which you’ll use to display whatever. So it’s also up to you, the developer, to make sure that you create the OG tags and that your content is there so the engines can crawl it. The headless provider will just give you an API, and you can do whatever you want with it. To go back to the composable thing now, yeah, a lot more people have started to move to it. You hear us talking about it, you hear some of our competitors talk about it, and Netlify is also doubling down with the acquisition of Gatsby, for example.

Alvin: The idea is to go headless CMS plus, right? So with headless you say, "Oh, I have this one API that takes care of all my content." But now, what if you could plug other things to this API? What if you could say, "Oh, I want..." I’m just making stuff up here, but what if I want to connect my slack to it? What if we have a weather app or something like that? Any other types of dynamic data that we need to combine with our content to have this one API that gives us everything. And it’s the idea of, again, you’re composing what you need with your headless CMS. And that for us, that looks like an ecosystem of apps, meaning you can extend Contentful with different apps, which could be translation, it’s 2023, so it could be GPT, could be anything else. So that’s the idea. Your headless CMS also integrates with other data providers.

Drew: So it becomes like an aggregator of other content. So as well as having maybe a content editing team creating content, you might also be pulling stuff in from your Instagram feed.

Alvin: Yeah, exactly.

Drew: Or say, a dynamic feed from a third party provider and then making that all available under one API to all your different consumers. Okay, well that kind of makes sense. I think that makes sense.

Alvin: Especially with Instagram, we’ve all seen the horrible Instagram embeds or the Twitter ones. The Twitter API is a thing of the past now, but anyway, just to give you an example: you have these horrible embeds, and what if you could get that data from the headless CMS as well and then render it statically? That makes a lot more sense.

Drew: Okay. So yes, it’s just an aggregation function on top of the standard as well as being the source of truth for your own content.

Alvin: Exactly.

Drew: Also then brings in other pieces of content. Now, I’ve personally always been interested in owning my own data where I can, and I’d usually pick a self-hosted solution for something rather than a service, given the choice. Although I have mellowed over the years. The trade-offs I make now are very different from what I would have made in the past. But with a headless CMS being API driven, it seems like you’ve got a bit more flexibility there as to where it’s hosted. So you don’t necessarily need the CMS and the website to live on the same server or in the same environment. You could separate those out. So, is there added complexity there or is that an opportunity for simplification? Have you any thoughts?

Alvin: Yeah, for sure. It depends, because everything is from that one API. Depending on your needs, that might get a lot of traffic, which will make self-hosting a problem. As you said, you’ve mellowed somewhat on self-hosting because it’s become easier to just set up something in the cloud, whereas managing servers, has that necessarily got easier? Tech has changed, but it’s still annoying.

Drew: It’s just got complicated in different directions.

Alvin: Right. Yeah. So there are self-hosted headless CMSs. We’re not one of them because, again, we tend to target bigger clients that have these needs for these APIs, and our CDN takes in billions of requests per month. So we’re built for the high-traffic stuff. But yeah, there are solutions you can install. Strapi is one of them that is self-hosted. You can install it on your own server, and this will give you, as you said, a headless CMS. You’ll own your content, you’ll get the API. But the drawback with that is, obviously, if you get a ton of traffic, then it’ll go to your server, which might not scale or which you might not want to pay to scale yet. That’s the one trade-off — but it’s possible for sure.

Drew: And I guess you’ve got to manage it then if it’s on your end, you’ve got to keep it updated, keep it running, keep it backed up. I guess the decisions you’d make for a small community website would be different from the ones you’d make if you were Ikea. Ikea probably isn’t going to be running Strapi on a VPS. That’s probably not a good solution for them in a lot of ways. So what are the things you should weigh up when picking a headless CMS solution? What are the things to be looking out for that are perhaps different from what we’re used to evaluating for a traditional CMS?

Alvin: I think it depends... Well, as a developer, you’ll know what you’ll be coding, so you can look at the API and what it looks like, whether you like it or not, how easy it is, whether it supports GraphQL (everyone does these days), stuff like that. Then I think it depends on the people on the team who are going to spend a lot of time in the CMS. It’s great for me if I like the API, but if the people who are going to write in the CMS hate it, then it’s probably not the right choice. So I definitely think you want to involve these people in the decision, right? Because they’re going to be the ones spending time in it. For us, we want to make sure that we can retrieve everything we want from the API as developers, but you definitely want your blog editor to give you the green light and make sure it has everything they need.

Drew: Yeah, you mentioned the API becomes really important, and I’ve seen headless CMSs with REST APIs, with GraphQL, and then various solutions have SDKs that you can import into your project that give you a language-native way of interacting. Is there anything, from a development point of view, that would be useful to look out for when evaluating the availability of SDKs or types of APIs?

Alvin: Yeah, for sure, if it’s something you like. At the end of the day, the beauty of it is that it’s still an API call. So any language under the sun will support that, hopefully. Yeah, so it’s also up to you, right? Do you want a Python-native SDK where it’s just like, okay, I’m typing three lines of code and I do client, get entry, get this ID, whatever, or get these entries that are of this content type? If you prefer that, great. But if you’re the kind of person who’s like, "Nope, I’m going to have complete control," it also goes back to what you said about owning your data. The problem with relying on an SDK is what happens when there’s a security problem, versus if it’s you and you’re just using the bare-bones HTTP client in your language, where there’s less risk. So it also depends on the kind of project you’re working on.

Drew: You talked about the user interface aspect, and it’s got to be one of the big factors, isn’t it? When you’ve got people entering content, creating content in a system and managing it, the user interface that’s provided is a big factor there, and so is how the data is handled. There are all sorts of different approaches in content management to how you manage data, aren’t there? Whether it’s just one big WYSIWYG block of junk, or whether things are broken down to a granular level for structured content. Presuming that most headless solutions still have some sort of user interface for editing content to get you started, does that reflect what you’ve seen in the marketplace?

Alvin: Yeah, everyone has a WYSIWYG. Some have varying degrees of support for Markdown, and again, the ecosystem of apps that I was talking about. So more and more players are having their own, and this helps also to extend it, so it can help with this discussion: if you’re talking to a blog editor, it’s like, "Ah, it’s kind of there, but I really wish there was a field that could do X," and you could either extend it yourself or just look for another solution. But yeah, different teams have varying needs. And it could be small things, like for example, scheduling. Like, oh, I want to be able to have this campaign going, I want to make sure that from now, a blog post goes out on the Monday, another one goes on the Tuesday and another one goes on the Thursday. And the interface for this might be a nightmare in one system versus another, or it might be something you don’t need at all. It depends, right? It’s always... But yeah, it’s crucial. Absolutely.

Drew: And I guess, if you’ve got very specific needs, in theory, you could use a headless CMS purely as a content engine and have your own mechanisms for getting data in, and basically write your own interface for writing into that system as well. Is a write interface something that everyone supports? Or is that a feature to look out for when evaluating?

Alvin: I wouldn’t say... I can’t remember if there’s one in particular who doesn’t, but I know we definitely do because-

Drew: It’s a very broad question. With your extreme knowledge of a hundred percent of the marketplace, does every single one...

Alvin: Right. Yeah, I know we support it because, it’s also, I think, with DevRel, you end up spending a lot more time in your product versus the others, so you tend to have a good sense of it. And this is where it’s different to sales, what we were saying earlier. When someone comes up to me and says, "Oh, what is the one feature that is different?" I always say, "Well, it depends what you need." What is the one reason you should choose Contentful? This is where we very much differ from sales, which will have a list right there at the ready and be like, "Oh, for sure you should choose us because A, B, C, D." And for us as developers, it’s more like, "What do you need? What scale do you have? What do you like working on? How big is your team?" It’s a different question. But as far as writing goes, having an API that works both ways, we definitely support it, and I would be surprised if others did not.

Drew: It becomes quite crucial, doesn’t it? Because very few projects comparatively start with nothing. Most people have got some sort of system in place before, and taking the data that you’ve already got and migrating it into a new system can be a major project, and a deal breaker for a lot of bigger use cases. You’ve got to be able to get data in. So having a CMS that has an API that you can write code and interface with whatever the previous system is, get that code into a decent shape, and then inject it into the headless system, that’s a major advantage, isn’t it?

Alvin: And you can think in terms of reproducibility as well, because migration is great, but it’s even better if you can say, "Oh, this is the exact script that I ran for migration as opposed to just this collection of random commands that I did on my machine." And it’s like, oh, beta’s over now. It’s much better to say, "Oh, we have this very defined way of transforming this data from this shape to that shape." And having an API helps you with this for sure.

Drew: Hopefully gone are the days when you have two browser windows open and copy and paste content from one form to another. I’ve certainly been there in the distant past. But yeah, phew, those days are behind us. Is there anything in this space that has particularly caught your attention lately?

Alvin: I think the composable thing is one. It’s been going on for a few months now, but it is definitely a shift. Everyone is starting to think, oh, maybe there’s more to this than just being the headless CMS solution. And obviously AI, which every market is talking about now. Content, and especially written content, is right now one of the very first industries to be impacted by it. So how do you integrate it? How do you make sure... How do you account for things like attribution? These are discussions that are happening in the content space for sure, definitely. At least right now, it’s one of the first industries that it’s really affecting.

Drew: Yes. Written content and things like images. A new version of Photoshop has come out in the last couple of weeks that has generative fill built right in, which is amazing. So it’s a brave new world, isn’t it? From a content point of view in particular, it’s a brave new world. And you mentioned attribution, and that’s a minefield as well, isn’t it? Figuring out how all that works when content has been generated from a model trained on-

Alvin: We don’t know what.

Drew: Yeah, who knows what. So how can you attribute stuff? It’s going to be a very interesting time, going forward, figuring out how we do that. Is there anything else that, say I’m planning a project, I’m going to use a headless CMS. I’ve decided maybe I’m going to use Contentful, and I’m planning out this project. What should I be considering? What lies ahead of me? What should I be worrying about? What is there that I should be thinking about? On embarking on this?

Alvin: I think it’s your content model. So it’s the first thing to get right. We see sometimes people really over-complicating things: if you think of an index page, having a separate content model for a carousel, and then maybe a river-type content model. Do you know what I mean? In terms of where you have an image on the left and text on the right, and then it’s followed by an image on the right and text on the left. So you can really over-complicate things with your content model. You could think, "Oh, a river is very different to a carousel," for example. But then you think, "Oh wait, no, is it just an image with text with it?" And at the end of the day, yeah, it is. So it’s stuff like that where it’s easy to over-engineer things, and then you end up having content models that are homepage carousel one, and then about page carousel two, and it’s like, well, no, these aren’t the same thing.

Alvin: So it’s trying to think in the abstract way, even if it might be more code originally because you’re building more flexibility into each of the content models, the content types as well. But in the long run, that could save you.

Drew: So it’s about, I guess, thinking of what content you’ve got and what different types it falls into?

Alvin: And what you might have in the future, which is complicated for sure because you can never know. But try to build in flexibility by thinking outside the box: for example, what if there was a caption in this river? What if one of the images was actually a video? Little tweaks like this will save you a lot of time in the long run, because they will prevent you from having to rethink everything later.

Drew: From a development point of view, from a developer point of view, one thing that always gives me reassurance in my work is having a good test suite that you can run to make sure that things aren’t broken before you deploy.

Alvin: Yeah.

Drew: Is there anything in terms of testing around content that we could make use of?

Alvin: For sure. It’s an API too. So you can think of using something like Storybook, where you have your different components, and you could say, right, as we said, this is a generic component. What happens if there’s three instances of it? What happens if there’s five? What happens, again, if one of them is a video? It’s that whole meme: a QA engineer walks into a bar, orders zero beers, orders a thousand beers, stuff like that.

Drew: What is an elephant?

Alvin: Right. Yeah, exactly. And that type of thing. You can build that into a test suite and see what happens.

Drew: Yes, that’s quite interesting. I guess, again, it’s the decoupling of things that makes that really easy. And I guess then, if you’re running that Storybook, you can then run it through some visual regression testing and spot breakages or what have you. Yeah, that’s really fascinating.

Alvin: You can think of human errors too, right? Sometimes when you set up everything, you’re like, "Oh yeah, of course they’re going to put a date on the article," but then sometimes you forget or sometimes there’s something else, and you realize, oh, the date you published is different from the date you had in the article, whatever. Or one of the authors’ Twitter links doesn’t work. Stuff like that. Writing your test suite is a good time to think about these edge cases, right? Sorry, what was your question?

Drew: Yes, no, I was just jabbering on. In fact, I remember years ago writing a sort of system for blogging, and one of the absolutely required fields was a title for the blog post. When I created it, I never even questioned, would a title be optional? A title is fundamental. Every blog post has a title.

Alvin: H1.

Drew: Right. And then you use that to generate the nice slug for the URL and all these sorts of things. And in listings, it’s the title that appears. And then Tumblr came out, and you could create all sorts of posts on that, and you didn’t even need a date or a title.

Alvin: Oh yeah.

Drew: What madness is this? But it’s like you’re saying, it’s thinking outside of the box and thinking about the different types of content that you might have. And it turns out that fundamental assumption that I made early on in that system, that we absolutely a hundred percent always had a title, became a limitation of what we could do with the system, because then when content came along that didn’t have a title, I was stuffed.

Alvin: Yeah, Instagram is another example. Or as we said, the great thing about headless is that it can go anywhere, but if you’re also planning stuff that is written in Contentful, but that will go on your website, again, your catalog, but also on Instagram, we have this great promotion this week for half price of whatever. On Instagram, that might just be the image and nothing else, and the description or something else. Yeah. Or you want to make sure, definitely don’t pull the hashtag into the blog, stuff like that.

Drew: Yes. Yeah, and I guess just thinking about Instagram and using content in that way, having this API with your content opens up all sorts of possibilities for generating images. You could pull the content and render text onto an image and post it to Instagram and do all that sort of thing. Imagining doing that with a traditional CMS would just be... It’d be a flight of fancy, it’d be difficult. You’d be fighting against the system rather than working with it.

Alvin: Yeah. And this is where other features, as I said, scheduling for example, can be very important, because you can say, I want to make sure that whenever this campaign launches, we also have the Instagram stuff going out, which are, again, these images generated from the new publish in the CMS. And this is where the CMS itself can have a scheduler, or you can use something like Zapier to capture it and run a script that will then generate the image. You can do all of this in a cron job somewhere. It depends, but this is where these features become important.

Drew: Your content then just sits as one piece in a big chain of loosely joined elements that are delivering your various digital products or what have you onto your customers.

Alvin: And the composable stuff is about this being less loose. It’s to make sure you have some kind of control that’s defined, right? Not like, oh, there’s this cron job here and this app here, and it’s all duct tape.

Drew: Yes. So yeah, it gives you a level of control, and potentially then, quality control or moderation or any of those steps that you might want to put in between. Because it would be possible to federate content in a front-end JavaScript app, you could do that, but then you’re missing that potential gate-keeping or any of those steps that you might want to put in, whereas having a platform that does it for you, or enables that as a feature, sounds super useful. So we’ve been learning all about headless CMSs today. What have you been learning about lately, Alvin?

Alvin: I was on Web Rush last week, so I did a lot of work with Astro, the web framework. It’s been out for a bit, but since 2.0, I feel like they’ve really stepped on the gas and started releasing so many things. So I’ve really been looking into it, and it’s great. I have a blog post coming out. But yeah, I’ve been looking at the docs and learning a lot about all the new features. References, which are amazing, and if you’ve had to deal with Markdown before, the whole type safety that they’ve added to Markdown is really interesting. And the fact that they support all the frameworks is just even better. So yeah, I’ve been learning a lot about Astro recently.

Drew: That’s great. I think we did an episode on Astro, maybe a couple of years ago now, so perhaps it’s time that the Smashing Podcast revisited it.

Alvin: Yeah, there’s a lot of new stuff that came out.

Drew: That’s amazing. If you, dear Listener, would like to hear more from Alvin, you can find his personal website with links to his various projects and social profiles at alvin.codes. Thanks for joining us today, Alvin. Do you have any parting words?

Alvin: No, thank you for having me. I’ve been a reader of Smashing Mag for a long time. My first article, I think, came out last year, which was also a great honor. And yeah, thank you so much for having me.

CSS And Accessibility: Inclusion Through User Choice

https://smashingmagazine.com/2023/08/css-accessibility-inclusion-user-choice/
Tue, 01 Aug 2023 11:00:00 GMT

We make a series of choices every day. Get up early to work out or hit the snooze button? Double foam mocha latte or decaf green tea? Tabs or spaces? Our choices, even the seemingly insignificant ones, shape our identities and influence our perspectives on the world. In today’s modern landscape, we have come to expect a broad range of choices, regardless of the products or services we seek. However, this has not always been the case.

For example, there was a time when the world had only one font family. The first known typeface, a variant of Blackletter, graced Johannes Gutenberg’s pioneering printing press in 1440. The first set of publicly available GUI colors, shipped with the 10th version of the X Window System, consisted of 69 primary shades and 138 entries to account for various color variations (e.g., “dark red”). In September 1995, a Netscape programmer, Brendan Eich, introduced “Mocha,” a scripting language that would later be renamed LiveScript and eventually JavaScript.

Fast forward to the present day, and we now have access to over 650,000 web fonts, a hexadecimal system capable of representing 16,777,216 colors, and over 100 public-facing JavaScript frameworks and libraries to choose from. While this is great news for professionals designing and building user interfaces, what choices are we giving actual users? Shouldn’t they have a say in their experience?

CSS Media Features

While designers and developers may have some insights into user needs, it is very challenging to understand the actual user preferences of 7.8 billion people at any given time. Supporting the needs of individuals with disabilities and assistive technology adds a layer of complexity to an already complex situation. Nonetheless, designers and developers are responsible for addressing these user needs as best we can by providing accessible choices. One promising solution is user-focused CSS media features that allow us to customize the user experience and cater to individual preferences.

Media Features For Color

Let’s first focus on CSS media features for color. Color plays a vital role in design, impacting how we perceive brands. Studies suggest that color alone can influence up to 90% of snap judgments about products. Considering the large number of people worldwide with visual deficiencies such as color blindness and low vision, developers and designers have a significant opportunity to improve accessibility with this element alone.

@prefers-color-scheme

The @prefers-color-scheme CSS media feature helps identify whether users prefer light or dark color themes. Users can indicate their preferences through settings in the operating system or user agent.

There are two values for this CSS media feature: light and dark. Typically, the default theme presented to users is the light version, even if the user expresses no preference. However, the opposite can also be true, and websites or apps default to a dark theme and switch to a light theme using the @media (prefers-color-scheme: light) media feature and corresponding code.

Users opting for a dark mode signifies their preference for a dark-themed page. Using @media (prefers-color-scheme: dark), various theme elements, such as text, links, and buttons, are adjusted to ensure they stand out against darker background colors.

In the past, there was also a no-preference value to indicate when users had no theme preference. However, user agents now treat light themes as the default, rendering the no-preference value obsolete.

@media (prefers-color-scheme: dark) {
  body {
    background-color: #282828;
  }

  .without [data-word="without"] .char:before,
  .without [data-word="without"] .char:after {
    color: #fff;
  }
}

The @prefers-color-scheme media feature is one of the most widely used CSS media features today, with browser support at 94%. It is so popular that additional values may be introduced in the future to express more specific preferences or color schemes, such as sepia or grayscale.
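If you are theming more than a handful of elements, a common pattern (shown here as a minimal sketch rather than as part of the demo code above) is to pair the media feature with CSS custom properties so that both themes share a single set of rules:

:root {
  --bg: #ffffff;
  --text: #111111;
}

/* Only the custom properties change when the user prefers a dark theme. */
@media (prefers-color-scheme: dark) {
  :root {
    --bg: #282828;
    --text: #ffffff;
  }
}

body {
  background-color: var(--bg);
  color: var(--text);
}

Because every component references the same custom properties, the whole page picks up the correct theme without duplicating selectors inside each media query.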

Switching from the default light mode to dark mode is relatively straightforward. Consult the user setting guides for Mac and Windows operating systems to learn more (select the relevant hardware and operating system version), then navigate to a browser that supports this CSS media feature.

Pro-tip: A more sophisticated solution to demo user preference settings is using Chrome’s Rendering tab coupled with CSS media features emulator to easily switch from light to dark modes to emulate @prefers-color-scheme as users experience it. This solution is convenient for live demos where you need to show the user preference changes quickly or emulate media features not fully supported by your OS or browser.

@forced-colors

The @forced-colors CSS media feature enables the detection of the forced colors mode enforced by the user agent. This mode imposes a limited, user-chosen color palette onto the page. This newer media feature provides an alternative approach to handle colors for non-Windows devices, and we expect it will replace Windows High Contrast Mode in the future.

There are two values for the forced-colors media feature: none and active. The @media (forced-colors: none) value indicates that the forced colors mode is inactive and uses the default color scheme, while the @media (forced-colors: active) value means that the forced colors mode is active and the user agent enforces the user-selected limited color palette.

It’s worth noting that enabling @forced-colors mode does not necessarily imply a preference for higher contrast. The color adjustments align with the user’s choice, which may not strictly fit into the low or high-contrast categories.

Note: There are some properties affected by the forced-color mode that you need to be aware of when designing and testing your forced-colors theme. Check out Eric Bailey’s article “Windows High Contrast Mode, Forced Colors Mode And CSS Custom Properties” for more information about this media feature and its integration with CSS custom properties.

@media (forced-colors: active) {
  body {
    background-color: #fcba03;
  }

  .without [data-word="without"] .char:before,
  .without [data-word="without"] .char:after {
    color: #ac1663;
  }

  .without {
    color: #004a72;
  }
}

The @forced-colors CSS media feature is currently supported by 31% of the most popular browsers, including desktop versions of Chrome, Edge, and Firefox. Although the browser support for this feature is increasing, not all operating systems currently offer a setting to activate the forced colors mode. The Windows operating system is the only exception, as it provides the necessary functionality for users to create customized themes that override the default ones by utilizing the Windows High Contrast mode.

If you are using a non-Windows machine, you can emulate the behavior of this media feature by following the steps mentioned earlier in the @prefers-color-scheme section using Chrome’s Rendering tab and emulator, but with a focus on emulating @forced-colors instead.
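If you would rather honor the palette the user has chosen than override it, the CSS system color keywords such as Canvas, CanvasText, and LinkText resolve to the user’s forced colors. The following is a small sketch of that approach (the .card selector is hypothetical, not part of the demo above):

@media (forced-colors: active) {
  .card {
    /* System color keywords map to the user's forced palette. */
    background-color: Canvas;
    color: CanvasText;
    border: 1px solid CanvasText;
  }

  .card a {
    color: LinkText;
  }
}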

@inverted-colors

The @inverted-colors CSS media feature determines whether to show the content in its standard colors or if it reverses the colors.

Two values are available for the @inverted-colors media feature: none and inverted. The @media (inverted-colors: none) value indicates that colors are displayed normally and the default color scheme is used. The @media (inverted-colors: inverted) value indicates that all pixels within the displayed area have been inverted, and it renders the inverted color theme when a user chooses this option.

When writing code for the @inverted-colors CSS media feature, one option is to write your code using the inverted value of what you want a user to see to ensure correct rendering after applying the user’s setting.

For example, you want your element’s background to be #e87b2d, which is a tangerine orange. In the theme code, you would write the opposite color, #1784d2, powder blue. Incorporating this inverse color into the code renders the intended tangerine orange instead of its inverse when users enable the @inverted-colors setting.
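To illustrate that technique with the colors above (a sketch with a hypothetical .banner selector, separate from the demo that follows), you would declare the powder blue in your stylesheet so that the system’s inversion turns it back into the intended tangerine orange:

@media (inverted-colors: inverted) {
  .banner {
    /* #1784d2 is the inverse of #e87b2d, so it renders as tangerine
       orange once the operating system inverts the screen. */
    background-color: #1784d2;
  }
}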

@media (inverted-colors: inverted) {
  body {
    background-color: #99cc66;
  }

  .without [data-word="without"] .char:before,
  .without [data-word="without"] .char:after {
    color: #ee1166;
  }

  .without {
    color: #111111;
  }
}

Current browser support for @inverted-colors stands at 20%, limited to Safari on desktop and iOS. While Chrome’s Rendering tab and emulator do not work for this particular media feature, you can emulate @inverted-colors using Firefox (version 114 or newer).

  1. Open a new tab in Firefox and type or paste about:config in the address bar, and press Enter/Return. Click the button acknowledging that you will be careful.
  2. In the search box, enter layout.css.inverted-colors and wait for the list to be filtered.
  3. Use the toggle button to switch the preference from false to true.
  4. Enable the inverted colors setting in your operating system and navigate to a webpage or code example with the @inverted-colors theme to observe the inverted effect.

The setting for the @inverted-colors media feature is available on Mac and Windows operating systems.

Media Features For Contrast

Next, let’s talk about CSS media features related to contrast. Contrast plays a crucial role in conveying visual information to users, working hand in hand with color. When proper levels of color contrast are not implemented, it becomes difficult to distinguish essential elements such as text, icons, and important graphics. As a result, the design can become inaccessible not only to the 46 million people worldwide with low vision but also to older adults, individuals using monochrome displays, or those in specific situations like low lighting in a room.

@prefers-contrast

The @prefers-contrast CSS media feature detects the user’s preference for higher or lower contrast on a page. The feature uses the information to make appropriate adjustments, such as modifying the contrast ratio between colors nearby or altering the visual prominence of elements, such as adjusting their borders, to better suit the user’s contrast requirements.

There are four values for this CSS media feature: no-preference, less, more, and custom. The @media (prefers-contrast: no-preference) value indicates that the user has no preference (or did not choose one since it is the default setting), and the @media (prefers-contrast: less) value indicates a user’s preference for less contrast. Conversely, the @media (prefers-contrast: more) value indicates a user’s preference for stronger contrast.

The @media (prefers-contrast: custom) value is a bit more complex as it allows users to use a custom set of colors — which could be specific to contrast — or choose a palette. For example, a user could select a theme composed entirely of shades of blue, primary colors, or even a rainbow theme — anything they choose.

Note: When a user selects the custom contrast setting, it will align with the color palette defined by users of forced-colors: active value, so be sure to account for that in the code.
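Color swaps are not the only adjustment worth making. As noted above, the visual prominence of elements can also be tuned; a minimal sketch (with a hypothetical .button selector, separate from the demo below) for the more value might strengthen borders and drop subtle shadows:

@media (prefers-contrast: more) {
  .button {
    /* Thicker, solid borders and no soft shadows read more clearly. */
    border: 2px solid currentColor;
    box-shadow: none;
    text-shadow: none;
  }
}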

@media (prefers-contrast: more) {
  .title2 {
    color: var(--clr-6);
  }

  .aurora2__item:nth-of-type(1),
  .aurora2__item:nth-of-type(2),
  .aurora2__item:nth-of-type(3),
  .aurora2__item:nth-of-type(4) {
    background-color: var(--clr-6);
  }
}

@media (prefers-contrast: less) {
  .title {
    color: var(--clr-5);
  }

  .aurora__item:nth-of-type(1),
  .aurora__item:nth-of-type(2),
  .aurora__item:nth-of-type(3),
  .aurora__item:nth-of-type(4) {
    background-color: var(--clr-5);
  }
}

@media (prefers-contrast: custom) {
  .aurora2__item:nth-of-type(1) {
    background-color: var(--clr-1);
  }
  .aurora2__item:nth-of-type(2) {
    background-color: var(--clr-2);
  }
  .aurora2__item:nth-of-type(3) {
    background-color: var(--clr-3);
  }
  .aurora2__item:nth-of-type(4) {
    background-color: var(--clr-4);
  }
}

Currently, 91% of the most widely used browsers offer support for the @prefers-contrast media feature. However, the majority of this support is focused on enhancing contrast rather than reducing it or allowing for personalized contrast themes.

To effectively demo and test all the different contrast options for this CSS media feature, use the Chrome Rendering tab and emulator as described earlier, but with a specific emphasis on emulating the @prefers-contrast media feature this time.

@prefers-reduced-transparency

The @prefers-reduced-transparency CSS media feature determines if the user has requested the system to use fewer transparent or translucent layer effects.

It takes one of two possible values: no-preference and reduce. The @media (prefers-reduced-transparency: no-preference) value indicates that the user has not specified any preference for the system (this is also the default setting). On the other hand, the @media (prefers-reduced-transparency: reduce) value indicates that the user has informed the system about their preference for an interface that minimizes the application of transparent or translucent layer effects.

@media (prefers-reduced-transparency: reduce) {
  .title,
  .title2 {
    opacity: 0.7;
  }
}

The current browser support for @prefers-reduced-transparency stands at 0%. This CSS media feature is highly experimental and should not be utilized in production code at the time I’m writing this article.

However, if you wish to emulate the @prefers-reduced-transparency media feature behavior, you can follow these steps using Firefox (version 113 or newer).

  1. Open a new tab in Firefox and type or paste about:config in the address bar, and press Enter/Return. Click the button acknowledging that you will be careful.
  2. In the search box, type or paste layout.css.prefers-reduced-transparency and wait for the list to be filtered.
  3. Use the toggle button to switch the preference from the default state of false to true.
  4. Adjust your operating system’s transparency settings and navigate to a webpage or code example with the @prefers-reduced-transparency theme to observe the effect of reduced transparency.

Media Features For Motion

Lastly, let’s turn our focus to motion. Whether it involves videos, GIFs, or SVGs, movement can enrich our online experiences. However, this media type can also adversely affect many individuals. People with vestibular disabilities, seizure disorders, and migraine disorders can benefit from accessible media. CSS media features for motion allow us to incorporate both dynamic movement and static states for elements, enabling us to have the best of both worlds.

@prefers-reduced-motion

Using the @prefers-reduced-motion CSS media feature helps determine whether the user has requested the system to minimize the usage of non-essential motion.

This CSS media feature accepts one of two values: no-preference and reduce. The @media (prefers-reduced-motion: no-preference) value indicates that the user has not specified any preference for the system (this is also the default setting). Conversely, the @media (prefers-reduced-motion: reduce) value indicates that the user has informed the system about their preference for an interface that eliminates or substitutes motion-based animations that may cause discomfort or serve as distractions for them.

@media (prefers-reduced-motion: reduce) {
  .bg-rainbow {
    animation: none;
  }

  .perfection {
    .word {
      .char {
        animation: slide-down 5s cubic-bezier(0.75, 0, 0.25, 1) both;
        animation-delay: calc(#{$delay} + (0.5s * var(--word-index)));
      }
    }

    [data-word="perfection"] {
      animation: slide-over 4.5s cubic-bezier(0.5, 0, 0.25, 1) both;
      animation-delay: $delay;

      .char {
        animation: none;
        visibility: hidden;
      }

      .char:before,
      .char:after {
        animation: split-in 4.5s cubic-bezier(0.75, 0, 0.25, 1) both alternate;
        animation-delay: calc(
          3s + -0.2s * (var(--char-total) - var(--char-index))
        );
      }
    }
  }
}

You can compare the difference in the following videos and by viewing a live demo.
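If a project has too many animations to opt out of one by one, a widely used blanket approach (again a sketch, not part of the demo above) is to shorten every animation and transition to a near-zero duration so that elements still land in their end states:

@media (prefers-reduced-motion: reduce) {
  *,
  *::before,
  *::after {
    /* Keep end states intact while removing the motion itself. */
    animation-duration: 0.01ms !important;
    animation-iteration-count: 1 !important;
    transition-duration: 0.01ms !important;
    scroll-behavior: auto !important;
  }
}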

@prefers-reduced-data

Last but certainly not least, let’s examine the @prefers-reduced-data CSS media feature. This media feature determines whether the user prefers to receive alternate content that consumes less data when rendering the page.

This CSS media feature has two possible values: no-preference and reduce. The @media (prefers-reduced-data: no-preference) value indicates that the user has not specified any preference for the system (which is also the default setting). On the other hand, the @media (prefers-reduced-data: reduce) value indicates that the user has expressed a preference for lightweight alternate content.
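A typical use for this preference, sketched below with hypothetical image file names, is to serve a lighter asset to users who have asked to save data:

.hero {
  background-image: url("hero-2400w.jpg");
}

@media (prefers-reduced-data: reduce) {
  .hero {
    /* Swap in a much smaller image when the user opts to reduce data. */
    background-image: url("hero-800w.jpg");
  }
}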

Unlike other CSS media features, a user’s preference for the @prefers-reduced-data media feature could vary. It may be a system-wide setting exposed by the operating system or settings controlled by the user agent. In the case of the user agent, they may determine this value based on the same user or system preference used for setting the Save-Data HTTP request header.

Note that the Save-Data network client request header is still considered experimental technology, but it has achieved a remarkable 72% browser support across various browsers, except Safari and Firefox on desktop and mobile.

@media (prefers-reduced-data: reduce) {
  .bg-rainbow {
    animation: none;
  }

  .perfection {
    .word {
      .char {
        animation: none;
      }
    }

    [data-word="perfection"] {
      animation: none;

      .char {
        animation: none;
        visibility: hidden;
      }

      .char:before,
      .char:after {
        animation: none;
      }
    }
  }
}

Similar to @prefers-reduced-transparency, the @prefers-reduced-data CSS media feature is highly experimental and should not be utilized in production code at this time as the current browser support for it stands at 0%.

However, if you wish to emulate the @prefers-reduced-data behavior, you can follow these steps using Chrome (version 85 or newer).

  1. Open a new tab in Chrome and type or paste chrome://flags in the address bar and press Enter/Return.
  2. In the search box, type or paste experimental-web-platform-features and wait for the list to be filtered.
  3. Use the dropdown option to switch the preference from the default state of disabled to enabled.
  4. Use the Chrome Rendering tab and choose the appropriate CSS media feature to emulate.

Note that you can also enable the @prefers-reduced-data feature in Edge, Opera, and Chrome Android (all behind the same experimental-web-platform-features flag), but it is less clear how you would emulate the media feature without the rendering tab and emulator found in the desktop version of Chrome.

Amplifying Inclusion Through User Choice

In the tech world, accessibility often receives criticism, particularly when it comes to aesthetics and advanced features. However, this negative perception can be changed. It is possible to incorporate stunning design and innovative functionality while prioritizing accessibility by leveraging CSS user-focused media features that address color, contrast, and motion.

Today, by incorporating all available options for each CSS media feature currently supported by browsers (with support exceeding 90%), you can provide users with 16 combinations of options (two color-scheme values multiplied by four contrast values and two reduced-motion values). However, when browsers and operating systems implement and support the more experimental media features, the impact on user customization expands significantly to a staggering 256 combinations of options. This large number of possible options truly amplifies the potential impact designers and developers can have on user experiences.

As professionals within the technology industry, our goal should be to ensure that digital products are accessible to all individuals. By offering users the ability to personalize their experience, we can include an array of remarkable features in a responsible manner. Our job is to provide options and let people choose their own adventure.

Written by Carie Fisher.

Living In The Moment (August 2023 Wallpapers Edition)

https://smashingmagazine.com/2023/07/desktop-wallpaper-calendars-august-2023/
Mon, 31 Jul 2023 09:50:00 GMT

Everybody loves a beautiful wallpaper to freshen up their desktops and home screens, right? To cater for new and unique artworks on a regular basis, we started our monthly wallpapers series more than twelve years ago, and from the very beginning to today, artists and designers from across the globe have accepted the challenge and submitted their designs to it. Just like this month.

In this post, you’ll find their wallpapers for August 2023. All of them come in versions with and without a calendar, so no matter if you need to count down the days to a big deadline (or a few days off, maybe?) or plan to use your favorite wallpaper even after the month has ended, we’ve got you covered. A big thank-you to everyone who shared their designs with us — we sincerely appreciate it!

As a little bonus goodie, we also added some “oldies but goodies” at the end of this post, timeless wallpaper treasures that we rediscovered way down in our archives and that are just too good to be forgotten. Now there’s only one question left to be answered: Which one to choose? Happy August!

  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t in any way influenced by us but rather designed from scratch by the artists themselves.
  • Submit a wallpaper!
    Did you know that you could get featured in our next wallpapers post, too? We are always looking for creative talent.
Summer Day

Designed by Kasturi Palmal from India.

Retro Road Trip

“As the sun dips below the horizon, casting a warm glow upon the open road, the retro van finds a resting place for the night. A campsite bathed in moonlight or a cozy motel straight from a postcard become havens where weary travelers can rest, rejuvenate, and prepare for the adventures that await with the dawn of a new day.” — Designed by PopArt Studio from Serbia.

Spooky Campfire Stories

Designed by Ricardo Gimenes from Sweden.

Party Night Under The Stars

“August… it’s time for a party and summer vacation — sea, moon, stars, music… and magical vibrant colors.” — Designed by Teodora Vasileva from Bulgaria.

Japanese Fashion Week

Designed by Ricardo Gimenes from Sweden.

Train Ride

“We got on a plane and went to the other part of the world to Australia. In this case, we got to Melbourne and we are ready to go to Flinders Street Station to catch a train and move around this wonderful country.” — Designed by Veronica Valenzuela from Spain.

Proud

“Chandrayaan-3 is the third and most recent lunar exploration mission of the Indian Space Research Organisation under the Chandrayaan programme. It consists of a lander named Vikram and a rover named Pragyan, similar to Chandrayaan-2, but does not have an orbiter. Its propulsion module behaves like a communication relay satellite.” — Designed by Bhabna Basak from India.

Flowers

Designed by Sahra Tamo from Turkey.

Oldies But Goodies

Going for a swim, the smell of a summer field, or that sweet feeling of freedom when you’re on vacation — a lot of things have inspired the design community to create a wallpaper for August in the past few years. Here are some favorites from our wallpapers archives. Please note that these designs don’t come with a calendar.

Happiness Happens In August

“Many people find August one of the happiest months of the year because of holidays. You can spend days sunbathing, swimming, birdwatching, listening to their joyful chirping, and indulging in sheer summer bliss. August 8th is also known as the Happiness Happens Day, so make it worthwhile.” — Designed by PopArt Studio from Serbia.

Swimming In The Summer

“It’s the perfect evening and the water is so warm! Can you feel it? You move your legs just a little bit and you feel the water bubbles dancing around you! It’s just you in there, floating in the clean lake and small sparkly lights shining above you! It’s a wonderful feeling, isn’t it?” — Designed by Creative Pinky from the Netherlands.

Subtle August Chamomiles

“Our designers wanted to create something summery, but not very colorful, something more subtle. The first thing that came to mind was chamomile because there are a lot of them in Ukraine and their smell is associated with a summer field.” — Designed by MasterBundles from Ukraine.

Bee Happy!

“August means that fall is just around the corner, so I designed this wallpaper to remind everyone to ‘bee happy’ even though summer is almost over. Sweeter things are ahead!” — Designed by Emily Haines from the United States.

Colorful Summer

“‘Always keep mint on your windowsill in August, to ensure that the buzzing flies will stay outside where they belong. Don’t think summer is over, even when roses droop and turn brown and the stars shift position in the sky. Never presume August is a safe or reliable time of the year.’ (Alice Hoffman)” — Designed by Lívi from Hungary.

Psst, It’s Camping Time…

“August is one of my favorite months, when the nights are long and deep and crackling fire makes you think of many things at once and nothing at all at the same time. It’s about heat and cold which allow you to touch the eternity for a few moments.” — Designed by Igor Izhik from Canada.

Coffee Break Time

Designed by Ricardo Gimenes from Sweden.

Work Hard, Play Hard

“It seems the feeling of summer breaks we had back in school never leaves us. The mere thought of alarm clocks feels wrong in the summer, especially if you’ve recently come back from a trip to the seaside. So, we try to do our best during working hours and then compensate with fun activities and plenty of rest. Cheers!” — Designed by ActiveCollab from the United States.

Ivory Tower

“August 12th marks World Elephant Day, highlighting the need for the protection and conservation of wild elephants across Asia and Africa. Today, African elephants are endangered due to wildlife crime, primarily poaching for the illegal ivory trade, whereas Asian elephants face habitat loss due to human-elephant conflict. Driven to the brink of extinction, elephants rely on us to create a non-exploitive and sustainable environment where these magnificent creatures can be safe.” — Designed by PopArt Studio from Novi Sad, Serbia.

Love Is In The Air

Designed by Ricardo Gimenes from Sweden.

Ahoy

Designed by Webshift 2.0 from South Africa.

Handwritten August

“I love handwritten-style typography.” — Designed by Chalermkiat Oncharoen from Thailand.

Oh La La… Paris’ Night

“I like Paris at night! Everything is very bright!” — Designed by Verónica Valenzuela from Spain.

Live In The Moment

“My dog Sami inspired me for this one. He lives in the moment and enjoys every second with a big smile on his face. I wish we could learn to enjoy life like he does! Happy August everyone!” — Designed by Westie Vibes from Portugal.

Summer Nap

Designed by Dorvan Davoudi from Canada.

I Love Summer

“I love the summer nights and the sounds of the sea, the crickets, the music of some nice party.” — Designed by Maria Karapaunova from Bulgaria.

Shrimp Party

“A nice summer shrimp party!” — Designed by Pedro Rolo from Portugal.

Grow Where You Are Planted

“Every experience is a building block on your own life journey, so try to make the most of where you are in life and get the most out of each day.” — Designed by Tazi Design from Australia.

Traveler In Time

“During bright summer days, while wandering around unfamiliar places, it is finally forgotten that one of the biggest human inventions is time itself, future becomes the past, past becomes the present and there are no earthly boundaries, just air.” — Designed by Ana Masnikosa from Belgrade, Serbia.

Cowabunga

Designed by Ricardo Gimenes from Sweden.

Treat Yourself

“It’s still winter in my part of Australia so warm coffee and donuts by the open fire is a treat. For warmer climates an outdoor picnic in the park with coffee and donuts sounds fun, too!” — Designed by Glynnis Owen from Australia.

A Bloom Of Jellyfish

“I love going to aquariums – the colors, patterns and array of blue hues attract the nature lover in me while still appeasing my design eye. One of the highlights is always the jellyfish tanks. They usually have some kind of light show in them, which makes the jellyfish fade from an intense magenta to a deep purple — and it literally tickles me pink. On a recent trip to uShaka Marine World, we discovered that the collective noun for jellyfish is a bloom and, well, it was love-at-first-collective-noun all over again. I’ve used some intense colors to warm up your desktop and hopefully transport you into the depths of your own aquarium.” — Designed by Wonderland Collective from South Africa.

About Everything

“I know what you’ll do this August. Because August is about holiday. It’s about exploring, hiking, biking, swimming, partying, feeling, and laughing. August is about making awesome memories and enjoying the summer. August is about everything. An amazing August to all of you!” — Designed by Ioana Bitin from Bucharest, Romania.

Murder Of Crows

“The inspiration for the Murder Of Crows came from ‘The Raven’ – a poem written by Edgar Allan Poe in January 1845. The Murder Of Crows is part of our ‘Wonderland Collective Noun’ collection. Each month a new interesting collective noun is illustrated, printed and made into a desktop wallpaper.” — Designed by Wonderland Collective from South Africa.

The Ocean Is Waiting

“In August, make sure you swim a lot. Be cautious though.” — Designed by Igor Izhik from Canada.

Vacation Vibes

“Is time crawling by while you’re eagerly awaiting your vacation? Or are you back in the office, reminiscing about the sweet feeling of freedom? Never mind, because our desktop calendar is here to bring a vacation vibe to your life throughout the entire August.” — Designed by PopArt Studio from Serbia.

Childhood Memories

Designed by Francesco Paratici from Australia.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[How To Define An Array Of Colors With CSS]]> https://smashingmagazine.com/2023/07/define-array-colors-css/ https://smashingmagazine.com/2023/07/define-array-colors-css/ Fri, 28 Jul 2023 10:00:00 GMT CSS is mainly known as a language based on a set of property-value pairs. You select an element, define the properties, and write styles for it. There’s nothing wrong with this approach, but CSS has evolved a lot recently, and we now have more robust features, like variables, math formulas, conditional logic, and a bunch of new pseudo selectors, just to name a few.

What if I tell you we can also use CSS to create an array? More precisely, we can create an array of colors. Don’t try to search MDN or the specification because this is not a new CSS feature but a combination of what we already have. It’s like we’re remixing CSS features into something that feels new and different.

For example, how cool would it be to define a variable with a comma-separated array of color values:

--colors: red, blue, green, purple;

Even cooler is being able to change an index variable to select only the color we need from the array. I know this idea may sound impossible, but it is possible — with some limitations, of course, and we’ll get to those.

Enough suspense. Let’s jump straight into the code!

An Array Of Two Colors

We will first start with a basic use case with two colors defined in a variable:

--colors: black, white;

For this one, I will rely on the new color-mix() function. MDN has a nice way of explaining how the function works:

The color-mix() functional notation takes two <color> values and returns the result of mixing them in a given colorspace by a given amount.

The trick is not to use color-mix() for its designed purpose — mixing colors — but to use it instead to return one of the two colors in its argument list.

:root {
  --colors: black, white; /* define an array of color values */
  --i: 0; 

  --_color: color-mix(in hsl, var(--colors) calc(var(--i) * 100%));
}

body {
  color: var(--_color);
}

So far, all we’ve done is assign the array of colors to a --colors variable, then update the index, --i, to select the colors. The index starts from 0, so it’s either 0 or 1, kind of like a Boolean check. The code may look a bit complex, but it becomes clear if we replace the variables with their values. For example, when i=0:

--_color: color-mix(in hsl, black, white 0%);

This results in black because the amount of white is 0%. We mixed 100% black with 0% white to get solid black. When i=1:

--_color: color-mix(in hsl, black, white 100%);

I bet you already know what happens. The result is solid white because the amount of white is 100% while black is 0%.

Think about it: We just created a color switch between two colors using a simple CSS trick. This sort of technique can be helpful if, say, you want to add a dark mode to your site’s design. You define both colors inside the same variable.
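For example, here is a minimal sketch of how the two-color switch could power a dark theme. The .dark class toggled on the <html> element is an assumption for illustration, not part of the original demo; toggle --i however it fits your setup:

:root {
  --colors: black, white; /* light-theme color first, dark-theme color second */
  --i: 0;                 /* 0 = black text, 1 = white text */

  --_color: color-mix(in hsl, var(--colors) calc(var(--i) * 100%));
}

/* hypothetical theme class toggled on <html> by a theme switcher */
:root.dark {
  --i: 1;
}

body {
  color: var(--_color);
}

Because both --i and --_color live on :root, flipping the class re-evaluates the color-mix() result for the whole page.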

What about more than two colors? In that case, the trick is to paint the whole array as a linear gradient and then extract a single color from it based on the index. By definition, a gradient transitions between colors, but we have at least a few pixels of each actual color defined in the array, while we have a mixture or blend of colors in between them. At the very top, we can find red. At the very bottom, we can find purple. And so on.

What if we make the gradient really big and display only a small slice of it? Each slice is then essentially a solid color, and we can use background-position to choose which slice, and therefore which color, is shown:

background-position: 0 calc(var(--i) * 100% / (var(--n) - 1));

Here’s the complete code:

.box {
  --colors: red, blue, green, purple; /* color array */
  --n: 4; /* length of the array */
  --i: 0; /* index of the color [0 to N-1] */

  background:
    linear-gradient(var(--colors)) no-repeat
     0 calc(var(--i)*100%/(var(--n) - 1)) /* position */
     /100% calc(1px*infinity);  /* size */
}

Note: I used no-repeat in the background property. That keyword should be unnecessary, but for some reason, it’s not working without it. It might be that browsers cannot repeat gradients that have an infinite size.

A demo illustrating this linear-gradient version of the trick is available on CodePen.

Another idea is to switch to a conic-gradient and, once again, make our gradient very big by multiplying it by infinity. This time, infinity calculates the gradient’s width and height.

background-size: calc(1px * infinity) calc(1px * infinity);

We place the gradient at the top to zoom in on the top color:

background-position: top;

Then we rotate the gradient to select the color we want:

from calc((var(--i) + 1) * -1turn / var(--n))

It’s like having a color wheel where we only display a few pixels from the top.

Since what we have is essentially a color wheel, we can turn it as many times as we want in any direction and always get a color. This trick allows us to use any value we want for the index! After a full rotation, we get back to the same color.

See the Pen Colors array using only CSS II by Temani Afif.

Note that CSS does have a mod() function. So, instead of the conical gradient implementation, we can also update the first method that uses the linear gradient like this:

.box {
  --colors: red, blue, green, purple; /* color array */
  --n: 4; /* array length  */
  --i: 0; /* index  */

  --_i: mod(var(--i), var(--n)); /* the used index */
  background:
    linear-gradient(var(--colors)) no-repeat
     0 calc(var(--_i) * 100% / (var(--n) - 1)) /* position */
     / 100% calc(1px * infinity);  /* size */
}

I didn’t test the above code because browser support for mod() is still limited. That said, you can keep this idea somewhere, as it might be helpful in the future and is probably more intuitive than the conic gradient approach.

What Are The Limitations?

First, I consider this approach more of a hack than a CSS feature. So, use it cautiously. I’m not totally sure if there are implications to multiplying things by infinity. Forcing the browser to use a huge gradient can lead to a performance lag or, worse, accessibility issues. If you spot anything, please share it in the comments so I can adjust this accordingly.

Another limitation is that this can only be used with the background property. We could overcome this with other tricks, like using background-clip: text to manipulate text color. But since this uses gradients, which are only supported by specific properties, usage is limited.
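For instance, here is a rough sketch of how the background-clip idea could be combined with the linear-gradient method from above. Consider it untested and subject to the same infinity caveats; the class name is only for illustration:

.fancy-text {
  --colors: red, blue, green, purple; /* color array */
  --n: 4; /* length of the array */
  --i: 2; /* selects green */

  background:
    linear-gradient(var(--colors)) no-repeat
     0 calc(var(--i) * 100% / (var(--n) - 1)) /* position */
     / 100% calc(1px * infinity);             /* size */
  -webkit-background-clip: text;
  background-clip: text;
  color: transparent; /* the text is “filled” with the selected color */
}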

The two-color method is safe since it doesn’t rely on any hack. I don’t see any drawbacks to using it on real projects.

Wrapping Up

I hope you enjoyed this little CSS experimentation. We went from a simple two-color switch to an array of colors without adding much code. Now if someone tells you that CSS isn’t a programming language, you can tell them, “Hey, we have arrays!”

Now it’s your turn. Please show me what you will build using this trick. I will be waiting to see what you make, so share below!

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Temani Afif)
<![CDATA[How To Use Artificial Intelligence And Machine Learning To Summarize Chat Conversations]]> https://smashingmagazine.com/2023/07/artificial-intelligence-machine-learning-summarize-chat-conversations/ https://smashingmagazine.com/2023/07/artificial-intelligence-machine-learning-summarize-chat-conversations/ Thu, 27 Jul 2023 15:00:00 GMT As developers, we often deal with large volumes of text, and making sense of it can be a challenge. In many cases, we might only be interested in a summary of the text or a quick overview of its main points. This is where text summarization comes in.

Text summarization is the process of automatically creating a shorter version of a text that preserves its key information. It has many applications in natural language processing (NLP), from summarizing news articles to generating abstracts for scientific papers. Even products, including Notion, are integrating AI features that will summarize a block of text on command.

One interesting use case is summarizing chat conversations, where the goal is to distill the main topics and ideas discussed during the conversation. That’s what we are going to explore in this article. Whether you’re an experienced developer or just getting started with natural language processing, this article will provide a practical guide to building a chat summarizer from scratch. By the end, you’ll have a working chat summarizer that you can use to extract the main ideas from your own chat conversations — or any other text data that you might encounter in your projects.

The best part about all of this is that accessing and integrating these sorts of AI and NLP capabilities is easier than ever. Where something like this may have required workarounds and lots of dependencies in the not-so-distant past, there are APIs and existing models readily available that we can leverage. I think you may even be surprised by how few steps there are to pull off this demo of a tool that summarizes chat conversations.

Cohere: Chat Summarization Made Easy

Cohere is a cloud-based natural language processing platform that enables developers to build sophisticated language models without requiring deep expertise in machine learning. It offers a range of powerful tools for text classification, entity extraction, sentiment analysis, and more. One of its most popular features is chat summarization, which can automatically generate a summary of a conversation.

Using the Cohere API for chat summarization is a simple and effective way to summarize chat conversations. It takes only a few lines of code to implement and can be used to summarize any chat conversation in real time.

The chat summarization function of Cohere works by using natural language processing algorithms to analyze the text of the conversation. These algorithms identify important sentences and phrases, along with contextual information like speaker identity, timestamps, and sentiment. The output is a brief summary of the conversation that includes essential information and main points.

Using The Cohere API For Chat Summarization

Now that we have a basic understanding of Cohere API and its capabilities, let’s dive into how we can use it to generate chat summaries. In this section, we will discuss the step-by-step process of generating chat summaries using Cohere API.

To get started with the Cohere API, first, you’ll need to sign up for an API key on the Cohere website. Once you have an API key, you can install the Cohere Python package using pip:


pip install cohere

Next, you’ll need to initialize the cohere client by providing the API key:

import cohere

# initialize Cohere client
co = cohere.Client("YOUR_API_KEY")

Once the client is initialized, we can provide input for the summary. In the case of chat summarization, we need to provide the conversation as input. Here’s how you can provide input for the summary:

conversation = """
Senior Dev: Hey, have you seen the latest pull request for the authentication module?
Junior Dev: No, not yet. What’s in it?
Senior Dev: They’ve added support for JWT tokens, so we can use that instead of session cookies for authentication.
Junior Dev: Oh, that’s great. I’ve been wanting to switch to JWT for a while now.
Senior Dev: Yeah, it’s definitely more secure and scalable. I’ve reviewed the code and it looks good, so go ahead and merge it if you’re comfortable with it.
Junior Dev: Will do, thanks for the heads-up!
"""

Now that we have provided the input, we can generate the summary using the co.summarize() method. We can also specify parameters for the summary, such as the model, length, and extractiveness. Here’s how you can generate the summary:

response = co.summarize(
    conversation,
    model = 'summarize-xlarge',
    length = 'short',
    extractiveness = 'high',
    temperature = 0.5,
)
summary = response.summary

Finally, we can output the summary using print() or any other method of our choice. Here’s how you can output the summary:

print(summary)

And that’s it! With these simple steps, we can generate chat summaries using Cohere API. In the next section, we will discuss how we can deploy the chat summarizer using Gradio.

Deploying The Chat Summarizer To Gradio

Gradio is a user interface library for quickly prototyping machine learning (ML) models. By deploying our chat summarizer model in Gradio, we can create a simple and intuitive interface that anyone can use to summarize conversations.

To get started, we need to import the necessary libraries:

import gradio as gr
import cohere

If you don't have Gradio installed on your machine yet, don't worry! You can easily install it using pip. Open up your terminal or command prompt and enter the following command:

!pip install gradio

This will install the latest version of Gradio and any dependencies that it requires. Once you’ve installed Gradio, you’re ready to start building your own machine learning-powered user interfaces.

Next, we need to initialize the Cohere client. This is done using the following line of code:

co = cohere.Client("YOUR API KEY")

The Client object allows us to interact with the Cohere API, and the API key is passed as an argument to authenticate the client.

Now we can define the chat summarizer function:

def chat_summarizer(conversation):
    # generate summary using Cohere API
    response = co.summarize(conversation, model = 'summarize-xlarge', length = 'short', extractiveness = 'high', temperature = 0.5)
    summary = response.summary

    return summary

The chat_summarizer function takes the conversation text as input and generates a summary using the Cohere API. We pass the conversation text to the co.summarize method, along with the parameters that specify the model to use and the length and extractiveness of the summary.

Finally, we can create the Gradio interface using the following code:

chat_input = gr.inputs.Textbox(lines = 10, label = "Conversation")
chat_output = gr.outputs.Textbox(label = "Summary")

chat_interface = gr.Interface(
  fn = chat_summarizer,
  inputs = chat_input,
  outputs = chat_output,
  title = "Chat Summarizer",
  description = "This app generates a summary of a chat conversation using Cohere API."
)

The gr.inputs.Textbox and gr.outputs.Textbox objects define the input and output fields of the interface, respectively. We pass these objects, along with the chat_summarizer function, to the gr.Interface constructor to create the interface. We also provide a title and description for the interface.

To launch the interface, we call the launch method on the interface object:

chat_interface.launch()

This will launch a webpage with our interface where users can enter their dialogue and generate a summary with a single click.

Conclusion

In today’s fast-paced digital world, where communication happens mostly through chat, chat summarization plays a vital role in saving time and improving productivity. The ability to quickly and accurately summarize lengthy chat conversations can help individuals and businesses make informed decisions and avoid misunderstandings.

Imagine using it to summarize a chain of email replies, saving you time from having to untangle the conversation yourself. Or perhaps you’re reviewing a particularly dense webpage of content, and the summarizer can help distill the essential points.

With the help of advanced AI and NLP techniques, summarization features have become more accurate and efficient than ever before. So, if you haven't tried summarizing yet, I highly encourage you to give it a try and share your feedback. It could be a game-changer in your daily communication routine.

]]>
hello@smashingmagazine.com (Joas Pambou)
<![CDATA[Modern Technology And The Future Of Language Translation]]> https://smashingmagazine.com/2023/07/modern-technology-future-language-translation/ https://smashingmagazine.com/2023/07/modern-technology-future-language-translation/ Wed, 26 Jul 2023 12:00:00 GMT Multilingual content development presents its own set of difficulties, necessitating close attention to language translations and the use of the right tools. The exciting part is that translation technology has advanced remarkably over time.

In this article, we’ll explore the growth of translation technology throughout time, as well as its origins, and lead up to whether machine translation and artificial intelligence (AI) actually outperform their conventional counterparts when it comes to managing translations. In the process, we’ll discuss the fascinating opportunities offered by automated approaches to language translation as we examine their advantages and potential drawbacks.

And finally, we will speculate on the future of language translation, specifically the exhilarating showdown between OpenAI and Google in their race to dominate the AI landscape.

The Evolution Of Translation Technology

Translation technology can be traced back to Al-Kindi’s Manuscript on Deciphering Cryptographic Messages. However, with the arrival of computers in the mid-twentieth century, translation technology began taking shape. Over the years, significant milestones have marked the evolution, shaping how translations are performed and enhancing the capabilities of language professionals.

Georgetown University and IBM conducted the so-called Georgetown-IBM experiment in the 1950s. The experiment was designed primarily to capture governmental and public interests and funding by demonstrating machine translation capabilities. It was far from a fully featured system. This early system, however, was rule-based and lexicographical, resulting in low reliability and slow translation speeds. Despite its weaknesses, it laid the foundation for future advancements in the field.

The late 1980s and early 1990s marked the rise of statistical machine translation (SMT) pioneered by IBM researchers. By leveraging bilingual corpora, SMT improved translation accuracy and laid the groundwork for more advanced translation techniques.

In the early 1990s, commercial computer-assisted translation (CAT) tools became widely available, empowering translators and boosting productivity. These tools utilized translation memories, glossaries, and other resources to support the translation process and enhance efficiency.

The late 1990s saw IBM release a rule-based statistical translation engine (pdf), which became the industry standard heading into the new century. IBM’s translation engine introduced predictive algorithms and statistical translation, bringing machine translation to the forefront of language translation technology.

In the early 2000s, the first cloud-based translation management systems (TMS) began appearing in the market. While there were some early non-cloud-based versions in the mid-1980s, these modern systems transformed the translation process by allowing teams of people to work more flexibly and collaborate with other company members regardless of their location. The cloud-based approach improved accessibility, scalability, and collaboration capabilities, completely changing how translation projects were managed.

2006 is a significant milestone in translation management because it marks the launch of Google Translate. Using predictive algorithms and statistical translation, Google Translate brought machine translation to the masses and has remained the de facto tool for online multilingual translations. Despite its powerful features, it gained a reputation for inaccurate translations. Still, it plays a pivotal role in making translation technology more widely known and utilized, paving the way for future advancements.

In 2016, Google Translate made a significant leap by introducing neural machine translation (NMT). NMT surpassed previous translation tools, offering improved quality, fluency, and context preservation.

NMT set a new commercial standard and propelled the field forward. By 2017, DeepL emerged as an AI-powered machine translation system renowned for its high-quality translations and natural-sounding output. DeepL’s capabilities further demonstrated the advancements achieved in the field of translation technology.

From 2018 onward, the focus has remained on enhancing NMT models, which continue to outperform traditional statistical machine translation (SMT) approaches. NMT has proven instrumental in improving translation accuracy and has become the preferred approach in today’s many translation applications.

What Translation Technology Came Into Place Over the Years

Translation technology has evolved significantly over the years, offering various tools to enhance the translation process. The main types of translation technology include:

  • Computer-assisted translation (CAT)
    These software applications support translators by providing databases of previous translations, translation memories, glossaries, and advanced search and navigation tools. CAT tools revolutionize translation by improving efficiency and enabling translators to focus more on the translation itself.
  • Machine translation (MT)
    Machine translation is an automated system that produces translated content without human intervention. It can be categorized into rule-based (RBMT), statistical (SMT), or neural (NMT) approaches. MT’s output quality varies based on language pairs, subject matter, pre-editing, available training data, and post-editing resources. Raw machine translation may be used for low-impact content while post-editing by human translators is advisable for high-impact or sensitive content.
  • Translation management systems (TMS)
    TMS platforms streamline translation project management, offering support for multiple languages and file formats, real-time collaboration, integration with CAT tools and machine translation, reporting features, and customization options. TMS solutions ensure organized workflow and scalability for efficient translation project handling.

Translation technology advancements have transformed the translation process, making it more efficient, cost-effective, and scalable.

Finding The Right Translation Approach: Machine Vs. Human

Finding the proper translation approach involves weighing the benefits and drawbacks of machine translation (MT) and human translation. Each approach has its own strengths and considerations to take into account.

Human translation, performed by professional linguists and subject-matter experts, offers accuracy, particularly for complex documents like legal and technical content. Humans can grasp linguistic intricacies and apply their own experiences and instincts to deliver high-quality translations. They can break down a language, ensure cultural nuances are correctly understood, and inject creativity to make the content compelling.

Collaborating with human translators allows direct communication, reducing the chances of missing project objectives and minimizing the need for revisions.

That said, human translation does have some downsides, namely that it is resource-intensive and time-consuming compared to machine translation. If you have ever worked on a multilingual project, then you understand the costs associated with human translation — not every team has a resident translator, and finding one for a particular project can be extremely difficult. The costs often run high, and the process may not align with tight timelines or projects that prioritize speed over contextual accuracy.

Nevertheless, when it comes to localization and capturing the essence of messaging for a specific target audience, human translators excel in fine-tuning the content to resonate deeply. Machine translation cannot replicate the nuanced touch that human translators bring to the table.

On the other hand, machine translation — powered by artificial intelligence and advanced algorithms — is rapidly improving its understanding of context and cultural nuances. Machine translation offers speed and cost-efficiency compared to manual translation, making it suitable for certain projects that prioritize quick turnarounds and where contextual accuracy is not the primary concern.

Modern TMSs often integrate machine and human translation capabilities, allowing users to choose the most appropriate approach for their specific requirements. Combining human translators with machine translation tools can create a powerful translation workflow. Machine translation can be used as a starting point and paired with human post-editing to ensure linguistic precision, cultural adaptation, and overall quality.

Translation management systems often provide options for leveraging both approaches, allowing for flexibility and optimization based on the content, time constraints, budget, and desired outcome. Ultimately, finding the proper translation approach depends on the content’s nature, the desired accuracy level, project objectives, budget considerations, and time constraints. Assessing these factors and considering the advantages and disadvantages of human and machine translation will guide you in making informed decisions that align with your or your team’s needs and goals.

AI and Machine Translation

Thanks to machine learning and AI advancements, translation technology has come a long way in recent years. However, complete translation automation is not yet feasible, as human translators and specialized machine translation tools offer unique advantages that complement each other.

The future of translation lies in the collaboration between human intelligence and AI-powered machine translation. Human translators excel in creative thinking and adapting translations for specific audiences, while AI is ideal for automating repetitive tasks.

This collaborative approach could result in a seamless translation process where human translators and AI tools work together in unison.

Machine-translation post-editing ensures the accuracy and fluency of AI-generated translations, while human translators provide the final touches to cater to specific needs. This shift should lead to a transition from computer-assisted human translation to human-assisted computer translation. Translation technology will continue to evolve, allowing translators to focus on more complex translations while AI-powered tools handle tedious tasks. It is no longer a question of whether to use translation technology but which tools to utilize for optimal results.

The future of translation looks promising as technology empowers translators to deliver high-quality translations efficiently, combining the strengths of human expertise and AI-powered capabilities.

The Rise of Translation Management Systems

Regarding AI and human interaction, TMSs play a crucial role in facilitating seamless collaboration. Here are five more examples of how TMSs enhance the synergy between human translators and AI.

Terminology Management

TMSs offer robust terminology management features, allowing users to create and maintain comprehensive term bases or glossaries, ensuring consistent usage of specific terminology across translations, and improving accuracy.

Quality Assurance Tools

TMSs often incorporate quality assurance tools that help identify potential translation errors and inconsistencies. These tools can flag untranslated segments, incorrect numbers, or inconsistent translations, enabling human translators to review and rectify them efficiently.

Workflow Automation

TMSs streamline the translation process by automating repetitive tasks. They can automatically assign translation tasks to appropriate translators, track progress, and manage deadlines. This automation improves efficiency and allows human translators to focus more on the creative aspects of translation, like nuances in the voice and tone of the content.

Collaboration And Communication

TMSs provide collaborative features that enable real-time communication and collaboration among translation teams. They allow translators to collaborate on projects, discuss specific translation challenges, and share feedback, fostering a cohesive and efficient workflow.

Reporting And Analytics

TMSs offer comprehensive reporting and analytics capabilities, providing valuable insights into translation projects. Users can track project progress, measure translator productivity, and analyze translation quality, allowing for continuous improvement and informed decision-making.

By leveraging the power of translation management systems, the interaction between AI and human translators becomes more seamless, efficient, and productive, resulting in high-quality translations that meet the specific needs of each project.

Google And OpenAI Competition

We’re already seeing competition brewing between Google and OpenAI for dominance in AI-powered search and generated content. I expect 2024 to be the year that clash extends to translation technology.

That said, when comparing OpenAI’s platform to Google Translate or DeepL, it’s important to consider the respective strengths and areas of specialization of each one. Let’s briefly consider the strengths of each one to see precisely how they differ.

Continuously Improved And Robust Translation

Google Translate and DeepL are dedicated to the field of machine translation and have been, for many years, focusing on refining their translation capabilities.

As a result, they have developed robust systems that excel in delivering high-quality translations. These platforms have leveraged extensive data and advanced techniques to improve their translation models, addressing real-world translation challenges continuously. Their systems’ continuous refinement and optimization have allowed them to achieve impressive translation accuracy and fluency.

Generating Text

OpenAI primarily focuses on generating human-like text and language generation tasks.

While OpenAI’s models, including ChatGPT, can perform machine translation tasks, they may not possess the same level of specialization and domain-specific knowledge as Google Translate and DeepL.

The primary objective of OpenAI’s language models is to generate coherent and contextually appropriate text rather than specifically fine-tuning their models for machine translation.

Compared to ChatGPT, Google Translate and DeepL excel in domain-specific sentences while factoring in obstacles to translation, such as background sounds when receiving audio input. In that sense, Google Translate and DeepL have demonstrated their ability to handle real-world translation challenges effectively, showcasing their continuous improvement and adaptation to different linguistic contexts.

The Future Of Machine Translation

Overall, when it comes to machine translation, Google Translate and DeepL have established themselves as leaders in the field, with a focus on delivering high-quality translations. Their extensive experience and focus on continual improvement contribute to their reputation for accuracy and fluency. While OpenAI’s ChatGPT models technically offer translation capabilities, they may not possess the same level of specialization or optimization tailored explicitly for machine translation tasks.

It’s important to note that the landscape of machine translation is continuously evolving, and the relative strengths of different platforms may change over time. While Google Translate and DeepL have demonstrated their superiority in translation quality, it’s worth considering that OpenAI’s focus on language generation and natural language processing research could benefit future advancements in their machine translation capabilities. Together, the three systems could make a perfect trifecta of accurate translations, speed and efficiency, and natural language processing.

OpenAI’s commitment to pushing the boundaries of AI technology and its track record of innovation suggests it may invest more resources in improving machine translation performance. As OpenAI continues to refine its models and explore new approaches, there is a possibility that it could bridge that gap and catch up with Google Translate and DeepL in terms of translation quality and specialization.

The machine translation landscape is highly competitive, with multiple research and industry players continuously striving to enhance translation models. As advancements in machine learning and neural networks continue, it’s conceivable that newer platforms or models could emerge and disrupt the current dynamics, introducing even higher-quality translations or specialized solutions in specific domains.

So, even though Google Translate and DeepL currently hold an advantage regarding translation quality and domain-specific expertise today in 2023, it’s essential to acknowledge the potential for future changes in the competitive landscape in the years to come. As technology progresses and new breakthroughs occur, the relative strengths and weaknesses of different platforms may shift, leading to exciting developments in the field of machine translation.

Conclusion

In summary, the evolution of translation technology has brought advancements to the multilingual space:

  • The choice of translation approach depends on project requirements, considering factors such as accuracy, budget, and desired outcomes.
  • Machine translation offers speed and cost-efficiency, while human translation excels in complex content.
  • Collaboration between human translators and AI-powered machines is best to get accurate translations that consider voice and tone.
  • Translation management systems are crucial in facilitating collaboration between AI and human translators.

While Google Translate and DeepL have demonstrated higher translation quality and specialization, OpenAI’s focus on human-like text generation may lead to improvements in machine translation capabilities. And those are only a few of the providers.

That means the future of translation technology is incredibly bright as platforms, like locize, continue to evolve. As we’ve seen, there are plenty of opportunities to push this field further, and the outcomes will be enjoyable to watch in the coming years.

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Adriano Raiano)
<![CDATA[Recreating YouTube’s Ambient Mode Glow Effect]]> https://smashingmagazine.com/2023/07/recreating-youtube-ambient-mode-glow-effect/ https://smashingmagazine.com/2023/07/recreating-youtube-ambient-mode-glow-effect/ Mon, 24 Jul 2023 12:00:00 GMT I noticed a charming effect on YouTube’s video player while using its dark theme some time ago. The background around the video would change as the video played, creating a lush glow around the video player, making an otherwise bland background a lot more interesting.

This effect is called Ambient Mode. The feature was released sometime in 2022, and YouTube describes it like this:

“Ambient mode uses a lighting effect to make watching videos in the Dark theme more immersive by casting gentle colors from the video into your screen’s background.”
— YouTube

It is an incredibly subtle effect, especially when the video’s colors are dark and have less contrast against the dark theme’s background.

Curiosity hit me, and I set out to replicate the effect on my own. After digging around YouTube’s convoluted DOM tree and source code in DevTools, I hit an obstacle: all the magic was hidden behind the HTML <canvas> element and bundles of mangled and minified JavaScript code.

Despite having very little to go on, I decided to reverse-engineer the code and share my process for creating an ambient glow around the videos. I prefer to keep things simple and accessible, so this article won’t involve complicated color sampling algorithms, although we will utilize them via different methods.

Before we start writing code, I think it’s a good idea to revisit the HTML Canvas element and see why and how it is used for this little effect.

HTML Canvas

The HTML <canvas> element is a container element on which we can draw graphics with JavaScript using its own Canvas API and WebGL API. Out of the box, a <canvas> is empty — a blank canvas, if you will — and the aforementioned Canvas and WebGL APIs are used to fill the <canvas> with content.

HTML <canvas> is not limited to presentation; we can also make interactive graphics with them that respond to standard mouse and keyboard events.

But SVG can also do most of that stuff, right? That’s true, but <canvas> is more performant than SVG because it doesn’t require any additional DOM nodes for drawing paths and shapes the way SVG does. Also, <canvas> is easy to update, which makes it ideal for more complex and performance-heavy use cases, like YouTube’s Ambient Mode.

As you might expect with many HTML elements, <canvas> accepts attributes. For example, we can give our drawing space a width and height:

<canvas width="10" height="6" id="js-canvas"></canvas>

Notice that <canvas> is not a self-closing element like <img>; much like an <iframe>, it has opening and closing tags. We can add content between those tags, which is rendered only when the browser cannot render the canvas. This can also be useful for making the element more accessible, which we’ll touch on later.

Returning to the width and height attributes, they define the <canvas>’s coordinate system. Interestingly, we can apply a responsive width using relative units in CSS, but the <canvas> still respects the set coordinate system. We are working with pixel graphics here, so stretching a smaller canvas in a wider container results in a blurry and pixelated image.
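As a quick illustration of that point, here is a sketch of the CSS sizing (the selector is only an example; the drawing surface keeps the 10×6 coordinate system no matter how large the element is stretched):

canvas {
  width: 100%;  /* the element stretches responsively... */
  height: auto; /* ...but it still draws on the 10×6 grid, so the result is upscaled and pixelated */
}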

The downside of <canvas> is its accessibility. All of the content updates happen in JavaScript in the background as the DOM is not updated, so we need to put effort into making it accessible ourselves. One approach (of many) is to create a Fallback DOM by placing standard HTML elements inside the <canvas>, then manually updating them to reflect the current content that is displayed on the canvas.

Numerous canvas frameworks — including ZIM, Konva, and Fabric, to name a few — are designed for complex use cases that can simplify the process with a plethora of abstractions and utilities. ZIM’s framework has accessibility features built into its interactive components, which makes developing accessible <canvas>-based experiences a bit easier.

For this example, we’ll use the Canvas API. We will also use the element for decorative purposes (i.e., it doesn’t introduce any new content), so we won’t have to worry about making it accessible, but rather safely hide the <canvas> from assistive devices.

That said, we will still need to disable — or minimize — the effect for those who have enabled reduced motion settings at the system or browser level.

requestAnimationFrame

The <canvas> element can handle the rendering part of the problem, but we need to somehow keep the <canvas> in sync with the playing <video> and make sure that the <canvas> updates with each video frame. We’ll also need to stop the sync if the video is paused or has ended.

We could use setInterval in JavaScript and rig it to run at 60fps to match the video’s frame rate, but that approach comes with some problems and caveats. Luckily, there is a better way of handling a function that must be called that often.

That is where the requestAnimationFrame method comes in. It instructs the browser to run a function before the next repaint. The callback runs asynchronously, and requestAnimationFrame returns a number that represents the request ID. We can then pass that ID to the cancelAnimationFrame function to instruct the browser to stop running the previously scheduled callback.

let requestId;

const loopStart = () => {
  /* ... */

  /* Initialize the infinite loop and keep track of the requestId */
  requestId = window.requestAnimationFrame(loopStart);
};

const loopCancel = () => {
  window.cancelAnimationFrame(requestId);
  requestId = undefined;
};

Now that we have all our bases covered by learning how to keep our update loop and rendering performant, we can start working on the Ambient Mode effect!

The Approach

Let’s briefly outline the steps we’ll take to create this effect.

First, we must render the displayed video frame on a canvas and keep everything in sync. We’ll render the frame onto a smaller canvas (resulting in a pixelated image). When an image is downscaled, the important and most-dominant parts of an image are preserved at the cost of losing small details. By reducing the image to a low resolution, we’re reducing it to the most dominant colors and details, effectively doing something similar to color sampling, albeit not as accurately.

Next, we’ll blur the canvas, which blends the pixelated colors. We will place the canvas behind the video using CSS absolute positioning.

And finally, we’ll apply additional CSS to make the glow effect a bit more subtle and as close to YouTube’s effect as possible.

HTML Markup

First, let’s start by setting up the markup. We’ll need to wrap the <video> and <canvas> elements in a parent container because that allows us to contain the absolute positioning we will be using to position the <canvas> behind the <video>. But more on that in a moment.

Next, we will set a fixed width and height on the <canvas>, although the element will remain responsive. By setting the width and height attributes, we define the coordinate space in CSS pixels. The video’s frame is 1920×720, so we will draw a 10×6 pixel image on the canvas. As we’ve seen in the previous examples, we’ll get a pixelated image with dominant colors somewhat preserved.

<section class="wrapper">
  <video controls muted class="video" id="js-video" src="video.mp4"></video>
  <canvas width="10" height="6" aria-hidden="true" class="canvas" id="js-canvas"></canvas>
</section>
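The stylesheet isn’t included in this excerpt, but a minimal sketch of the positioning and glow CSS, using the class names from the markup above, might look something like this. The blur radius, scale, and opacity values are assumptions to taste rather than the exact values from the original demo:

.wrapper {
  position: relative; /* containing block for the absolutely positioned canvas */
}

.video {
  position: relative;
  z-index: 1;    /* keep the video above the glow */
  display: block;
  width: 100%;
}

.canvas {
  position: absolute;
  inset: 0;               /* sit directly behind the video */
  width: 100%;
  height: 100%;
  z-index: 0;
  transform: scale(1.1);  /* let the glow spill past the video’s edges */
  filter: blur(40px);     /* blend the pixelated frame into a soft glow */
  opacity: 0.6;           /* keep the effect subtle */
}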

Syncing <canvas> And <video>

First, let’s start by setting up our variables. We need the <canvas>’s rendering context to draw on it, so saving it as a variable is useful, and we can do that by calling the canvas’s getContext method. We’ll also use a variable called step to keep track of the request ID of the requestAnimationFrame method.

const video = document.getElementById("js-video");
const canvas = document.getElementById("js-canvas");
const ctx = canvas.getContext("2d");

let step; // Keep track of requestAnimationFrame id

Next, we’ll create the drawing and update loop functions. We can actually draw the current video frame on the <canvas> by passing the <video> element to the drawImage function, along with four values that define the destination position and size in the <canvas> coordinate system, which, if you remember, is mapped to the width and height attributes in the markup. It’s that simple!

const draw = () => {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
};

Now, all we need to do is create the loop that calls the drawImage function while the video is playing, as well as a function that cancels the loop.

const drawLoop = () => {
  draw();
  step = window.requestAnimationFrame(drawLoop);
};

const drawPause = () => {
  window.cancelAnimationFrame(step);
  step = undefined;
};

And finally, we need to create two main functions that set up and clear event listeners on page load and unload, respectively. These are all of the video events we need to cover:

  • loadeddata: This fires when the first frame of the video loads. In this case, we only need to draw the current frame onto the canvas.
  • seeked: This fires when the video finishes seeking and is ready to play (i.e., the frame has been updated). In this case, we only need to draw the current frame onto the canvas.
  • play: This fires when the video starts playing. We need to start the loop for this event.
  • pause: This fires when the video is paused. We need to stop the loop for this event.
  • ended: This fires when the video stops playing when it reaches its end. We need to stop the loop for this event.

const init = () => {
  video.addEventListener("loadeddata", draw, false);
  video.addEventListener("seeked", draw, false);
  video.addEventListener("play", drawLoop, false);
  video.addEventListener("pause", drawPause, false);
  video.addEventListener("ended", drawPause, false);
};

const cleanup = () => {
  video.removeEventListener("loadeddata", draw);
  video.removeEventListener("seeked", draw);
  video.removeEventListener("play", drawLoop);
  video.removeEventListener("pause", drawPause);
  video.removeEventListener("ended", drawPause);
};

window.addEventListener("load", init);
window.addEventListener("unload", cleanup);

Let’s check out what we’ve achieved so far with the variables, functions, and event listeners we have configured.

Creating A Reusable Class

Let’s make this code reusable by converting it to an ES6 class so that we can create a new instance for any <video> and <canvas> pairing.

class VideoWithBackground {
  video;
  canvas;
  step;
  ctx;

  constructor(videoId, canvasId) {
    this.video = document.getElementById(videoId);
    this.canvas = document.getElementById(canvasId);

    window.addEventListener("load", this.init, false);
    window.addEventListener("unload", this.cleanup, false);
  }

  draw = () => {
    this.ctx.drawImage(this.video, 0, 0, this.canvas.width, this.canvas.height);
  };

  drawLoop = () => {
    this.draw();
    this.step = window.requestAnimationFrame(this.drawLoop);
  };

  drawPause = () => {
    window.cancelAnimationFrame(this.step);
    this.step = undefined;
  };

  init = () => {
    this.ctx = this.canvas.getContext("2d");
    this.ctx.filter = "blur(1px)";

    this.video.addEventListener("loadeddata", this.draw, false);
    this.video.addEventListener("seeked", this.draw, false);
    this.video.addEventListener("play", this.drawLoop, false);
    this.video.addEventListener("pause", this.drawPause, false);
    this.video.addEventListener("ended", this.drawPause, false);
  };

  cleanup = () => {
    this.video.removeEventListener("loadeddata", this.draw);
    this.video.removeEventListener("seeked", this.draw);
    this.video.removeEventListener("play", this.drawLoop);
    this.video.removeEventListener("pause", this.drawPause);
    this.video.removeEventListener("ended", this.drawPause);
  };
}

Now, we can create a new instance by passing the id values for the <video> and <canvas> elements into a VideoWithBackground() class:

const el = new VideoWithBackground("js-video", "js-canvas");

Respecting User Preferences

Earlier, we briefly discussed that we would need to disable or minimize the effect’s motion for users who prefer reduced motion. We have to consider that for decorative flourishes like this.

The easy way out? We can detect the user’s motion preferences with the prefers-reduced-motion media query and completely hide the decorative canvas if reduced motion is the preference.

@media (prefers-reduced-motion: reduce) {
  .canvas {
    display: none !important;
  }
}

Another way we respect reduced motion preferences is to use JavaScript’s matchMedia function to detect the user’s preference and prevent the necessary event listeners from registering.

constructor(videoId, canvasId) {
  const mediaQuery = window.matchMedia("(prefers-reduced-motion: reduce)");

  if (!mediaQuery.matches) {
    this.video = document.getElementById(videoId);
    this.canvas = document.getElementById(canvasId);

    window.addEventListener("load", this.init, false);
    window.addEventListener("unload", this.cleanup, false);
  }
}

Final Demo

We’ve created a reusable ES6 class that we can use to create new instances. Feel free to check out and play around with the completed demo.

See the Pen Youtube video glow effect - dominant color [forked] by Adrian Bece.

Creating A React Component

Let’s migrate this code to the React library, as there are key differences in the implementation that are worth knowing if you plan on using this effect in a React project.

Creating A Custom Hook

Let’s start by creating a custom React hook. Instead of using the getElementById function for selecting DOM elements, we can access them with a ref on the useRef hook and assign it to the <canvas> and <video> elements.

We’ll also reach for the useEffect hook to initialize and clear the event listeners to ensure they only run once all of the necessary elements have mounted.

Our custom hook must return the ref values we need to attach to the <canvas> and <video> elements, respectively.

import { useRef, useEffect } from "react";

export const useVideoBackground = () => {
  const mediaQuery = window.matchMedia("(prefers-reduced-motion: reduce)");
  const canvasRef = useRef();
  const videoRef = useRef();

  const init = () => {
    const video = videoRef.current;
    const canvas = canvasRef.current;
    let step;

    if (mediaQuery.matches) {
      return;
    }

    const ctx = canvas.getContext("2d");

    ctx.filter = "blur(1px)";

    const draw = () => {
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    };

    const drawLoop = () => {
      draw();
      step = window.requestAnimationFrame(drawLoop);
    };

    const drawPause = () => {
      window.cancelAnimationFrame(step);
      step = undefined;
    };

    // Initialize
    video.addEventListener("loadeddata", draw, false);
    video.addEventListener("seeked", draw, false);
    video.addEventListener("play", drawLoop, false);
    video.addEventListener("pause", drawPause, false);
    video.addEventListener("ended", drawPause, false);

    // Run cleanup on unmount event
    return () => {
      video.removeEventListener("loadeddata", draw);
      video.removeEventListener("seeked", draw);
      video.removeEventListener("play", drawLoop);
      video.removeEventListener("pause", drawPause);
      video.removeEventListener("ended", drawPause);
    };
  };

  useEffect(init, []);

  return {
    canvasRef,
    videoRef,
  };
};

Defining The Component

We’ll use similar markup for the actual component, then call our custom hook and attach the ref values to their respective elements. We’ll make the component configurable so we can pass any <video> element attribute as a prop, like src, for example.

import React from "react";
import { useVideoBackground } from "../hooks/useVideoBackground";

import "./VideoWithBackground.css";

export const VideoWithBackground = (props) => {
  const { videoRef, canvasRef } = useVideoBackground();

  return (
    <section className="wrapper">
      <video ref={ videoRef } controls className="video" { ...props } />
      <canvas width="10" height="6" aria-hidden="true" className="canvas" ref={ canvasRef } />
    </section>
  );
};

All that’s left to do is to call the component and pass the video URL to it as a prop.

import { VideoWithBackground } from "../components/VideoWithBackground";

function App() {
  return (
    <VideoWithBackground src="http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4" />
  );
}

export default App;

Conclusion

We combined the HTML <canvas> element and the corresponding Canvas API with JavaScript’s requestAnimationFrame method to create the same charming — but performance-intensive — visual effect that makes YouTube’s Ambient Mode feature. We found a way to draw the current <video> frame on the <canvas>, keep the two elements in sync, and position them so that the blurred <canvas> sits properly behind the <video>.

We covered a few other considerations in the process. For example, we established the <canvas> as a decorative image that can be removed or hidden when a user’s system is set to a reduced motion preference. Further, we considered the maintainability of our work by establishing it as a reusable ES6 class that can be used to add more instances on a page. Lastly, we converted the effect into a component that can be used in a React project.

Feel free to play around with the finished demo. I encourage you to continue building on top of it and share your results with me in the comments, or, similarly, you can reach out to me on Twitter. I’d love to hear your thoughts and see what you can make out of it!

References

]]>
hello@smashingmagazine.com (Adrian Bece)
<![CDATA[The Art Of Looking Back: A Critical Reflection For Individual Contributors]]> https://smashingmagazine.com/2023/07/critical-reflection-individual-contributors/ https://smashingmagazine.com/2023/07/critical-reflection-individual-contributors/ Fri, 21 Jul 2023 12:00:00 GMT Have you ever looked back at your younger self and wondered, “What was I even thinking?” If you have, then you know how important it is to acknowledge change, appreciate growth, and learn from your mistakes.

Søren Kierkegaard, the first existentialist philosopher, famously wrote:

“Life can only be understood by looking backward, but it must be lived forwards.”
Søren Kierkegaard

By looking back at our past selves, we compare them not only to who we are today but to who we want to be tomorrow.

This process is called reflection.

Critical reflection is the craft of “bringing unconscious aspects of experience to conscious awareness, thereby making them available for conscious choice.” At its core, reflection focuses on challenging your takeaways from practical experiences, nudging you to explore better ways of achieving your goals.

Learning and growth are impossible without reflection. In the 1970s, David Kolb, an educational theorist, developed the “Cycle of Learning”, comprising four stages: Concrete Experience, Reflective Observation, Abstract Conceptualization, and Active Experimentation.

According to Kolb, each new experience yields learning when all of its aspects are analyzed, assessed, and challenged — in theory (through reflection and conceptualization) and in practice (through experimentation). In turn, new learning informs new experiences, therefore completing the circle: act, analyze, challenge, plan, repeat.

Reflection takes center stage: it evaluates the outcomes of each concrete experience, informs future decisions, and leads to new discoveries. More importantly, reflection takes every aspect of learning into consideration: from actions and feelings to thoughts and observations.

Design is, by nature, reflective. Ambiguity requires designers to be flexible and analyze the situation on the go. We need to adapt to the ever-changing environment, learn from every failure, and constantly doubt our expertise. Rephrasing Donald Schön, an American philosopher, instead of applying experience to a situation, designers should be open to the “situation’s back talk.”

On the other hand, designers often reflect almost unconsciously, and their reflections may lack structure and depth, especially at first. Reflection is the process of “thinking analytically” about all elements of your practice, but structureless reflection is neither critical nor meaningful.

Luckily, a reflective framework exists to provide the necessary guidance.

Practicing Critical Reflection

In 1988, Professor Graham Gibbs published his book Learning by Doing, where he first introduced the Reflective Cycle — a framework represented by a closed loop of exercises, designed to help streamline the process of critical reflection.

In a nutshell, reflection comes down to describing the experience and your feelings towards it, analyzing your actions and thoughts, and devising an action plan. What sets it apart from a retrospective is continuity: the cycle is never complete, and every new iteration is built on the foundation of a previous one.

Imagine a situation: You are tasked with launching a survey and collecting at least 50 responses. However, a week later, you barely receive 15, despite having sent it to over a hundred people. You are angry and disappointed; your gut tells you the problem is in the research tool, and you are tempted to try again with another service.

Then, you take a deep breath and reflect on the experience.

Describe the situation

Begin by describing the situation in detail. Remember that a good description resembles a story, where every event is a consequence of past actions. Employ the “But & Therefore” rule of screenwriting and focus on drawing the connection between your actions and their outcomes.

First, provide a brief outline: What did you do, and how did it go?

Last week, I launched a research survey using Microsoft Forms, but despite my best efforts, it failed to collect a number of responses large enough to draw a meaningful conclusion. Upon analyzing the results, I noticed that a significant portion of the participants bounced from the question, which required them to choose a sector of a multi-layered pie chart.

Then, add some details: describe how you went about reaching your objective and what was your assumption at the time.

The technical limitations of Microsoft Forms made embedding a large image impossible, so I uploaded a low-resolution thumbnail and provided an external link (“Click to enlarge the image”). A large portion of participants, however, did not notice the link and couldn’t complete the task, stating that the image in the form was too small to comprehend. As a result, we have only collected 15 complete responses.

Recommendations

  • Avoid analyzing the experience at this stage. Focus on describing the situation in as much detail as possible.
  • Set aside your experience and your gut’s urge to solve the problem right away. Be patient, observant, and mindful.
  • Reflection doesn’t have to take place after the experience. In fact, you can reflect during the event or beforehand, trying to set the right expectations and plan your actions accordingly.

Describe Your Feelings

At this stage, focus on understanding your emotions before, during, and after the experience. Be mindful of the origin of your feelings and how they manifested and changed over time.

I was rather excited to see that Microsoft Forms offers a comprehensive set of survey tools. Moreover, I was captivated by the UI of the form, the option to choose a video background, and the feature that automatically calculated the time to complete the survey.

You will notice how describing your emotions helps you understand your motivations, beliefs, and convictions. In this particular example, by admitting to being enchanted by the platform’s interface, you essentially confirm that your decision was not a result of reasonable judgement or unbiased analysis.

I was somewhat disappointed to learn that I could not embed a large image, but I did not pay enough attention at the time.

This step is important: as your feelings change, so do your actions. A seemingly minor transition from “excitement” to “disappointment” is a signal that you have obviously overlooked. We will get back to it as we begin analyzing the situation.

Lastly, focus on your current state. How do you feel about your experience now that it’s over? Does any emotion stand out in particular?

I feel ashamed that I have overlooked such an obvious flaw and allowed it to impact the survey outcome.

Describing your emotions is, perhaps, the most challenging part of critical reflection. In traditional reflective practice, emotions are often excluded: we are told to focus on our actions, as though we were capable of acting rationally and disregarding our feelings. In modern reflective practice, however, emotional reflection is highly encouraged.

Humans are obviously emotional beings. Our feelings determine our actions more than any facts ever could.

Our judgement is clouded by emotions, and understanding the connection between them and our actions is the key to managing our professional and personal growth.

Recommendations

  • Analyze your feelings constantly: before, during, and after the action. This will help you make better decisions, challenge your initial response, and be mindful of what drives you.
  • Don’t assume you are capable of making purely rational decisions, and don’t demand it of others, either. Emotions play an important role in decision-making, and you should strive to understand them, not to control them.
  • Don’t neglect your own or your team’s feelings. When reflecting on or discussing your actions, talk about how they made you feel and why.

Evaluate And Analyze

Evaluation and analysis is the most critical step of the reflective process. During this stage, you focus not only on the impact of your actions but also on their root cause, challenging your beliefs, reservations, and decisions.

W3C’s guidelines for complex images require providing a long description as an alternative to displaying a complex image. Despite knowing that, I believed that providing a link to a larger image would be sufficient and that the participants would either be accessing my survey on the web or zooming in on their mobile devices.

Switching the focus from actions to the underlying motivation complements the emotional reflection. It demonstrates the tangible impact of your feelings on your decisions: being positively overwhelmed blinded you, and you didn’t spend enough time empathizing with your participants to predict their struggles.

Moreover, I chose an image that was quite complex and featured many layers of information. I thought providing various options would help my participants make a better-informed decision. Unfortunately, it may have contributed to causing choice overload and increasing the bounce rate.

Being critical of your beliefs is what sets reflection apart from mere retelling. The things we believe in shape and define our actions — some of them stem from our experience, and others are imposed on us by communities, groups, and leaders.

Irving Janis, an American research psychologist, popularized the term “groupthink” in his 1972 study, describing an “unquestioned belief in the morality of the group and its choices.” The pressure to conform and accept the authority of the group, and the fear of rejection, make us fall victim to numerous biases and stereotypes.

Critical reflection frees us from believing the myths by doubting their validity and challenging their origins. For instance, your experience tells you that reducing the number of options reduces the choice overload as well. Critical reflection, however, nudges you to dig deeper and search for concrete evidence.

However, I am not convinced that the abundance of options led to overwhelming the participants. In fact, I managed to find some studies that point out how “more choices may instead facilitate choice and increase satisfaction.”

Recommendations

  • Learn to disregard your experience and not rely on authority when making important decisions. Plan and execute your own experiments, but be critical of the outcomes as well.
  • Research alternative theories and methods that, however unfamiliar, may provide you with a better way of achieving your goals. Don’t hesitate to get into uncharted waters.

Draw A Conclusion And Set A Goal

Summarize your reflection and highlight what you can improve. Do your best to identify various opportunities.

As a result, 85% of the participants dropped out, which severely damaged the credibility of my research. Reflecting on my emotions and actions, I conclude that providing the information in a clear and accessible format could have helped increase the response rate.
Alternatively, I could have used a different survey tool that would allow me to embed large images; however, that might require additional budget and doesn’t necessarily guarantee results.

Lastly, use your reflection to frame a SMART (Specific, Measurable, Achievable, Relevant, and Time-Bound) goal.

Only focus on goals that align with your professional and personal aspirations. Lock every goal in time and define clear and unambiguous success criteria. This will help you hold yourself accountable in the future.

As my next step, I will research alternative, accessible, and tool-agnostic ways of presenting complex information, as this will provide more flexibility, a better experience for survey participants, and better outcomes for my future research projects. I will launch a new survey in 14 days and reflect accordingly.

At this point, you have reached the conclusion of this reflective cycle. You no longer blame the tool, nor do you feel disappointed or irate. In fact, you now have a concrete plan that will lead you to pick up a new, relevant, and valuable skill. More than that, the next time the thrill takes you, you will stop to think about whether you are making a rational decision.

Recommendations

  • SMART is a good framework, but not every goal has to fit it perfectly. Some goals may have questionable relevancy, and others may have a flexible timeline. Make sure you are confident that your goals are attainable, and constantly reflect on your progress.
  • Challenge your goals and filter out those that don’t make practical sense. Are your goals overly ambiguous? How will you know when you have achieved your goal?

Daily Reflection

Reflection guides you by helping you set clear, relevant goals and assess progress. As you learn and explore, make decisions, and overcome challenges, reflection becomes an integral part of your practice, channels your growth, and informs your plans.

Reflection spans multiple cognitive areas (“reflective domains”), from discipline and motivation to emotional management. You can analyze how you learn new things and communicate with your peers, how you control your emotions, and stay motivated. Reflecting on different aspects of your practice will help you achieve well–balanced growth.

As you collect your daily reflections, note down what they revolve around: for example, skills and knowledge, discipline, emotions, communication, meaningful growth, and learning. In time, you may notice that some domains accumulate more entries than others, which will signal which areas to focus on as you move forward.

Finally, one thing that can make a continuous, goal-driven reflective process even more effective is sharing.

Keeping a public reflective journal is a great practice. It holds you accountable, requires discipline to publish entries regularly, and demands quality reflection and impact analysis. It improves your writing and storytelling, helps you create more engaging content, and teaches you to work with an audience.

Most importantly, a public reflective journal connects you with like-minded people. Sharing your growth and reflecting on your challenges is a great way to make new friends, inspire others, and find support.

Conclusion

In Plato’s “Apology,” Socrates says, “I neither know nor think I know.” In a way, that passage embodies the spirit of a reflective mindset: admitting to knowing nothing and accepting that no belief can be objectively accurate is the first step to becoming a better practitioner.

Reflection is an argument between your former and future selves: a messy continuous exercise that is not designed to provide closure, but to ask more questions and leave many open for further discussions. It is a combination of occasional revelations, uncomfortable questions, and tough challenges.

Reflection is not stepping out of your comfort zone. It is demolishing it, tearing it apart, and rebuilding it with new, better materials.

Don’t stop reflecting.

]]>
hello@smashingmagazine.com (Kristian Mikhel)
<![CDATA[Designing Age-Inclusive Products: Guidelines And Best Practices]]> https://smashingmagazine.com/2023/07/designing-age-inclusive-products-guidelines-best-practices/ https://smashingmagazine.com/2023/07/designing-age-inclusive-products-guidelines-best-practices/ Tue, 18 Jul 2023 12:00:00 GMT Why is it so important to take into account older adults? One person in eight on the planet is over 60, and they are more online than ever. Approximately one billion people aged 60+ are alive today. Most of them are healthy and active and have discretionary income. Moreover, it is growing faster than any other age group and is projected to be 20% of the world’s population (~2 billion people) by 2050. They are also the fastest-growing category of e-commerce shoppers.

Older people today are adopting technology more than ever before. From the use of the Internet, smartphones, tablets, and wearables to smart TVs and speakers, a growing number of older people are users. Ownership of smartphones, for example, increased from 70% to 77% among the 50+ population in the United States between 2017 and 2021. Moreover, during the Covid-19 pandemic, there was a significant rise in older adults’ motivation to use digital technology.

However, many older people still lack sufficient Internet connectivity or technological skill to use devices and consume digital services. It is estimated that two in five feel technology is not designed for them (PDF).

Opportunity To Integrate Older People Into The Digital World

More and more aspects of life are conducted on digital platforms: interpersonal communication, banking, healthcare, personal consumption, and exercising one’s rights are just some of them. Therefore, digital platforms that are challenging for older people to use have a negative impact on their quality of life. This prevents them from accessing essential services and integrating equally into society.

According to the inclusive design approach, one should take into account the needs of as many users as possible without stigmatizing or excluding a specific group by designing niche products.

If you adopt this principle, you can design a digital platform that serves a wide range of people, not just those aged 65+. Usually, a service that meets the needs of people aged 65+ will serve other audiences as well.

Adopting Age-appropriate Navigation And Orientation Practices

Advancing age can also bring with it a decrease in the rate of information processing, whether in understanding, thinking, or remembering. Plus, the ability to ignore distractions, focus on one stimulus, and perform several complex actions simultaneously also decreases.

Additionally, due to their age, some suffer from a decrease in executive functions that enable planning, executing, and delaying reactions. Therefore, there is a higher chance they will perform random actions such as clicking on unintended places, closing pages, or making errors when using apps. Some may have difficulty understanding that icons carry the same meaning across different apps or in dealing with situations that do not correspond with their expectations of the digital world. Despite such difficulties, it is essential to stress that the ability to learn from feedback — for example, via affirmations — does not diminish with age.

What Should We Do If We Want To Increase Their Engagement?

Here are a few guidelines to help you design a more inclusive product. These guidelines can also improve usability for younger users, but they are crucial for older ones:

Minimize The Number Of Required Actions And Create Shortcuts

Some people over 65 find it challenging to cope with information overload and multiple options.

Proper information architecture and hierarchy indicate what is important to the user and require less effort. We should ensure that the required actions are immediately visible and easy to find so that the user does not have to search for them. Key considerations include white space, content placement, language, and the number of actions; these and a few others are listed below with explanations:

  • White space
    Reducing the number of elements on a screen, increasing the spacing between them, and retaining whitespace will make the screen feel less crowded and, therefore, clearer and more inviting. The added value is in the feeling of simplicity it creates. This improves the user’s sense of competence and ability to focus. Clear typography following one of the established typographic scales is relevant for websites, apps, and complex data systems.

  • Dialog box
    A limited number of options prevents cognitive overload. Therefore, conduct a careful mapping of the digital platform, distilling out the most important actions and contents.
  • Central placement
    The most important themes should be positioned at the center of the screen.
  • Large & spacious
    The most important buttons should be enlarged and positioned prominently to allow immediate recognition.
  • Clear language
    Topics should be clearly labeled, and the labels should be verified in usability testing. Complicated terms should be avoided since they might not be familiar to the target audience.
  • Limited number of actions
    The number of steps (clicks and scrolls) necessary to achieve a goal should be minimized.
  • Shortcuts and multiple/redundant paths
    Make it easy for users to reach their goals by providing multiple options, such as Quick Links, as seen in the example below.

What To Know About Navigation & Orientation?

Some people aged 65+ can experience a decline in memory recall. Some are also unfamiliar with the principles of the digital world. Therefore, to enhance their sense of control, the following principles should be adopted: rely on recognition, not memory; allow going back; design clear navigation keys; be consistent in design and operation; and provide indicators. Below we will discuss several principles to facilitate navigation.

  • Recognition, not memory
    This principle means creating an interface where users do not have to use their memory to recall information. Instead, they will be asked to identify familiar and prominent components, such as quick access to previously visited pages or actions. The illustration below is an example of how Korea’s post office presents the main tasks on the main screen in a way that doesn’t require memory but recognition. Additional support is the use of color to help the users recognize how to navigate the site once they return to it.

  • Indicators and feedback:
    • Emphasize the performed actions: Use breadcrumbs to indicate which links or buttons the user has clicked on and their location.
    • Create a conspicuous and permanent back icon that takes the user to a previous page/stage. This is in addition to the browser’s back and homepage icons.
    • Clear and prominent navigation buttons: Emphasize navigation buttons and add text to explain their function.
  • Consistency
    Create an ongoing, consistent user experience using recurring items. Allow the users to learn the interface, generating a sense of success and building anticipation for the next stage. Pay attention to the location and design of fixed buttons that have the same function.
  • Progress
    Create obvious hints, such as a progress bar, that help users to understand where they are in the process.
  • Success & mistakes
    Highlight progress and successful actions. Additionally, indicate errors clearly and provide ways for easy recovery.
  • Contact Us
    Choose a prominent location for the help options. Provide contact information using various channels, e.g., telephone number and email address.

Choosing components with an age-inclusive mindset can change our day-to-day decisions and allow us to create an easy-to-operate interface.

The following guidelines relate to changes in motor functions.

Creating An Interface That Is Easy To Operate

As age increases, it may be accompanied by difficulty in touching a specific spot accurately, regulating a click’s intensity, or performing quick actions, such as double-clicking. Therefore, the following principles should be ensured: space out the keys, avoid the need for a high degree of precision, avoid gestures requiring sensory regulation, and enable users to progress at their own pace.

  • Large & spacious
    Design large and well-spaced elements.
  • Individual pace:
    • When creating pop-up/toast messages, allow users to initiate closing or at least leave the messages visible for longer for slower readers.
    • Avoid menus that open on hover. Always use click-tap menus instead.
  • Indication
    Provide clear scroll indicators (e.g., side arrows). Reduce the need for precision: Avoid small clicking areas, mouse hovering, and double-clicking.
  • Ensure that the interface is responsive on all screens.
    For touch screens:
    • Click actions should not rely on touch intensity or precision.
    • Avoid, as much as possible, long strokes and drag gestures, and reduce scroll options. At the very least, provide clear indication and instruction, and offer alternatives, such as an arrow or button directing to specific places.
    • Avoid the need for very precise actions, such as small clicking areas, and keep tap/click targets to a comfortable minimum size of at least 44x44 px (see the short CSS sketch after this list).
    • Avoid actions requiring fine motor regulation such as spread, pinch, and rotate.
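
To make the tap-target guidance concrete, here is a minimal CSS sketch of the 44x44 px recommendation (the .nav-link class name is hypothetical):

/* Keep tappable controls at a comfortable minimum size */
button,
.nav-link {
  display: inline-flex;
  align-items: center;
  justify-content: center;
  min-width: 44px;
  min-height: 44px;
}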

Let’s examine the homepage of three famous e-commerce websites in the eyes of an older user without going into details.

The AliExpress website, as seen in the image above, is a good example of an “unfriendly” website — it suffers from visual overload. Too many buttons and tags are emphasized, which makes it hard for older users to focus their attention. Moreover, given the visual overload, the side menu is easy to ignore, so many users will be forced to use the search bar, which is not dominant enough and relies on their recall of items rather than recognition.

On the Amazon website (see the image below), the size of each category image may be distracting, requiring the user to scroll a lot and challenging their ability to navigate.

In contrast with these previous sites, eBay does better regarding visual overload; the dominance of the search bar and even the menu placement make it easier for older users to navigate.

Wrapping Up

Every designer would probably state that they want their website and app to be inclusive, easy to navigate, and user-friendly. Yet, as we have seen in the examples above, we need to be deliberate about the details that make it easier for older users to navigate the Internet. As we age, all of us will experience longer reaction times, changes in selective attention and attention-splitting, and changes in our motor functions. Therefore, designing for an older audience means building an inclusive, easy-to-navigate, and user-friendly product for our future selves as well.

To sum up, in order to create an easy operation and orientation of an interface, we should pay attention to the following:

  • Large, spaced-out objects;
  • Operation does not require fine gestures and precision;
  • A minimum number of actions to achieve goals;
  • The user should control the rate of progress;
  • Reassuring notifications upon successful actions;
  • Consistency of design and operation;
  • Minimizing the number of choices the user must make;
  • Highlighting the actions performed by the user.

The article is based on one of the chapters of the Log In guide. The guide was created within the framework of the National Initiative to Promote Digital Literacy Among Older Adults, which is a partnership between the Israel National Digital Agency and JDC-ESHEL.

]]>
hello@smashingmagazine.com (Michal Halperin Ben Zvi)
<![CDATA[Writing CSS In 2023: Is It Any Different Than A Few Years Ago?]]> https://smashingmagazine.com/2023/07/writing-css-2023/ https://smashingmagazine.com/2023/07/writing-css-2023/ Fri, 14 Jul 2023 12:00:00 GMT Is there anything in the front-end world that’s evolving faster than CSS these days? After what seemed like a long lull following blockbusters Flexbox and Grid, watching CSS release new features over the past few years has been more like watching a wild game of rugby on the telly. The pace is exciting, if not overwhelming at the same time.

But have all these bells and whistles actually changed the way you write CSS? New features have certainly influenced the way I write CSS today, but perhaps not quite as radically as I would have expected.

And while I’ve seen no shortage of blog posts with high-level examples and creative experiments of all these newfangled things that are available to us, I have yet to see practical applications make their way into production or everyday use. I remember when Sass started finding its way into CSS tutorials, often used as the go-to syntax for code examples and snippets. I’m not exactly seeing that same organic adoption happen with, say, logical properties, and we’ve had full browser support for them for about two years now.

This isn’t to rag on anyone or anything. I, for one, am stoked beyond all heck about how CSS is evolving. Many of the latest features are ones we have craved for many, many years. And indeed, there are several of them finding their way into my CSS. Again, not drastically, but enough that I’m enjoying writing CSS more now than ever.

Let me count the ways.

More And More Container Queries

I’ll say it: I’ve never loved writing media queries for responsive layouts. Content responds differently to the width of the viewport depending on the component it’s in. And balancing the content in one component has always been a juggling act with balancing the content in other components, adding up to a mess of media queries at seemingly arbitrary breakpoints. Nesting media queries inside a selector with Sass has made it tolerable, but not to the extent that I “enjoyed” writing new queries and modifying existing ones each time a new design with UI changes is handed to me.

Container queries are the right answer for me. Now I can scope child elements to a parent container and rely on the container’s size for defining where the layout shifts without paying any mind to other surrounding components.

The other thing I like about container queries is that they feel very CSS-y. Defining a container directly on a selector matches a natural property-value syntax and helps me avoid having to figure out math upfront to determine breakpoints.

.parent {
  container-type: inline-size;
}

@container (min-width: 600px) {
  .child {
    align-self: center;
  }
}

I still use media queries for responsive layouts but tend to reserve them for “bigger” layouts that are made up of assembled containers. Breakpoints are more predictable (and can actually more explicitly target specific devices) when there’s no need to consider what is happening inside each individual container.

Learn About Container Queries

Grouping Styles In Layers

I love this way of managing the cascade! Now, if I have a reset or some third-party CSS from a framework or whatever, I can wrap those in a cascade layer and chuck them at the bottom of a file so my own styles are front and center.

I have yet to ship anything using cascade layers, but I now reach for them for nearly every CodePen demo I make. The browser support is there, so that’s no issue. It’s more that I still rely on Sass on projects for certain affordances, and maintaining styles in partialized files still feels nice to me, at least for that sort of work.

But in an isolated demo where all my styles are in one place, like CodePen? Yeah, all the cascade layers, please! Well, all I really need is one layer for the base styles, since un-layered styles beat layered ones in the cascade. That leaves my demo-specific styles clean, uncluttered, sitting at the top of the file where they’re easy to reach, and still able to override the base.

body {
  display: grid;
  gap: 3rem;
  place-items: center;
}

@layer base {
  body {
    font-size: 1.25rem;
    line-height: 1.35;
    padding: 3rem;
  }
}

Learn More About Cascade Layers

:is() And :where()

I definitely reach for these newer relational pseudo-selectors, but not really for the benefits of selecting elements conditionally based on relationships.

Instead, I use them most often for managing specificity. But unlike cascade layers, I actually use these in production.

Why? Because with :is(), specificity is determined not by the main selector but by the most specific selector in its argument list.

/* Specificity: 0 1 1 */
:is(ol, .list, ul) li {}

/* Specificity: 0 0 2 */
ol li {}

The .list selector gives the first ruleset a higher specificity score, meaning it “beats” the second ruleset even though the second one appears later in the source order.

On the flip side, the specificity of :where() is a big ol’ score of zero, so it does not add to the overall score of whatever selector it’s on. It simply doesn’t matter at all what’s in its argument list. For the same reason I use :is() to add specificity, I use :where() to strip it out. I love keeping specificity generally low because I still want the cascade to operate with as little friction as possible, and :where() makes that possible, especially for defining global styles.

A perfect example is wrapping :not() inside :where() to prevent :not() from bumping up specificity:

/* Specificity: 0 0 0 */
:where(:not(.some-element)) {}

Taken together, :is() and :where() not only help manage specificity but also take some of the cognitive load out of “naming” things.

I’m one of those folks who still love the BEM syntax. But naming is one of the hardest things about it. I often find myself running out of names that describe the function of an element and its relationship to the elements around it. The specificity-wrangling powers of :is() and :where() mean I can rely less on elaborate class names and more on element selectors instead.
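
For illustration (the class names here are made up), that might look something like this:

/* Zero specificity, thanks to :where() */
:where(.card) :where(h2, h3, p) {
  margin-block: 0 0.75rem;
}

/* A plain single-class override still wins comfortably */
.card-title {
  margin-block-end: 1.25rem;
}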

Learn More About :is() And :where()

The New Color Function Syntax

The updated syntax for color functions like rgb() and hsl() (and the evolving oklch() and oklab()) isn’t the sort of attention-grabbing headline that leads to oo’s and aw’s, but it sure does make it a lot better to define color values.

For one, I never have to reach for rgba() or hsla() when I need an alpha value. In fact, I always used those whether or not I needed alpha because I didn’t want to bother deciding which version to use.

color: hsl(50deg, 100%, 50%);

/* Same */
color: hsla(50deg, 100%, 50%, 1);

Yes, writing the extra a, comma, and 1 was worth the cost of not having to think about which function to use.

But the updated color syntax is like a honey badger: it just doesn’t care. It doesn’t care about the extra a in the function name. It doesn’t even care about commas.

color: hsl(50deg 100% 50% / .5);

So, yeah. That’s definitely changed the way I write colors in CSS.

What I’m really excited to start using is the newer oklch() and oklab() color spaces now that they have full browser support!

Learn More About CSS Color 4 Features

Sniffing Out User Preferences

I think a lot of us were pretty stoked when we got media queries that respect a user’s display preferences, the key one being the user’s preferred color scheme for quickly creating dark and light interfaces.

:root {
  --bg-color: hsl(0deg 0% 100%);
  --text-color: hsl(0deg 0% 0%);
}

@media (prefers-color-scheme: dark) {
  :root {
    --bg-color: hsl(0deg 0% 0%);
    --text-color: hsl(0deg 0% 100%);
  }
}

body {
  background: var(--bg-color);
  color: var(--text-color);
}

But it’s the prefers-reduced-motion query that has changed my CSS the most. It’s the first thing I think about any time a project involves CSS animations and transitions. I love the idea that a reduced motion preference doesn’t mean nuking all animation, so I’ll often use prefers-reduced-motion to slow everything down when that’s the preference. That means I have something like this (usually in a cascade layer for base styles):

@layer base {
  :root {
    --anim-duration: 1s;
  }

  /* Reduced motion by default */
  body {
    animation-duration: var(--anim-duration);
    transition-duration: var(--anim-duration);
  }

  /* Opt into increased motion */
  @media screen and (prefers-reduced-motion: no-preference) {
    body {
      --anim-duration: .25s;
    }
  }
}

Learn More About User Preference Queries

Defining Color Palettes

I’ve used variables for defining and assigning colors ever since I adopted Sass and was thrilled when CSS custom properties came. I’d give generic names to the colors in a palette before passing them into variables with more functional names.

/* Color Palette */
--red: #ff0000;
/* etc. */

/* Brand Colors */
--color-primary: var(--red);
/* etc. */

I still do this, but now I will abstract things even further using color functions on projects with big palettes:

:root {
  /* Primary Color HSL */
  --h: 21deg;
  --s: 100%;
  --l: 50%;

  --color-primary: hsl(var(--h) var(--s) var(--l) / 1);
}

.bg-color {
  background: var(--color-primary);
}

.bg-color--secondary {
  --h: 56deg;
  background: hsl(var(--h) var(--s) var(--l) / 1);
}

A little too abstract? Maybe. But for those projects where you might have ten different varieties of red, orange, yellow, and so on, it’s nice to have this level of fine-grained control to manipulate them. Perhaps there is more control with color-mix() that I just haven’t explored yet.

Learn More About Defining Color Palettes

What I’m Not Using

Huh, I guess I am writing CSS a bit differently than I used to! It just doesn’t feel like it, but that probably has to do with the fact that there are so many other new features I am not currently using. The number of new features I am using is much, much lower than the number of features I have yet to pick up, whether it’s because of browser support or because I just haven’t gotten to it yet.

CSS Nesting

I’m really looking forward to this because it just might be the tipping point where I completely drop Sass for vanilla CSS. It’s waiting for Firefox to support it at the time of this writing, so it could be right around the corner.
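
For the curious, here’s a quick sketch of the native nesting syntax (the selectors are purely illustrative):

.card {
  padding: 1rem;

  & h2 {
    margin-block-end: 0.5rem;
  }

  &:hover {
    background: hsl(0deg 0% 96%);
  }
}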

Style Queries

I’ve made no secret that applying styles to elements based on the styles of other elements is something that I’m really interested in. That might be more of an academic interest because specific use cases for style queries elude me. Maybe that will change as they gain browser support, and we see a lot more blog posts where smart folks experiment with them.
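
For what it’s worth, here’s a speculative sketch of the syntax, querying a custom property set on a container (the names and values are made up):

.card {
  --featured: true;
}

/* Applies to descendants of any container where --featured computes to true */
@container style(--featured: true) {
  .card-title {
    font-size: 1.5rem;
  }
}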

:has()

I’ll definitely use this when Firefox supports it. Until then, I’ve merely tinkered with it and have enjoyed how others have been experimenting with it. Without full support, though, it hasn’t changed the way I write CSS. I expect that it will, though, because how can having the ability to select a parent element based on the child it contains be a bad thing, right?
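
Here’s roughly the kind of thing I’m waiting to use it for (the class names are hypothetical):

/* Style the wrapper when the input inside it is invalid */
.field:has(input:invalid) {
  outline: 2px solid crimson;
}

/* Give a figure extra padding only when it contains a caption */
figure:has(figcaption) {
  padding: 1rem;
  background: hsl(0deg 0% 96%);
}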

Dynamic Viewport Units

I’ve started sprinkling these in my styles since they gained wide support at the end of 2022. Like style queries, I only see limited use cases — most notably when setting elements to full height on a mobile device. So, instead of using height: 100vh, I’m starting to write height: 100dvh more and more. I guess that has influenced how I write CSS, even if it’s subtle.
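
In practice, it’s a simple two-line swap with a fallback for browsers that don’t understand dvh yet (.hero is just an example selector):

.hero {
  height: 100vh;  /* fallback */
  height: 100dvh; /* tracks the collapsing mobile browser UI */
}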

Media Query Range Syntax

Honestly, I just haven’t thought much about the fact that there’s a nicer way to write responsive media queries on the viewport. I’m aware of it but haven’t made it a part of my everyday CSS for no other reason than ignorance.
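
For reference, the range syntax swaps the min-/max- prefixes for comparison operators, something like this (the .sidebar selector is only for illustration):

/* Classic syntax */
@media (min-width: 600px) and (max-width: 1024px) {
  .sidebar { display: block; }
}

/* Range syntax */
@media (600px <= width <= 1024px) {
  .sidebar { display: block; }
}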

OKLCH/OKLAB Color Spaces

oklch() will most definitely be my go-to color function. It gained wide support in March of this year, so I’ve only had a couple of months and no projects to use it. But given the time, I expect it will be the most widely used way to define colors in my CSS.

The only issue with it, I see, is that oklch() is incompatible with another color feature I’m excited about...

color()

It’s widely supported now, as of May 2023! That’s just too new to make its way into my everyday CSS, but you can bet that it will. The ability to tap into any color space — be it sRGB, Display P3, or Rec2020 — is just so much nicer than having to reach for a specific color function, at least for colors in a color space with RGB channels (that’s why color() is incompatible with oklch() and other non-RGB color spaces).

--primary-color: color(display-p3 1 0.4 0);

I’m not in love with RGB values because they’re tough to understand, unlike, say, HSL. I’m sure I’ll still use oklch() or hsl() in most cases for that very reason. It’s a bummer we can’t do something like this:

/* 👎 */
--primary-color: color(oklch 70% 0.222 41.29);

We have to do this instead:

/* 👍 */
--primary-color: oklch(70% 0.222 41.29);

The confusing thing about that is it’s not like Display P3 has its own function like OKLCH:

/* 👎 */
--primary-color: display-p3(1 0.434 0.088);

We’re forced to use color() to tap into Display P3. That’s at odds with OKLCH/OKLAB, where we’re forced to reach for those specific functions.

Maybe one day we’ll have a global color() function that supports them all! Until then, my CSS will use both color() and specific functions like oklch() and decide which is best for whatever I’m working on.

I’ll also toss color-mix() in this bucket, as it gained full support at the same time as color(). It’s not something I write regularly yet, but I’ll certainly play with it, likely for creating color palettes.
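
Here’s a rough sketch of how I imagine using it for palettes (the values are made up):

:root {
  --brand: oklch(70% 0.2 250);
  --brand-tint: color-mix(in oklch, var(--brand) 30%, white);
  --brand-shade: color-mix(in oklch, var(--brand) 70%, black);
}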

Honorable Mentions

It would be quite a feat to comment on every single new CSS feature that has shipped over the past five or so years. The main theme when it comes to which features I am not using in my day-to-day work is that they are simply too new or they lack browser support. That doesn’t mean I won’t use them (I likely will!), but for the time being, I’m merely keeping a side-eye on them or simply having a fun time dabbling in them.

Those include:

  • Trigonometric functions,
  • Anchor position,
  • Scroll-linked animations,
  • initial-letter,
  • <selectmenu> and <popover>,
  • View transitions,
  • Scoped Styles.

What about you? You must be writing CSS differently now than you were five years ago, right? Are you handling the cascade differently? Do you write more vanilla CSS than reaching for a preprocessor? How about typography, like managing line heights and scale? Tell me — or better yet, show me — how you’re CSS-ing these days.

]]>
hello@smashingmagazine.com (Geoff Graham)
<![CDATA[How To Create A Rapid Research Program To Support Insights At Scale]]> https://smashingmagazine.com/2023/07/create-rapid-research-program-support-insights-scale/ https://smashingmagazine.com/2023/07/create-rapid-research-program-support-insights-scale/ Wed, 12 Jul 2023 15:00:00 GMT While the User Experience practice has been expanding and will continue to balloon in the coming years, so have its sub-disciplines such as content strategy, operations, and user research. As the practice of UX Research matures, scalability will continue to be important in order to meet the rapid needs of iterative product development.

While there are several effective ways to scale user research, such as increasing researcher-to-designer ratios, leveraging big data and real-time analytics, or research democratization, one of the most effective methods is developing a Rapid Research program. In a Rapid Research program, teams are provided quick insight into key problems at an unprecedented operational speed.

Rapid Research-type support has been around for a while and has taken different shapes across different organizations. What remains true, however, is the goal to provide actionable insights from end-users at a quick pace that fits within product sprints and maintains pace with agile development practices.

In this article, I’m going to unpack what a Rapid Research program is, how to build one in your organization, and underscore the unique benefits that a program like this can provide to your team. Given that there is no singular ‘right way’ to scale insights or mature a user research practice, this outline is intended to provide building blocks and considerations that you may take in the context of the culture, opportunities, and challenges of your organization.

What Is Rapid Research?

Rapid research is a relatively recent program where typical user research practices and operations are standardized and templatized to provide a consistent, repeatable cadence of insights. As the name suggests, a core requirement of a rapid research program is that it delivers quicker-than-average insights. In many teams, this means delivering research on a weekly cadence where a confluence of guardrails, templates, and requirements work to ensure a smooth and consistent process.

Programs like Rapid Research may be created out of a necessity to keep up with the pace of development while freeing the bandwidth of expert researchers’ time for more complex discovery work that often takes longer. A rapid research program can be a crucial component of any team’s insight ecosystem, balanced against solving different business problems with flexible levels of support.

Scope

Research Methods

In order to make research more rapid, teams may consider some research methodologies out of the question in their Rapid Research program. Methods such as longitudinal diary studies, surveys, or long-form interviews might suffer from lower quality if done too quickly. When determining the scope of your rapid research program, ask yourself what methods you can easily templatize and, most importantly, which best support the needs of your experience teams.

For example, if your experience teams work on 2-week sprints and need insights in that time, then you will need to consider which research methods can reliably be conducted in 1–2 week increments.

Sample Size And Research Duration

Methods alone won’t ensure a successful implementation of a rapid research program. You will also need to consider sample size and session duration. Even if you decide usability tests are a reasonable methodology for your rapid research framework, you may be introducing too much complexity to run them with 15+ users within 60-min sessions and analyze all that data efficiently. This may require you to narrow your focus to fewer sessions with shorter duration.

Participant Recruitment

While there may be fewer and shorter sessions for each study, you also need to consider your participant pool. Recruitment is one of the most difficult aspects of conducting any user research, and this effort must be considered when determining the scope of the program. Recruitment can jeopardize the pace of your program if you source highly specific participants or if they are harder to reach due to internal bureaucracy or compliance constraints.

In order to simplify recruitment, consider what types of participants are both the easiest to reach and who account for the most use cases or products you expect to be researching. Be careful with this, though, as you don’t want to broaden your customer profiles too much for fear of not getting the helpful feedback you need, as UserZoom says:

“Why is sourcing participants such a challenge? Well, you could probably find as many users as you like by spreading the net as wide as possible and offering generous incentives, but you won’t necessarily find the ‘right’ participants.”

— UserZoom, “Four top challenges UX teams face in 2020 and how to solve them”

Timing

Why Timing Matters

Coupled tightly with scope, the timing of your rapid research end-to-end process will be paramount to the program’s success. Even if you have narrowed the scope to only a handful of research methods with limited sessions at shorter durations and with specific participant profiles, it won’t be ‘rapid’ if your end-to-end project timeline is as long as your average traditional study. Care must be taken to ensure that the project timelines of your rapid research studies are notably quicker than your average studies so that this program feels differentiating and adds value on top of the work your team is already doing.

Reconsidering scope

If your timelines are about the same, or your rapid cadence is less than 50% more efficient than your average study, consider whether you’re being judicious enough with the scope described above. Always monitor your timelines and identify where you can speed things up or limit the scope to reach an acceptably quick turnaround. One way to support shorter project timelines is through compartmentalization.

Compartmentalization

About Compartmentalization

One way to balance scope, timing, and consistency is by breaking up pieces of your average study process into smaller, separate efforts. Consider what your program would look like if you separated project intake from the study kick-off or if discussion guides were not dependent on recruitment or participant types. Splitting out your workflow into separate parts and templating them may eliminate typical dependencies and streamline your processes.

Ways To Compartmentalize

Once you’ve determined the set of research methods and ideal participants to include in your program, you may:

  • Templatize the discussion guides to provide a quick starting point for researchers and cut down on upfront preparation time.
  • Create a consistent recruitment schedule independent of the study method to start before study intake or kick-off to save upfront time.
  • Pre-schedule recurring kick-off and readout sessions to set expectations for all studies while limiting timeline risk when at the mercy of others’ calendars.

There is a myriad of opportunities to do things differently than your typical research study when you reconsider the relationships and interdependencies in the process.

Consistency

Expectability

While a quality rapid research program takes into consideration scope, timing, and compartmentalization, it also needs to consider consistency. It would be difficult to discern whether the program is “rapid” if one study takes one week and the next takes 2.5 weeks. Both may be below your current study average, but project stakeholders may blur the lines between your rapid studies and your typical studies due to the variability in approach. In addition, it may be difficult to operationalize compartmentalization or rapid recruitment without some form of expected cadence.

More Agility

As you and your team get used to operating within your rapid cadence, you may identify additional opportunities to templatize, compartmentalize or focus scope. If the program is inconsistent from study to study, it may be more difficult to notice these opportunities for increased agility, hindering your program from becoming even more rapid over time.

A Rapid Research Case Study

While working at one of the largest telecommunications companies in the US, I had the privilege of witnessing the growth of the UX Research team from just four practitioners to over 25 by the time I left. During this time, the company had matured its user experience practice, including the standards, processes, and discipline of user research.

Identifying The Need

As we grew, human insight became a central part of the product development process, which meant an exponential increase in its demand. While this was a great thing and allowed our team to grow, the work we were doing was not sustainable — we were constantly trying to keep pace with product teams who brought us in too late in the process simply to validate their ideas. Not only did we always feel rushed, but we were stuck doing only evaluative work, which not only stifled innovation but also did not satisfy our more senior researchers who wished to do more generative research.

How It Fits In

Once diagnosing this issue, our leadership initiated several new processes to build a more well-rounded research portfolio that supported iterative research while enabling generative research. This included a democratization program, quarterly planning, and my initiative: Rapid Research. We determined that we needed a program that would allow us to take on mid-sized projects at the pace of product development while providing a new opportunity to hire junior researchers who would be a great talent pool for our team and provide a meaningful way for those new to the field to grow their skills.

Getting Started

In order to build the rapid research program, I audited the previous year’s worth of research to determine our average timelines, the most common methodologies used for iterative and mid-sized projects, and to identify our primary customer who we do research with most often. My findings would be the bedrock of the program:

  • Most iterative research consisted of lightweight interviews and brief usability tests.
  • Many objectives could be covered in 30-minute sessions.
  • Mid-sized projects were often with just a handful of current customers.
  • Our average study time was 2–3 weeks, so we’d need to cut this down.
  • Given the above constraints, study goals should be highly focused.

Building The Program

At first, we did not have the budget for hiring new junior researchers to staff the program team. What we did have, however, was a contract with a research vendor who we’ve worked with for years, so we decided to partner with researchers from their team to run our rapid research program.

  • We created specific templates for ‘rapid’ usability tests and interviews.
  • Studies were capped at two objectives and only a handful of questions in order to fit into 30-min sessions.
  • Study intake was governed via a simple intake form, required to be filled out by EOD every Wednesday.
  • We scheduled standing kick-off and readout sessions every Friday and shared these invites with product teams for visibility.
  • To further establish our senior researchers as Portfolio Research Leads and to protect against scope creep, we required teams to formally request ‘rapid’ studies through them first.
  • We started our rapid cadence at two weeks and were able to cut it down to just one week after piloting the program for a month.

Strong Results

We saw the incredible value and strong results from building our rapid research program, especially alongside the other processes our team was standing up to support varying insights needs.

  • Speed
    We were able to eventually run three research studies simultaneously, enabling us to deliver more research at twice the pace of a traditional study.
  • Scale
    Through this enablement of speed, consistent recruitment, and templatized process, we ran over 100 studies & 650+ moderated interviews.
  • Impact
    Because we outsourced rapid research to a vendor, our team was freed up to deliver foundational research, which doubled our work capacity.
  • Growth
    Eventually, we hired junior researchers and transitioned the program from the vendor, increasing subject matter expertise & operational efficiency.

How To Build A Rapid Research Program

The following steps outline a process for getting started with building your own rapid research program in your organization. Exactly which steps you choose to follow, or if you decide to add more or less to your process, will be entirely up to you and the unique needs of your team. Follow the proceeding steps while considering the above guidelines regarding scope, timing, compartmentalization, and consistency.

Determine If You Even Need A Rapid Research Program

While seemingly counter-intuitive, the first step in building a rapid research program is considering whether you even need one in the first place. Every new initiative or tactic intended to mature user research practice should consider the available talent and capabilities of the team and the needs or opportunities of the organization it sits within. It would be unfortunate to invest time to build a robust, rapid research program only to find that nobody uses or needs it.

Reflection On Current Needs

Start by documenting the needs of your experience teams or the organization you support by the different types of requests you receive.

  • Are you often asked to deliver research faster?
  • What are the types of research which are most often requested?
  • Does your team have the capability or operational rigor required to deliver at a faster pace?
  • Are you staffed enough to support a more rapid pace, even if you could deliver one?
  • Is delivering faster, rigidly-scoped research in service to your long-term goals as a research team, or might it sacrifice them?

Gather More Information

Answering these questions should be your first step before any meaningful work is done to build a rapid research program. In addition, you might consider the following information-gathering activities:

  • Audit previous research you or your team have done to determine their average scope, timeline, and method.
  • Conduct a series of internal stakeholder interviews to identify what potential value a rapid research program might hold.
  • Look for signals about where the organization is going. Whether leadership is hiring, training teams on agile methods, or asking teams to step back and focus on discovery can help you decide when and where to invest your time.

These additional inputs will either help you refine your approach to building a program or to steer away from doing so.

Limitations Of Rapid Research

Finally, when considering if you should build a rapid research program in the first place, you should consider what the program cannot do.

  • What a rapid research program might save on time, it cannot necessarily save on effort. You will still need researchers to deliver this work, which means you may need to restructure your team or hire more people.
  • If you decide to make your rapid research program self-service, you likely will still need ResOps support for recruitment and managing the intake process effectively.
  • It is also possible to hire a research vendor partner to lead this program, though that will require a budget that not every team may have.
  • As mentioned above, a good rapid research program is tight and focused in its scope, which limits the type of projects it can accommodate.

Identify Your Starting Scope, Timing & Cadence

Once you’ve decided to pursue a rapid research program, you’ll need to understand what form your program should take in order to deliver the highest value to your team and those you support. As mentioned above, a right-sized scope should consider the research methods, requirements, session quantity & duration, and participant profiles, which you can confidently accommodate. And you will need to determine the end-to-end timing and program cadence that differentiates from current work while providing just enough time to still deliver sustainable quality.

Determine Participant Profiles

Start building your scope backwards from the needs gaps you’re filling within your team based on the answers to the discovery questions above. You’ll want to identify the primary type(s) of end-users this program will research.

  1. Audit the past 6–12 months of research you or your team has done, looking at the most common customer type with whom you do research.
  2. Then, couple that with any knowledge you may have of where the business or your experience teams will be focused for the following 6–12 months.

For example, if your audit revealed that your team had focused most frequently on current customers over the past year, and you also know that your business will soon focus on the acquisition of new customers, consider including both current customers and prospective customers in your rapid research scope.

Remember the important note about consistency above? Once you’ve identified potential participant profiles, make sure you can consistently recruit them. For example, if you use a research panel to source participants for research studies, test the incidence of your participant profiles. If you find they don’t have many panelists with the attributes you need, you might spend too much time in recruitment and jeopardize the speed of the program.

A balance should be struck between participant profiles that are specific enough to be useful for most projects and those broad enough to reach easily.

Determine Research Methods

You can conduct the same audit and rough forecasting when determining the research methods your program ought to support but with two additional considerations:

  1. Team strategy,
  2. Individual career development.

User researchers often want to focus their work further upstream, where they’re driving product roadmaps or influencing business strategy. This bodes well for a rapid research program focused on evaluative research projects, which are often quicker and cheaper to conduct.

The ultimate goal is for the rapid research program to complement what your team provides, or to act as an enabler that frees up their bandwidth so that they can focus on the type of work they want to do more of.

Right-size Research Methods

Once you’ve determined which research methods you want to include in your rapid research program, consider the level of rigor you need to balance effort and complexity.

Determining Timelines

Project timelines within a rapid research cadence are directly affected by the above scope decisions for participant profiles and research methodology. Timelines can also compound in highly regulated industries such as healthcare or banking, where you may be required to gather legal & compliance approval on every moderation guide. In order to call this a rapid research program, the end-to-end project timelines need to be shorter than a typical project of a similar scope, or at least feel that way.

  1. Scope current minimum effort
    Start by jotting down the minimum amount of time it takes a researcher on your team to do each sub-step in your current non-rapid research process. Do this for the same participant profiles and methods you want to include in your rapid research program.
  2. Dependencies
    Now, identify which sub-steps are dependent on others and think of ways to program them in order to build efficiency. For example, if you need legal approval on every moderation guide before data collection, which takes 2–3 days, see if Legal will commit to a 24-hour SLA for rapid-research-specific projects. Another example: if you typically give stakeholders a few days to provide feedback on moderation guides, shorten that window for rapid research projects to cut down dependency time.
  3. Identify compartmentalization
    In addition to programming project dependencies, consider the above guidance for compartmentalizing parts of the process in order to remove dependencies entirely, such as with recruitment. Identify which parts of the process don’t have the same dependencies in your rapid research program and can be started earlier. By removing dependencies entirely, you may be able to do several things simultaneously to speed up project timelines.

Once you’ve documented your current research process (steps, dependencies, timing) and the changes you need to make to build efficiencies or remove dependencies, document what ‘must be true’ in order to consistently deliver identified changes. Create a table to document all of these details, then sum up the total timelines to compare your typical end-to-end research project timeline with your potential new ‘rapid’ timeline.
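
For illustration only, a hypothetical version of that comparison might look like this (your steps, dependencies, and durations will differ):

  • Recruitment: 5 days → 0 days (panel screened ahead of each cycle);
  • Guide drafting & stakeholder feedback: 4 days → 1 day (10-question template, one feedback round);
  • Legal & compliance review: 3 days → 1 day (24-hour SLA);
  • Sessions: 3 days → 2 days (five 30-minute sessions);
  • Analysis & reporting: 5 days → 1 day (structured notes, one-page summary);
  • Total: ~20 days → ~5 days.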

Ask yourself if this seems ‘rapid’ when stacked against your average study duration.

  • If not, look back at the guidance above. Ask yourself if there are other customer types that may be easier to get in front of that you haven’t considered. Consider whether you need to create a new process, expedite existing processes, or create new relationships in order to make your timelines even more rapid.
  • If so, congratulations! You might have just landed on the right scope for your rapid research program. Consider whether this new rapid timeline is something that you can deliver consistently and reliably over time and whether or not you have enough access to participants, and enough budget, to carry out this cadence long-term.

Build Infrastructure, Standards & Rules

It’s time to set the foundation. Return to the tables you made above and create an action plan, with steps and a timeline, to build the infrastructure required to bring your program to life. As part of this, you’ll need to establish the rules and standards for communicating with partners. You might consider a playbook and a formal scope document to inform others of the ins and outs of the program.

Gather Buy-in

Prioritize any work that requires buy-in, generating understanding, or acquiring budget first before spending your time and energy building templates or documentation. You wouldn’t want to create a 20-page scope document outlining the bandwidth for two researchers, a limit to 1 round of stakeholder feedback, and a 24hr SLA for legal approval, only to find out others cannot commit to that.

Create Templates

You’ll need plenty of templates, tools, and processes specific to the scope of your program.

  • If you’re limiting moderation guides to a maximum of 10 questions, then create a specific discussion guide template reflecting that.
  • If your data analysis will be sped up by using structured note-taking templates, create those.
  • If you’ve determined that all rapid research projects only require an executive summary one-pager, make that too.

Staffing

As mentioned above, even a drastically reduced version of your typical research processes still requires effort to support. You’ll need to determine, based on the expected scope and cadence of each rapid research project, how many researchers and/or research operations coordinators you’ll need to support the program. While all rapid research programs will require dedicated effort, there are creative ways of staffing the program, such as:

  • A dedicated team of 1–2 researchers and 1–2 Ops coordinators to deliver projects with the greatest efficiency and quality.
  • A dedicated team of 1–2 researchers who also handle the operations of running the program itself.
  • A self-service program, with 1–2 Ops coordinators for supporting anyone doing the research work.
  • Outsourcing the entire program to a vendor.

Work with your leadership, HR, and TA professionals on securing approval for any team restructure, needed headcount budget, or to onboard a new vendor. Then, take the appropriate steps to hire your next researcher or secure the staffing help you need to support your program.

Coaching And Guidance

Consider training, coaching, and check-in meetings as part of your infrastructure.

  • If you are staffing new researchers to this rapid research program, make sure they understand the expectations and have what they need to succeed.
  • If you’re implementing a self-service model, provide brown-bag sessions to partners to explain the program do’s and don’ts.
  • Schedule quarterly check-ins with partners and leadership to discuss the program accomplishments and any needed adjustments to ensure it stays relevant.

Pilot, Get Feedback, And Iterate Over Time

No matter how much preparation you do or how much time and effort you spend building the alliances, infrastructure, training, and support required to run your rapid research program effectively, you will learn that there are improvements you should make once you put it into practice.

There are many benefits to piloting a new program in an organization. One benefit is that it can mitigate risks and allow teams to learn quickly and early enough to make positive enhancements.

“Piloting offers a realistic preview experience for users at the earliest stages of development. It allows the organization and design team to gather real-time insights that can be used to shape and refine the product and prepare it for commercialization.”

— Entrepreneur, “Tasting As You Go: The 5 Benefits of ‘Piloting’”

This means setting expectations early. Consider your first few projects as pilots and expect them to be rocky and imperfect. Use this to your advantage by asking stakeholders you’re closest with to be your trial projects and let them know how important their honest feedback is throughout the process. Ensure that you have clear mechanisms to gather feedback at each project milestone so that you can track progress. It is especially important to capture what might be slowing you down along the way or putting your ‘rapid’ timelines at risk.

Program Evolutions, Impacts & Considerations

Potential Evolutions & Variations

While I’ve outlined a process for getting started, there are many ways in which your rapid research program may evolve over time to meet the needs of your organization better.

  • After a few cycles, you might find that volume isn’t as high as you anticipated, so you extend the 1-week timeline to every two weeks.
  • After a few months, your business might launch a new product line, requiring you to consider a new set of customer profiles in recruitment.
  • You may decide to leverage your rapid cadence for individual segments of a longitudinal diary study to accommodate new methods.
  • You might use rapid research projects to exclusively evaluate in-market products while others on the team focus on in-progress / new products.
  • Rapid research projects could be a stage-gate for larger projects — proving a customer need before larger time investments are made.

However your rapid research program takes shape, revisit its goals, scope, and operations often in relation to your organizational needs and context so that it remains relevant and delivers the highest impact.

Solid Impacts From Rapid Research

Building a rapid research program can have a big impact and can contribute positively toward your team’s long-term strategy. One impact of instituting a rapid research program could be that your team is now freed up to focus on more generative research, unlocking your ability to deliver deep customer insights that pave the way for innovation or strategy. And due to your new rapid pace, you may be able to keep pace with agile development and conduct end-to-end research within 2-week sprints. Another impact is that you may catch more usability issues further upstream, potentially saving the business over 100x the cost of fixing them after release. A final impact is that a rapid research program can double your team’s throughput, allowing you to deliver more research, more frequently, to accommodate more organizational needs.

Be sure to track these impacts over time so that you not only get credit for the hard work you put into building the program but so that you can sustain and grow the program over time.

Considerations When Building A Rapid Research Program

As mentioned in this article, there are many benefits to building a rapid research program. That being said, rapid research has limitations: you should weigh its pros and cons, understand when it should (and should not) be used, and be honest about whether you have the time available to stand up a program yourself.

Pros And Cons

As with building any new program, one should consider both its benefits as well as drawbacks. Here are a few for rapid research programs:

Pros:

  • Can free time for foundational work;
  • Rapid studies may keep a better pace with development cycles;
  • Can create meaningful opportunities for junior staff;
  • Can double project throughput, increasing output volume.

Cons:

  • Still requires work and dedicated bandwidth;
  • Another thing to diligently track and manage;
  • Not great for all types of research studies;
  • May cost more money or resources you don’t have.

Guidance For Using The Program

Rapid Research programs are best for specific types of research which do not take a long time to complete or require rigorous expertise. You may want to educate your partners on when they should expect to use a rapid research program and when they should not.

  • Use rapid research when:
    • Agility or quick turnaround is needed;
    • You need simple iterative research;
    • Stakeholder groups are easier to rally;
    • Participants are easy to reach.
  • Do not use rapid research when:
    • The study method cannot be done quickly without risking quality;
    • A highly complex or mixed-methods study is needed;
    • A project requires high visibility or stakeholder alignment;
    • You have specific, hard-to-reach participants.

Ramp Up Time

While the exact timeline of building a rapid research program varies from team to team, it does take time to do it right. Make sure to plan out enough time to do the upfront work of identifying the appropriate scope, timing, and cadence, as well as gathering consensus from leadership and appropriate stakeholder groups. Standing up a Rapid Research program can take anywhere from 3 months to 1 year, depending on the following:

  • Legal and compliance limitations or requirements.
  • The number of stakeholder groups you need buy-in from.
  • Approval of budget for outside vendors or for hiring an in-house team.
  • Time it takes to build templates, guidelines, and materials.
  • Onboarding, training, and iteration when starting out.

Conclusion

A rapid research program can be a fundamental part of your team’s UX Research strategy, enabling your team to take on new insight challenges and deliver efficient research at an unprecedented pace. Building a rapid research program with high intention by determining the goals, appropriate scope, and necessary infrastructure will set your team up for success and enable you to deliver more value for your organization as you scale your user research practice.

Don’t be afraid to try a rapid research program today!

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Devin Harold)
<![CDATA[Smashing Podcast Episode 63 With Chris Ferdinandi: What Is The Transitional Web?]]> https://smashingmagazine.com/2023/07/smashing-podcast-episode-63/ https://smashingmagazine.com/2023/07/smashing-podcast-episode-63/ Tue, 11 Jul 2023 08:00:00 GMT In this episode of The Smashing Podcast, we’re talking about The Transitional Web. What is it, and how does it describe the technologies we’re using? Drew McLellan talks to Chris Ferdinandi to find out.

Show Notes

Weekly Update

Transcript

Drew: He’s the author of the Vanilla JS Pocket Guide series, creator of the Vanilla JS Academy Training Program and host of the Vanilla JS Podcast. We last talked to him in late 2021 where we asked if the web is dead, and I know that because I looked it up on the web. So, we know he is still an expert in Vanilla JS but did you know he invented fish and chips? My smashing friends, please welcome back Chris Ferdinandi. Hi, Chris, how are you?

Chris: I’m smashing, thank you so much. How are you today, Drew?

Drew: I’m also smashing, thank you for asking. It’s always great to have you back on the podcast, the two of us like to chat about maybe some of the bigger picture issues surrounding the web. I think it’s easy to spend time thinking about the minutiae of techniques or day-to-day implementation or what type of CSS we should be using or these things but sometimes it is nice to take a bit of a step back and look at the wider landscape. Late last year, you wrote an article on your Go Make Things website called The Transitional Web. What you were talking about there is the idea that the web is always changing and always in flux. After, I don’t know how long I’ve been doing this, 25 years or so working on the web, I guess change is pretty much the only constant, isn’t it?

Chris: It sure is. Although, to be fair, it feels like a lot of what we do is cyclical and so we’ll learn something and then we’ll unlearn it to learn something new and then we’ll relearn it again just in maybe a slightly different package which is, in many ways, I think the core thesis of the article that you just mentioned.

Drew: And is that just human nature? Is that particular to the web? I always think of, when I was a kid in the ’80s, the 1980s, okay, so we’re talking a long while back-

Chris: It was a wild time.

Drew: One of the pinnacles that, if you had a bit of spending power, one of the things you’d have in your living room was a hi-fi separates. So, you’d have a tape deck, maybe a CD deck, an amplifier and I always remember as a kid, they’d all be silver starting off and those were the really cool ones. And then after a while, a manufacturer would come out with one that wasn’t silver, it was black and suddenly black looked really cool and all the silver stuff looked really old. And so, then you’d have five years of everything being in black and then somebody would say, "Oh, black’s so boring. Here’s our new model, it’s silver," and everyone would get really excited about that again. I feel like somehow the web is slightly and, as I say, maybe it’s a human nature thing, perhaps we’re all just magpies and want to go to something that looks a bit different and a bit exciting and claim that’s the latest, greatest thing. Do you think there’s an element of that?

Chris: Yeah, I think that’s actually probably a really good analogy for what it’s like on what our industry has a tendency to do. I think it’s probably bigger than just that. I had a really, I don’t want to say a really good thought, that sounds arrogant. I had a thought, I don’t know if it was good or not, I forgot what it was.

Drew: Oh, it’ll come back.

Chris: So, I can’t tell you but it was related as you were talking.

Drew: I bamboozled you with talk of hi-fi separates.

Chris: Yeah, no, that’s great. It’s great.

Drew: We last talked about this concept of the lean web where we were seeing a bit of a swing back away from these big frameworks where everything is JavaScript and even our CSS was in JavaScript. And we were beginning to see at that time a launch of things like Petite Vue, Alpine.js, Preact, these smaller, more focused libraries that try and reduce the weight of JavaScript and be a little bit more targeted. Is that a trend that has continued?

Chris: Yeah, and it’s continued in a good way. So, you still see projects like that pop up, I’ve seen since then a few more tiny libraries. But I think one of the other big trends that I’m particularly both excited about and then maybe also a little bit disheartened about is the shift beyond smaller client side libraries into backend compiler tools. So, you have things like Svelte and SvelteKit and Astro which are designed to let people continue to author things with a state-based UI, JavaScripty approach but then compile all of that code that would normally have to exist in the browser and run at runtime into mostly HTML with just the sprinklings of JavaScript that you need to do the specific things you’re trying to do.

Chris: And so, the output looks a lot more like traditional DOM manipulation but the input looks a lot more like something you might write in React or Vue. So, I think that’s pretty cool. It’s not without, in my opinion, maybe some holes that people can fall into, and I’m starting to see some of those tools do the “hey, we solved this cool thing in an innovative way, so let’s go and repeat some of the mistakes of our past but differently” trap. We can talk about that; it didn’t make it into the article that you referenced. I recently wrote an update looking back on how things are changing that talks about where they’re headed.

Chris: But I think one of the big things in my article, The Transitional Web, was this musing about whether these tools are the future or just a transitional thing that gets us from where we are to where we’re headed. So, for example, if you’ve been on the web for a while, you may remember that there was a time where jQuery was the client side library.

Drew: It absolutely was, yeah.

Chris: If you were going to do JavaScript in the web, you were going to use jQuery.

Drew: jQuery was everywhere.

Chris: Yeah. And not that you couldn’t get by without it but doing something like getting all of the elements that have a class was incredibly difficult back in the IE 6 through 8 era. And jQuery made it a lot easier, it smoothed things out across browsers, it was great. But eventually browsers caught up, we got things like querySelector and querySelectorAll, the classList API, cool methods for moving elements around like append and before and after and remove. And suddenly, a lot of the stuff that was jQuery’s bread and butter, you could just do across any browser with minimal effort. But not everything, there were still some gaps or some areas where you might need polyfills.
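
As a quick illustration of the native APIs Chris is listing here, a small sketch (not from the episode) of the patterns that absorbed most of jQuery’s bread and butter:

```js
// Get every element with a class (the old IE 6–8 pain point)
const panels = document.querySelectorAll('.panel');

// Toggle classes with the classList API
panels.forEach((panel) => {
  panel.classList.add('is-ready');
  panel.classList.toggle('is-open');
});

// Move elements around with append(), before(), after(), and remove()
const notice = document.createElement('p');
notice.textContent = 'Loaded without jQuery.';
document.body.append(notice);
notice.before(document.createElement('hr'));
```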

Chris: And so, you started to see these smaller tools that were ... jQuery, they did some of the things but they didn’t do everything so the ones that immediately come to mind for me are tools like Umbrella JS or Shoestring from the folks over at Filament Labs. And the thing with those tools is they were really popular for a hot minute, everyone’s like, "You don’t need jQuery, use these," and then the browsers really caught up and they went away entirely. And actually, even before that fully happened, you started to see tools like React and Vue and Angular start to dominate and just, really, people either use jQuery or these other tools, they don’t touch Umbrella or Shoestring at all.

Chris: So, I think the thing I often wonder is are tools like Preact and Solid and Svelte and Astro, are those more like what React did for the industry or more like Umbrella and Shoestring where they’re just getting us to whatever’s next. At the time that I wrote the article, I suspected that they were transitional. Now, I think my thoughts have shifted a little bit and I feel like tools like Preact and Solid are probably a little bit more, and Petite Vue who are ... You called it something weird because you’re British, I forget, Petty Vue or something but-

Drew: Petite Vue, yeah.

Chris: No, I’m just teasing, I’m sorry. I love you, Drew.

Drew: I was attempting to go for the French so ...

Chris: Right? Sorry, I have the way you guys say herb stuck in my head now instead of herb and I just can’t. So, yeah, I feel like those tools are potentially transitional and the what’s next just as an industry is, in my opinion, and a lot could change in the next year or three, the way I’m feeling now, it seems like tools like Astro and Svelte are going to be that next big wave at least until browsers catch up a lot. So, in my opinion, the things that browsers really need to have to make a lot of these tools not particularly necessary is some native DOM diffing method that works as easily as innerHTML does for replacing the DOM but in a way that doesn’t just destroy everything.

Chris: I want to be able to pass in an HTML string and say make the stuff in this element look like this with as little messing up of things as possible. And so, until we have that, I think there’s always going to be some tooling. There’s a lot of other things that these tools do like you can animate transitions between pages like you would in an SPA. We’ve got a new API that will hopefully be hitting the browser in the near future, it works in Chrome Canary now but nowhere else, the View Transitions API. There’s an API in the works for sanitizing HTML strings so that you don’t do terrible cross-site scripting stuff, hasn’t really shipped anywhere yet but it’s in the works.

Chris: So, there’s a lot of library-like things in the works but DOM diffing, I think, is really the big thing. So much of how we build for the web now is grab some data from an API or a database and then dynamically update the UI based on things the user does. And you can do that with DOM manipulation, I absolutely have but, man, it is so much harder to do. So, really, I get the appeal of state-based UI. The flip side is we also use state-based UI for a lot of stuff where it’s not appropriate and it ends up being harder to manage and maintain in the long run. So, I’m rambling, I’m sorry. Drew, stop me, ask me—
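
To make the wish concrete, here is a rough sketch (not from the episode) of the gap Chris is describing: innerHTML is easy but destructive, and a native diffing method would patch only what changed.

```js
const app = document.querySelector('#app');

// The easy but blunt option: every child node is destroyed and rebuilt,
// losing focus, scroll position, media playback state, and so on.
function render(html) {
  app.innerHTML = html;
}

// What a native diffing method (or a small library) would do instead:
// compare the incoming markup to what's already there and patch only
// the nodes that differ. `diff` here is hypothetical.
// diff(app, html);
```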

Drew: Yeah, I don’t want to gloss over the importance of jQuery as an example for this overall trend because, as you say, at the time, it was really difficult to just find things in the DOM to target something. You could give things an ID and then you had get element by ID and you could target it that way. But say you wanted to get everything with a certain class, that was incredibly difficult to do because there was no way of accessing the class list, you could just get the attribute value and then you would have to dissect that yourself. It’s incredibly inefficient to try and get something by class and what jQuery did was it took an API that we were already familiar with, the CSS selector API essentially, and implemented that in JavaScript.

Drew: And, all of a sudden, it was trivially easy to target things on the page which then made it ... It very quickly just became the de facto way that any JavaScript library was allowing you to address elements in the DOM. And because of that trend, because that’s how everybody was wanting to do it with quite a heavy JavaScript implementation, let’s not forget this was not a cheap thing to do, the web platform adapted and we got querySelector, which does the same thing, and querySelectorAll. And of course, then what jQuery did, or rather its selector engine, which I think was called Sizzle under the hood: Sizzle then adopted querySelectorAll as part of its implementation.

Drew: So, if a selector could be resolved using the native one, it would. So, actually, the web platform was inspired by jQuery and then improved jQuery in this whole cycle. So, I think the way the web has always progressed is observing what people are doing, looking at the problems that they’re trying to solve and the messes of JavaScript that we’re using to try and do it and then to provide a native way to do that which just makes everything so much easier. Is that the ultimate trend? Is that what we’re looking at here?

Chris: Yeah, for sure. I often describe jQuery as paving the cow paths. So many of the methods that I love and use in the browser, I owe entirely to jQuery and I think recognizing that helped me get less angry at some of the damage that modern frameworks do or modern libraries because the reality is they are ... I think the thing is a lot of them are experiments that show alternate ways to do things and then we have a tendency as an industry to be like, "If it’s good for this, it’s good for everything." And so, React is very good at doing a specific set of things in a specific use case and, through some really good marketing from Facebook, it became the defacto library of the web.

Chris: I think tools like Astro and Svelte are similarly showing a different way we can approach things that involves authoring and adding a compiled step. And they are, by no means, original there, static site generators have existed for a while, they just layer in this. And we’ll also spit out some reactive interacting bits and you don’t have to figure out how to do that or write your own JavaScript for it, just write the stuff, we’ll figure out the rest. So, yeah, I do think that’s the nature of the web platform is libraries are experiments that extend what the platform can already do or abstract away some of the tough stuff so that people can focus on building and then, eventually, hopefully, the best stuff gets absorbed back into the platform.

Chris: The potential problem with that model is that, usually, by the time that happens, the tooling has both gotten incredibly heavy to the detriment of end users and has become really entrenched. So, even though the idea ... Think of jQuery, we talk about it in the past tense but it’s still all over the web because these sites that were built with it aren’t just going to rip it out, it’s a lot of work to do that. And there’s a lot of developers even today who, when they start a new project, they reach for jQuery because that’s what they learned on and that’s what they know and it’s easiest for them.

Chris: So, these tools just really have persistence for better or for worse. It’s great if you invested a lot of time in learning them, it’s great job security, your time wasn’t wasted. But a lot of these tools are very heavy, very labor-intensive for the browser and ultimately result in a more fragile end user experience which is not always the best thing.

Drew: I remember, at one point, there was a movement calling for React to actually be shipped with the browser as a way of offsetting the penalty of downloading and pausing all that script. It is frustrating because it’s like, okay, you’re on the right path here, this functionality should be native to the browser but then, crucially, at the last moment, you swerve and miss and it’s like, no, we don’t want to embed React, what we want to do is look at the problems that React is helping people solve, look at the functionality that it’s providing and say, okay, how can we take the best version of that and work that into the web platform. Would you agree?

Chris: Yeah. Yeah, exactly. React will eventually be in the browser, just not the way everybody ... I think a lot of people talk about it as in literally, the same way jQuery is in the browser now, too. We absorb the best bits, put some different names on them, arguably more verbose, clunky, difficult to use names in many cases and so I think that’s how it’ll eventually play out. The other thing that libraries do that I wish the web platform was better at, since we’re on this path, is just API consistency.

Chris: So, it’s one thing that jQuery got really right: the API is very consistent in terms of how methods are authored and how they work. And just, as a counterpoint, in JavaScript proper, just native JavaScript, you could make a strong argument that querySelector and querySelectorAll shouldn’t be separate methods, it should just be one method that has a much shorter name that always returns ... Hell, I’d even argue an array, not a NodeList, because there are so many more methods that you can use to loop over and manipulate arrays than nodes or node lists.

Chris: Why is the classList API a set of methods on a property instead of just a set of methods you call directly on the element? So, why is it classList.add, classList.remove instead of addClass, removeClass, toggleClass, et cetera. It’s just lots of little things like that, this death by a thousand cuts, I think, exacerbates this problem that, even when native methods do the thing, you still get a lot of developers who reach for tooling just because it smooths over those rough edges and often has good documentation. MDN fills the gap but it’s not perfect and, yeah.
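
A small example (not from the episode) of the rough edges being described, assuming nothing beyond standard DOM and Array APIs:

```js
// querySelectorAll() returns a NodeList, not an Array, so most array
// methods aren't available on it directly.
const items = document.querySelectorAll('li');
// items.map(...) would throw: items.map is not a function
const labels = Array.from(items).map((li) => li.textContent);

// And class manipulation hangs off a property rather than the element:
items.forEach((li) => li.classList.add('active'));
// ...instead of a hypothetical li.addClass('active')
```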

Drew: Yes, using a well-designed framework, the methods tend to be guessable. If you’ve seen documentation that includes a remove method, you could probably guess that an add method is the opposite of that because that’s how anybody would logically name it. But it’s not always that way with native code, I guess, because of reasons, I don’t know, designed by committee, historical problems. I know that, at one point, there were, was it MooTools or prototype or some of these old frameworks that would add their own methods and basically meant that those names couldn’t be reused for compatibility reasons.

Chris: Yeah, I remember there was that whole SmooshGate thing that happened where they were trying to figure out how to ... I think it was the flat method or whatever that originally was supposed to be called. MooTools had an array method of the same name attached to the array prototype. If the web standards committee implemented it the way they wanted to, it would break any website using MooTools, a whole thing.

Drew: In some ways, it seems laughable, any website using MooTools. If your website’s using MooTools, good luck at this point. But it is a fundamental attribute of the web that we try not to break things that, once it’s deployed, it should keep running and a browser update isn’t going to make your use of HTML or CSS or whatever invalid, we’re going to keep supporting it for as long as possible. Even if it’s been deprecated, the browsers will keep supporting it.

Chris: Yeah. I was just going to say, the marquee element was deprecated ages ago and it still works in every major browser just for legacy reasons. It’s that core ethos of the web which is the thing I love. I think it’s a good thing, yeah.

Drew: It is but, yes, it is not without its problems as we’ve seen. It’s, yeah, yeah, very difficult. You mentioned the View Transitions API, which I think now may be more broadly supported. I don’t know, I think I saw in one of the web.dev posts that, as of this month, it has better support. It’s still in a transitional state, but it gives you an SPA-style transition between one state and another, except you can do it with multi-page apps.

Chris: Yeah. A Discord I’m in, just a quick shout out to the Frontend Horse Discord, Adam Argyle was in there today talking about how, because he built this slide demo thing where every slide is its own HTML file and it uses View Transitions to make it look like it’s just one single page app. He was saying that it still does require Chrome Canary with a flag turned on but things change quickly, very slowly and then all at once, that’s the-
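
For reference, the same-document form of the API that had shipped in Chromium at the time looks roughly like this; the cross-document, multi-page behavior Chris mentions was still experimental, so treat this as a sketch and feature-detect:

```js
function navigate(updateTheDom) {
  // Fallback for browsers without the API: same end state, no animation
  if (!document.startViewTransition) {
    updateTheDom();
    return;
  }
  // The browser snapshots the old state, runs the callback, then animates
  // between the two states.
  document.startViewTransition(() => updateTheDom());
}
```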

Drew: Well, that’s pretty up to date and an authoritative statement there, yeah. But we saw, it was Google I/O recently, we saw loads of announcements from them about things they’re working on. Things like the popover API, which is really interesting, which makes use of this top layer concept where you don’t have to futz around with z-index to make sure, if something needs to be on the top, it can be on the top. It’s these sorts of solutions that you get from the web platform that are always going to be a bit of a hack if they’re implemented by a library in JavaScript.

Drew: It’s the fact that you can have a popover that you can always guarantee is going to be on top of everything else, and that has accessible dismissal baked into its behavior, all those really important subtleties that are so easy to get wrong with a JavaScript implementation and that the web platform just gets right. And I guess that means that the web platform is always going to move more slowly than a big framework like React or what have you, but it does it for a reason, because every change is considered for, I don’t know, robustness and performance and accessibility and backward compatibility. So, you end up with, ultimately, a better solution even if it has weird method names.
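
A minimal sketch of the Popover API Drew is describing (browser support was still limited at the time, and the button selector is a placeholder, so this is illustrative rather than production-ready):

```js
// An element with the popover attribute is promoted to the browser's
// top layer when shown: no z-index juggling, and popover="auto" gives
// you light dismiss and Escape handling for free.
const tip = document.createElement('div');
tip.setAttribute('popover', 'auto');
tip.textContent = 'I sit in the top layer.';
document.body.append(tip);

document.querySelector('#help-button')?.addEventListener('click', () => {
  tip.togglePopover();
});
```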

Chris: For sure. Yeah, no, that’s totally fair.

Drew: I think-

Chris: Yeah.

Drew: ... we had-

Chris: Oh, sorry. Go ahead, Rich. Drew rather.

Drew: I was about to mention Rachel, we had Rachel Andrew on the show a few episodes ago talking about Google Baseline which is their initiative to say which features are supported to replace a browser support matrix idea. And if you look at the posts that Rachel writes what’s new on the web on web.dev every month, she does a roundup of what’s now stable, what you can use and there’s just a vast amount being added to the web platform all the time. It could be a change log from a major framework because it is a major framework, it’s the native web platform but there’s just things being added all the time. Is there anything in particular that you’ve seen that you think would make a big difference or are you just hanging out for that DOM diffing after all those things that are yet to come?

Chris: Yeah. So, things like transitions between pages and stuff, I’m going to be honest, those don’t excite me as much as I think they excite a lot of other people. I know that’s a big part of the reason why a lot of developers that I know really like SPAs and, I don’t know, markers, get really excited about that thing, I’ve just never really understood that. I am really holding out for a DOM diff method. I think the API I’m honestly most excited for is the Temporal API which is still in, I think, stage three so it’s not coming anytime soon. But working with dates in JavaScript sucks and the Temporal API is hopefully going to fix a lot of those issues, probably introduce some new ones but fix most of them.

Drew: This is new to me. Give us a top level explanation of what’s going on with that one.

Chris: Oh, yeah, sure. So, one of the big things that’s tough to do with the date object in JavaScript ... For me, there’s two big things that are really particularly painful. One of them is time zones. So, trying to specify a time in a particular time zone or get a time zone from a date object. So, based on when it was created or how it was created, no, this is the time zone. Figuring out the time zone the person is in is really difficult. And then the other aspect that’s difficult is relative time. So, if you’ve got two different dates and you want to just quickly figure out how much time is between them, you can do it but it involves doing a bunch of math and then making some assumptions especially once you get past days or, I guess, weeks.

Chris: So, I could easily look at two date objects, grab timestamps from them and be like, "Okay, this was two weeks ago or several days." But then, once you start getting into months, the amount of days in a month varies. So, if I don’t want to say 37 weeks, I want to say, however many months that ends up being, it’s going to vary based on how long the months were. And so, the Temporal API addresses a lot of those issues. It’s going to have first class support for time zones, it’s going to have specific methods for getting relative time between two temporal objects and, in particular, one where you hopefully won’t have to ...

Chris: It’s been a while since I’ve read the spec but I’m pretty sure it allows you to not have to worry about, if it’s more than seven days, show in weeks, if it’s more than four weeks, use months. You can just get a time string that says this was X amount of time ago or is happening N amount of time in the future or whatever. So, there are certain things the Date API, or the date object, can do relatively well, but then there are a couple of gaps once you’re trying to do appy stuff with it.
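
As a small illustration (not from the episode) of the arithmetic Chris is describing with today’s Date object:

```js
const published = new Date('2023-05-02');
const now = new Date('2023-07-11');

// Days and weeks are simple enough...
const msPerDay = 1000 * 60 * 60 * 24;
const days = Math.round((now - published) / msPerDay); // 70
const weeks = Math.floor(days / 7);                    // 10

// ...but "how many months is that?" depends on whether the months in
// between had 28, 30, or 31 days, which is exactly the bookkeeping a
// Temporal-style duration API is meant to take off our hands.
```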

Chris: For example, I once tried to build a time zone calculator so I could quickly figure out, for some of my colleagues in other parts of the world, what time it was for them. And it was just really hard to account for things like, oh, most of Australia shifted daylight savings time this month but this one state there doesn’t, they actually do it a different month or not at all, and so it was a huge pain.

Drew: Yeah, anything involving time zones is difficult.

Chris: Yeah. It’s one of the biggest problems in computer science. That and, obviously, naming things. But yeah, it will smooth over a lot of those issues with a nicer, more modern API. If you go over to tc39.es/proposaltemporal, they have the docs of the work in progress or the spec in progress. It’s authored a lot more nicely than what you might normally see on, say, the W3C website in terms of just human readability but you can tell they borrowed a lot of the way the API works from libraries like Moment.js and date-fns and things like that. Which again gets back to this idea that libraries really pave those cow paths and show what a good API might look like and then the best ones usually win out and eventually become part of the browser.

Drew: And again, back to my point about the web platform getting the important details right, if you’ve got native date objects, you’re going to be able to represent those as localized strings which is a whole other headache. I’ve used libraries that will tell you, “Oh, this blog post was posted two weeks ago,” but it’ll give you the string two weeks ago and there’s no way to translate that or, yeah. So, all those details, having it baked into the platform, that’s going to be super good.
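
For what it’s worth, part of this is already in the platform: Intl.RelativeTimeFormat handles the localization of relative-time strings, although you still have to pick the unit yourself. A quick sketch:

```js
const rtf = new Intl.RelativeTimeFormat('en', { numeric: 'auto' });
rtf.format(-2, 'week'); // "2 weeks ago"

// The same call, localized, with no hard-coded English string to translate
const rtfFr = new Intl.RelativeTimeFormat('fr', { numeric: 'auto' });
rtfFr.format(-2, 'week'); // "il y a 2 semaines"
```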

Chris: Smashing, one might even say.

Drew: Smashing, yeah. It raises a question because the standards process takes time and paving the cow paths, there has to be a cow path before you pave it. So, does that approach always leave us a step behind what can be done in big frameworks?

Chris: Yeah, theoretically. I think we can look at an example where this didn’t work with the Toast API that Google tried to make happen a few years back. That was done relatively quickly, it was done without consensus across browsers, I don’t think it really leaned heavily on ... I think it was just doing what you described, the paving before the cow paths were there and so it was just met with a lot of resistance. But yeah, I think the platform will always be a bit behind, I think libraries are always going to be a part of the web. Even as the Vanilla JS guy, I use libraries all the time for certain things that are particularly difficult.

Chris: For me, that tends to be media stuff. So, if I need to display really nice photo galleries that expand and shrink back down and you can slide through them, I always grab a library for that, I’m not coding that myself. I probably could, I just don’t want to, it’s a lot of little details to manage.

Drew: It’s a lot of work, yeah.

Chris: Yeah. So, I do, I think the platform will always be behind, I don’t think that’s necessarily a bad thing. I think, for me, the big thing I’ve wished for years is that we run through this cycle as an industry where a little tool comes out, does a thing well, throws in more and more features, gets bigger and bigger, becomes a black hole and just sucks up the whole industry. I keep picking on React but React is the library right now. And then eventually people are like, "Oh, this is big, maybe we should not use something as big," and then you start to see little alternatives pop up. And I really wish that we stopped doing that whole bigger black hole thing and the tools just stayed little and people got okay with the idea that you would pull together a bunch of little tools instead of just always grabbing the behemoth that does all the things. I often liken it to people always go for the Swiss Army knife when they really just need a toothpick or a spoon or a pair of scissors. Just grab the tool you need, not the giant multi-tool that has all this stuff you don’t.

Drew: It almost comes back to the classic Unix philosophy of lots of small tools that do specific things, with a common interface between them so that you can chain stuff together.

Chris: And that’s probably where ... Now that you’re saying it, I hadn’t really considered this but that’s probably where the behavior or the tendency arises is, if you have a bunch of small libraries from the same author, they often play together very nicely. If you don’t, they don’t always, yeah, it’s tougher to chain them together or connect those dots. And I really wish there was some mechanism in place that incentivized that a little bit more, I don’t know. I got nothing but I hadn’t really considered that until you just said it.

Drew: Maybe it needs to be a web platform feature to be able to plug in functionality.

Chris: Yeah. Remember the jQuery, I think it was called the extend method or they had some hook that, if you were writing a plugin, basically you would attach to the jQuery prototype and add your own things in a non-destructive way. I wish there was some really lightweight core that we could bolt a bunch of stuff into, that would be nice.
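
Not jQuery’s actual code, just a sketch of the kind of extend point Chris is recalling: a tiny hypothetical core whose prototype plugin authors can bolt methods onto without touching anything else:

```js
function Core(selector) {
  this.elements = Array.from(document.querySelectorAll(selector));
}

// The extend point: plugins add methods to Core.prototype
Core.prototype.addClass = function (name) {
  this.elements.forEach((el) => el.classList.add(name));
  return this; // keep it chainable
};

// A third-party plugin bolting itself on, non-destructively
Core.prototype.fadeOut = function () {
  this.elements.forEach((el) => { el.style.opacity = '0'; });
  return this;
};

new Core('.notice').addClass('is-hiding').fadeOut();
```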

Drew: Yes, and I think that would need to come from the platform rather than from any third party because otherwise the interface would never be agreed upon.

Chris: Very true.

Drew: You talk a lot about Vanilla JavaScript as a concept, I think it helps to give things names. I feel like this approach that we’re talking about here is being web platform native. Do you think that describes it accurately?

Chris: Yes. Yeah, definitely.

Drew: Yeah. So, you’ve talked about still reaching for libraries and things where necessary. Would you say that, if it is our approach to pave the cow paths that, really, the ecosystem needs these frameworks to be innovating and pushing the boundaries and finding the requirements that are going to stick, are they just an essential part of the ecosystem and maybe not so—

Chris: Yeah, probably more than I—

Chris: Yeah. I think, more than I’d like to admit, they are an essential part of the ecosystem. And I think what it comes back to for me is I wish that they did the one thing well and stayed a relatively manageable size. Preact, for example, has done a really great job of adding more features and still keeping themselves around three kilobytes or so, minified and gzipped, which is pretty impressive considering how much like React the API is. And they have fewer abstractions internally, so a lot of the dynamic updates (you, the user, interact with the page, some state changes, and a render happens) end up happening orders of magnitude faster than they do in React as well.

Chris: Now, to be fair, a lot of the reason why is that Preact is newer and it benefits from a lot of modern JavaScript methods that didn’t exist when React was created. So, under the hood, there’s a lot more abstraction happening, but it’d probably require a relatively big rewrite of React to fix that.

Drew: And we know those are always popular.

Chris: Yeah. They are dangerous, I understand why people don’t like to do them. I’ve done it multiple times, I always end up shooting myself in the foot, it’s not great.

Drew: So, say that I’m a React developer and I’m currently, day-to-day, building client side SPAs but I really like the sound of this more platform native approach and I want to give it a try for my next project. Where should I start? How do I dip a toe into this world?

Chris: Oh, it depends. So, the easiest way, and I hate myself for saying this, but the easiest way, honestly, you got a few options. One of them, you rip out React, you drop in Preact, there’s a second smaller thing you need to smooth over some compatibility between the two but that’s going to give you just an instant performance boost, a reduction in file size, and you can keep doing what you were doing. The way that I think is a little bit more future-proof and interesting, you grab a tool like Astro, which allows you to literally use React to author your code and then it’s going to compile that out ... Excuse me, into mostly HTML, some JavaScript, it’s going to strip out React proper and just add the little interactivity bits that you need.

Chris: I saw a tweet a year or two ago from Jason Lengstorf from the Netlify developer relations team about how he took a Next.js app that he had built, kept 90% of the code, he just made a few changes to make it fit into the way Astro hooks into things, ran the Astro compiler and he ended up having the same exact site with almost all of the same code but the shipped JavaScript was 90% smaller than what he had put in. And you get all the performance and resilience wins that come with that just automatically, just by slapping a compiler on top of what you already have.

Chris: So, I’m really excited about a tool like Astro for that reason. I’m also a little bit worried that a tool like Astro becomes a band-aid that stops us from addressing some of the real systemic issues of always reaching for these tools. Because you can just keep doing what you’re doing and not really make any meaningful changes and temporarily reduce the impact of them, I don’t know that it really puts us in a better place as an industry in the long run. Especially since tools like Svelte and Astro are now working towards this idea that, rather than shipping multi-page apps, they’re going to ship multi-page apps that just progressively enhance themselves into single page apps with hydration and now we’re right back to we’ve got an SPA.

Chris: So, I mentioned some stuff has changed, I recently saw a talk from Rich Harris, who’s the creator of Svelte and SvelteKit, about this very thing and he’s very strongly of the belief that SPAs are better for users because you’re not fetching and rerunning all of the JavaScript every time the page loads. And I get that argument and SvelteKit does it in a really cool way where, rather than having a link element like you might get in Next.js or something like that, a React router or whatever, they just intercept traditional hyperlinks and do some checking to see if they point to your current page or an external site and behave accordingly.
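
The general pattern Chris is describing looks something like this; it is a rough sketch, not SvelteKit’s actual implementation:

```js
document.addEventListener('click', (event) => {
  const link = event.target.closest('a[href]');
  if (!link) return;

  // Only take over same-origin navigations; let external links behave normally
  const url = new URL(link.href, location.href);
  if (url.origin !== location.origin) return;

  event.preventDefault();
  history.pushState({}, '', url);
  // ...then fetch the new page and swap its content into the document
});
```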

Chris: The thing that nobody ever talks about when they say SPAs are better is all of the accessibility stuff that they tend to break that you then need to bolt back in. So, even if you’re like, “Okay, this library is going to handle intercepting the links and finding the page and doing all the rendering and figuring out what needs to change and what stays the same,” there’s this often missed piece around how do you let someone who’s using a screen reader know that the UI has changed and how do you do it in a way that’s not absolutely obnoxious. You don’t want to read the entire contents of the page so you can’t just slap an ARIA live attribute on there.

Chris: Do you shift focus to the H1 element on the page? What happens if the user didn’t put an H1 element on the page? Do you have some visually hidden element that you drop some text in saying page loaded so that they know? Do you make sure you shift focus back to the top so they’re not stranded halfway down the page if they’re a keyboard user? It’s one of those things where how you handle it is very “it depends,” very contextual. And I think it’s really tough for a library to implement a solution that works for all use cases. I think it’s optimistic to assume the developers will always do the right thing.
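
One common way teams handle the gap Chris describes after a client-side navigation, again as a sketch rather than a one-size-fits-all answer:

```js
function announceRouteChange(title) {
  // A visually hidden live region, e.g. <div id="route-announcer" aria-live="polite">
  const liveRegion = document.querySelector('#route-announcer');
  if (liveRegion) liveRegion.textContent = `${title} page loaded`;

  // Move focus to the new page's heading so keyboard and screen reader
  // users aren't stranded wherever they were on the previous page
  const heading = document.querySelector('h1');
  if (heading) {
    heading.setAttribute('tabindex', '-1');
    heading.focus();
  }
}
```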

Chris: I mentioned at the very start that I’m excited about these tools but I also see them doing that let’s repeat the same mistakes all over again and this feels like that to me. I absolutely understand why, on certain very heavy sites, you might want to shift to an SPA model but there are also just so many places you can really do real harm to yourself or your users when going down that path. And so, I worry that these tools came up to solve a bunch of UI or UX and performance related issues with state-based UI just to then re-implement them in a different way eventually. That’s my soapbox on that. If you have any questions or comments, I’m happy to hear them.

Drew: So, as often happens when we talk, we get all the way to the end and conclude that we’re doomed.

Chris: We’re not. I think it’s mostly we’re headed in a right direction, Drew. I’m a little less doom and gloom than I was a few years ago. And as much as I just ragged on tools like Astro and Svelte, I think they’re going to do a lot of good for the industry. I just love the move to mostly HTML, sprinkle in some JavaScript, progressively enhance some things, that’s a beautiful thing. And even though I was just ragging on the whole SPA thing that these tools are doing, one of the things they also do that’s great is, if that JavaScript to enhance it into an SPA doesn’t load or fails for some reason, Astro and SvelteKit fall back to a multi-page app with server side HTML. So, I think that promise of, what was it, isomorphic apps they used to call them a while ago, it may be closer to that vision being realized than we’ve ever gotten before. I still personally think that just building multi-page apps is often better but I’m probably in the minority here, I often feel like I’m the old man shouting at the cloud.

Drew: And yes, as often happens, it all comes round to progressive enhancement being a really great solution to all of our problems. Maybe not all of our problems but some of them around the web.

Chris: It’s going to cure global hunger, you watch.

Drew: So, I’ve been learning all about being web platform native. What have you been learning about lately, Chris?

Chris: I’ve been trying to finally dig into ESBuild, the build tool/compiler. I’ve been using Rollup, and a separate npm Sass compiler thing, and my own cobbled-together build tool for years. And then Rollup v3 came out and broke a lot of my old stuff if I upgrade to it, so I’m still on Rollup 2, and this was the motivation for me to finally start looking at ESBuild, which also has the ability, I learned, to not just compile JavaScript but also CSS and will take nasty CSS imports and concatenate them all into one file for you just like ES modules would.

Chris: So, now I’m over here thinking like, “Oh, is it finally time to drop Sass for native CSS?” and, “Oh, all these old Sass variables I have, I should probably convert those over to CSS variables.” And so, it’s created this whole daisy chain of rabbit holes for me in a very good way, because this is the kind of thing that keeps what we do professionally interesting: learning new things.
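
A minimal sketch of the kind of ESBuild setup Chris is talking about, with placeholder file names; one call bundles both the JavaScript and the CSS, flattening @import chains into a single file:

```js
import * as esbuild from 'esbuild';

await esbuild.build({
  entryPoints: ['src/main.js', 'src/main.css'],
  bundle: true,   // follow imports (JS and CSS) and concatenate them
  minify: true,
  outdir: 'dist',
});
```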

Drew: You’ll be the Vanilla CSS guy before we know it.

Chris: That’s Steph Eckles. She is much better at that than I am. I reach for her stuff all the time but, yeah, maybe a little bit.

Drew: If you, dear Listener, would like to hear more from Chris, you can find his social links, blog posts, developer tips newsletter and more at gomakethings.com. And you can check out his podcast at vanillajspodcast.com or wherever you’re listening to this. Thanks for joining us today, Chris. Did you have any parting words?

Chris: No, Drew, just thank you so much for having me. I always enjoy our chat so it was great to be here.

]]>
hello@smashingmagazine.com (Drew McLellan)
<![CDATA[How To Become A Better Speaker At Conferences]]> https://smashingmagazine.com/2023/07/become-better-speaker-conferences/ https://smashingmagazine.com/2023/07/become-better-speaker-conferences/ Tue, 11 Jul 2023 07:00:00 GMT During my time curating the UX London and Leading Design events, I used to watch a few hundred presentations each year. I’d be looking at a range of things, including the speaker’s domain experience and credibility, their stage presence and ability to tell a good story, and whether their topic resonated with the current industry zeitgeist.

When you watch that many presentations, you start to notice patterns that can either contribute to an absolutely amazing talk or leave an audience feeling restless and disengaged. But before you even start worrying about a delivery, you need to secure yourself a spot on the stage. How? Follow me along, and let’s find out!

Choosing What To Speak About

I think one of the biggest misunderstandings people have about public speaking is the belief that you need to come up with a totally new and unique concept — one that nobody has spoken about before. As such, potentially amazing speakers will self-limit because they don’t have “something new to share.” While discovering a brand new concept at a conference is always great, I can literally count the number of times this has happened to me on the one hand. This isn’t because people aren’t constantly exploring new approaches.

However, in our heavily connected world, ideas tend to spread faster than a typical conference planning cycle, and the type of people who attend conferences are likely to be tapped into the industry zeitgeist already. So even if the curator does find somebody with a groundbreaking new idea, by the time they finally get on stage, they’ve likely already tweeted about it, blogged about it, and potentially spoken about it at several other events.

I think the need to create something unique comes from an understandable sense of insecurity.

“Why would anybody want to listen to me unless I have something groundbreaking to share?”

The answer is actually more mundane than you might think. It’s the personal filter you bring to the topic that counts. Let’s say you want to do a talk about OKRs (Objectives and Key Results) or Usability Testing — two topics which you might imagine have been “talked to death” over the years.

However, people don’t know the specific way you tackled these subjects, the challenges you personally faced, and the roadblocks you overcame. There’s a good chance that people in your audience will already have some awareness of these techniques. Still, there’s also a good chance that they’ve been facing their own challenges and want nothing more than to hear how you navigate your way around them, hopefully in an interesting and engaging manner.

Also, let’s not forget that there are new people entering our industry every day. There are so many techniques I’ve made the mistake of taking for granted, only to realize that the people I’m talking to have not only never practiced them before but might not have even come across them; or if they have, they might have only the scantest knowledge about them, gleaned from social media and a couple of poorly written opinion pieces.

In fact, I think our industry is starting to atrophy as techniques we thought were core ten years ago barely get a mention these days. So just because you think a subject is obvious doesn’t mean everybody feels the same, or that there isn’t room for new voices or perspectives on the subject.

Another easy way to break into public speaking is to do some kind of case study. So think about an interesting project you did recently. What techniques did you use, what approach did you take, what problems did you encounter, and how did you go about solving them? The main benefit of a case study type talk is that you’ll know the subject extremely well, which also helps with the nerves (more on this later).

Over the past few years, many excellent, very detailed case studies have been published on Smashing Magazine — take a look at this list for some inspiration.

Invest The Right Amount Of Time Doing Prep

Another thing people get wrong about public speaking is feeling the need to write a new talk every time. This also comes from insecurity (and maybe a little bit of ego as well). We feel like once our talk is out in the world, everybody will have seen it. However, the sad reality is that the vast majority of people won’t be rushing to view your talk when the video comes online, and even if they do, there’s a good chance they’ll only have taken in a fraction of what you said if they remember any of it at all.

It’s also worth noting that talks are super context-sensitive. I remember watching a talk from former Adaptive Path founder Jeff Veen at least five times. I enjoyed every single outing because while the talk hadn’t changed, I had. I was in a different place in my career, having different conversations and struggling with different things. As such, the talk sparked whole new trains of thought, as well as reminded me of things I knew but had forgotten.

It should be mentioned that, like music or stand-up comedy, talks get better with practice. I generally find that it takes me three or four outings before the talk I’m giving really hits its stride. Only then have I learnt which parts resonate with the audience and which parts need more work; how to improve the structure and cadence, moving sections around for a better flow, and I’ve learnt the bits which people find funny (some international and some not), and how to use pacing and space to make the key ideas land. If you only give a talk once, you’ll be missing out on all this useful feedback and delivering something that’s, at best, 60% of what it could potentially be.

On a practical level, a 45-minute talk can take a surprisingly long time to put together. I reckon it takes me at least an hour of preparation for every minute of content. That’s at least a week’s worth of work, so throwing that away after a single outing is a huge waste. Of course, that’s not what people do. If the talk is largely disposable, they’ll put a lot less effort in, often writing their talk “the night before the event.”

Unless you’re some sort of wunderkind, this will result in a mediocre talk, a mediocre performance, and a low chance of being asked to come back and speak again. Sadly this is one of the reasons we see a lot of the same faces on the speaker circuit. They’re the ones who put the effort in, deliver a good performance, and are rewarded with more invites. Fortunately, the quality bar at most conferences is so low that putting a little extra time into your prep can pay plenty of dividends.

Nailing The Delivery

As well as 45 minutes being a lot of content to create, it’s also a lot of content to sit through. No matter how interesting the subject is, a monotone delivery will make it very hard for your audience to stay engaged. As such, nailing the delivery is key. One way to do this is to see public speaking for what it is — a performance — and as the performer, you have a number of tools at your disposal.

First of all, you can use your voice as an instrument and try varying things like speed, pitch, and volume. Want to get people excited? Use a fast and excitable tone. Want people to lean in and pay attention? Slow down and speak quietly. Varying the way you speak gives your talks texture and can help you hold people’s attention for longer.

Another thing you can use is the physical space. While most people (including myself) feel safe and comfortable behind the lectern, the best speakers use the entire stage to good effect, walking to the front of the stage to address the audience in a more human way or using different sides of the stage to indicate different timelines or parts of a conversation.

Storytelling is an art, so consider starting your talk in a way that grabs your audience’s attention.

This generally isn’t a 20-minute bio of who you are and why you deserve to be on the stage. One of the most interesting talks I ever saw started with one such lengthy bio causing a third of the audience to get up and walk out. I felt really bad for the speaker — who was visibly knocked — so I stuck with it, and I’m so glad I did! The talk turned out to be amazing once all the necessary cruft was removed.

“When presenting at work or [at a conference or] anywhere else, never assume the audience has pledged their undivided attention. They have pledged maybe 60 seconds and will divide their attention as they see fit after that. Open accordingly.”

Mike Davidson

A little trick I like to use is to start my talk in the middle of the story: “So all of a sudden, my air cut out. I was in a cave, underwater, in the pitch black, and with less than 20 seconds of air in my lungs.” Suddenly the whole audience will stop looking at their phones. “Wait, what?” they’ll think. What’s happening? Who is this person? Where are they? How did they get there? And what the hell does this have to do with design? You’ve suddenly created a whole series of open questions which the audience desperately wants closed, and you’ve just bought yourself five minutes of their undivided attention where you can start your delivery.

Taking too long to get to the meat of the talk is a common problem. In fact, I regularly see speakers who have spent so long on the preamble that they end up rushing the truly helpful bits. One of the reasons folks get stuck like this is that they feel the need to bring everybody up to the same base level of knowledge before they jump into the good stuff.

Instead, it’s much better to assume a base level of knowledge. If the talk stretches your audience’s knowledge, that’s fine. If it goes over some people’s heads, it might encourage them to look stuff up after the event. However, if you find yourself teaching people the absolute basics, there’s a good chance the more experienced members of the audience will zone out, and capturing their interest will become that much harder later on.

When speakers don’t give themselves enough time to prepare a good narrative, it’s easy to fall back on tried and tested patterns. One of these is the “listicle talk” where the speaker explains, “Here are twelve things I think are important, and I’m going to go through them one by one.” It’s a handy formula, but it makes people super-conscious of the time. (“Crikey, they’re still only at number five! I‘m not sure I’m going to make it through another seven of these points.”)

In a similar vein are talks that are little more than a series of bullet points that the speaker reads through. The problem is that the audience is likely to read through them much quicker than the speaker, so people basically know what you’re going to say in advance. As such, keep these sorts of lists in your speaker notes and instead pick some sort of title or image that illustrates the points you’re about to make.

Tame The Nerves

Public speaking is unnatural for us, so everybody feels some level of stress. I have one friend who is an absolutely amazing speaker on stage — funny, charming, and confident — but an absolute wreck moments before. In fact, it’s fairly common before going on stage to think, “Why the hell am I doing this to myself?” only to come off the stage 45 or 60 minutes later thinking, “That was great. When can I do it again!”.

One way to minimize these nerves is to memorize the first five minutes of your talk. If you can go on the stage with the first five minutes in the bag, the nerves will quickly subside, and you’ll be able to ease into your presentation some more. This is another reason why starting with a story can be helpful, as they’re easy to remember and will give you a reasonable amount of creative license.

A sure way to tame the nerves is to feel super-prepared and practiced; as such, it’s worth reading your talk out loud a bunch of times before you deliver it to an audience. It’s amazing how often something sounds logical when you say it in your head, but it doesn’t quite flow properly when said out loud. Practicing in front of people is very helpful, so consider asking friends or colleagues if you could practice on them. Also, consider doing a few dry runs with a local group before getting on a bigger stage. The more you know your material, the less nervous you’ll feel.

While some speakers like to brag about how little prep they’ve done or how little sleep they’ve got the night before, don’t get tricked into thinking that this is the standard approach. Often these folks are actually very nervous and are saying things like this in an attempt to preempt or excuse potential poor performance.

It reminds me of the kids at school who used to claim they didn’t study and revise the material and ended up getting a B-. They almost certainly did some revision, albeit probably not enough. But this posturing was actually a way for them to manage their own shortcomings. “I bet I could have gotten an A if I’d put some extra work in.” Instead, make sure you’re well prepared, well rested, and set yourself up for success.

It’s worth mentioning that most people get nervous during public speaking, even if they like to tell you otherwise. As such, nerves are something you just need to get better at managing. One way to do this is to re-frame “nerves” (which have negative connotations) to something more positive like “excitement.” That feeling of excitement you get before giving a talk can actually be a positive thing if you don’t let it get out of hand. It’s basically your body’s way of getting you ready to perform.

However, this excess energy can bleed out in some less helpful ways, such as the “speaker square dance.” This is where speakers either shift their weight from one foot to the other or take a step forwards, a step to the side, and a step back, like some sort of a caged zoo animal. Unfortunately, this constant shifting can feel very unsettling and distracting for the audience, so if you can, try to plant your feet firmly and just move with deliberate intent when you want to make a point.

It’s also great if you can try to minimize the “ums” and “ahs.” People generally do this to give themselves pause while they’re thinking about the next thing they want to say. However, it can come across as if you are a little unprepared. Instead, do take actual breaks between concepts and sentences. At first, it can feel a bit weird doing this on stage, but think of it as an aural whitespace, making it easier for your audience to take in one concept before transitioning on to the next.

Note: I feel comfortable calling these behaviors out as they’re both things I personally do, and I’m working on fixing them — with mixed results so far.

Avoid The Clichés

At least once during every conference I attend, a speaker will say something jokingly along the lines of “I’m the only one standing between you and tea/lunch/beer.” It’s meant as a wry apology, and the first time I heard it, I gave a gentle chuckle.

However, I’ve been at some conferences where three speakers in a row had made the same joke. Apart from a lack of originality, this also shows that the speakers haven’t actually been listening to the other talks, probably because they’ve been in their room or backstage, tweaking their slides. Sometimes this is necessary, but I always appreciate speakers who have been engaged with the content, make references to earlier talks, and don’t trot out the same old clichés as the previous speakers.

Note: I should also add that personally, I find the joke (“the only thing between you and beer”) by the last speaker of the day somewhat problematic, as it implies that people are here for the drink rather than the conference content, and because it also somewhat normalizes overconsumption.

One thing some speakers do in order to calm their nerves (and also to increase people’s engagement as a side effect) is to use audience participation. If you get people from the audience interacting with each other for five minutes, it takes some of the pressure and focus off of you. It’s also five fewer minutes of content you need to prepare.

However, I see audience participation go wrong far more often than it goes right. This happens especially in front of Brits and Northern Europeans, who would rather curl up into a ball and die than risk the social awkwardness of talking to their neighbors.

I remember seeing one American speaker walk up the aisle at a European conference encouraging the audience to whoop and cheer while they high-fived everybody or at least attempted to high-five everybody. Although this sort of hype-building might have worked well in Vegas, the assembled audience of Northern Europeans found the whole episode deeply embarrassing, and the speaker never truly recovered for the rest of the talk.

And, on the opposite side of things, if you do get your audience interacting, it can be quite hard to get them to stop a few minutes later! I have seen far too many speakers asking people to introduce themselves to their neighbors, only to cut them off 30 or so seconds later. So if you have such an activity planned, make sure you leave enough time for it to become a meaningful connection, and have a strategy on how you’re going to bring people’s focus back to you.

Another (negative) thing I see a lot of speakers do is make jokes about how they didn’t write their talk till last night or didn’t get to bed till late because they were out drinking. While it’s good to appear to be vulnerable and human, if played wrong, the message you might actually be sending is that you don’t really care about the audience, so be careful.

How To Get Invited Back?

Organizing conferences can be stressful work. You’re trying to coordinate with a bunch of different people with widely different workloads, communication styles, and response times. People are super quick to say “Yes!” to a speaking gig, but then they might go dark for months on end. This is really difficult if you’re trying to get enough information to launch your event site and start selling tickets. It’s even harder if you’re trying to organize things like travel and accommodation and you are seeing the price of everything going up.

As such, you can make conference organizers’ lives a lot easier by responding to their emails in a timely manner, sending them your talk descriptions, bio information, and headshots when asked, and confirming or booking your travel far enough in advance that the prices don’t double.

Speakers who are a pleasure to work with get recommended and invited back. Speakers who don’t respond to emails, send in overly short descriptions, or leave booking travel to the last minute often don’t. In fact, I’m a member of several conference organizer Slack groups where this sort of behavior is regularly talked about, causing invites for certain speakers to dry up quickly.

This is, sadly, another reason why you see the same speakers talking at events time and again. Not necessarily because they’re the best speakers, but because they’re reliable and don’t give the organizers a heart attack.

Conclusion

I know this article has covered a lot of public speaking dos and don’ts. But before wrapping things up, it’s worth mentioning that speaking is also a lot of fun. It’s a great way to attend conferences you might not have otherwise been able to afford; you get to meet speakers whose work you might have been following for years and learn a ton of new things. It also provides a great sense of personal accomplishment, being able to share what you’ve learnt with others and “pay it forward.”

While it’s easy to assume that all speakers are extroverts, (the art of) speaking is actually surprisingly good for introverts, too. A lot of people find it quite awkward to navigate conferences, go up to strangers, and make small talk. Speaking pretty much solves that problem as people immediately have something they can talk to you about, so it’s super fun walking around after your talk and chatting with people.

All that being said, please don’t feel pressured into becoming a speaker. I think a lot of people think that they need to add a “conference speaker” to their LinkedIn bio in order to advance their careers. However, some of the best, most successful people I know in our industry don’t speak at public events at all, so it’s definitely not an impediment.

But if you do want to start sharing your experience with others, now is a good time. Sure, the number of in-person conferences has dropped since the start of the pandemic, but the ones that are still around are desperate to find new, interesting voices from a diverse set of backgrounds. So if it’s something you’re keen to explore, why not put yourself out there? You’ll have nothing to lose but potentially a lot to gain as a result.

Further Reading

Here are a few additional resources on the topic of speaking at conferences. I hope you will find something useful there, too.

  • “Breaking into the speaker circuit,” by Andy Budd
    Here are some more thoughts on breaking into public speaking by yours truly. As somebody who both organizes and often speaks at events, I’ve got a good insight into the workings of the conference circuit. This is probably why I regularly get emails from people looking for advice on breaking into the speaking circuit. So rather than repeating the same advice via email, I thought I’d write a quick article I could point people to.
  • “Confessions of a Public Speaker,” by Scott Berkun
    In this highly practical book, author and professional speaker Scott Berkun reveals the techniques behind what great communicators do and shows how anyone can learn to use them well. For managers and teachers — and anyone else who talks and expects someone to listen — the Confessions of a Public Speaker provides an insider’s perspective on how to effectively present ideas to anyone. It’s a unique and instructional romp through the embarrassments and triumphs Scott has experienced over fifteen years of speaking to audiences of all sizes.
  • “Demystifying Public Speaking,” by Lara Hogan (A Book Apart), with foreword by Eric Meyer
    Whether you’re bracing for a conference talk or a team meeting, Lara Hogan helps you identify your fears and face them so that you can make your way to the stage, big or small.
  • “Slide:ology: The Art and Science of Presentation Design,” by Nancy Duarte
    No matter where you are on the organizational ladder, the odds are high that you’ve delivered a high-stakes presentation to your peers, your boss, your customers, or the general public. Presentation software is one of the few tools that requires professionals to think visually on an almost daily basis. But unlike verbal skills, effective visual expression is not easy, natural, or actively taught in schools or business training programs. This book fills that void.
  • “Presentation Zen: Simple Ideas on Presentation Design and Delivery,” by Garr Reynolds
    A best-selling author and popular speaker, Garr Reynolds, is back in this newly revised edition of his classic, best-selling book in which he showed readers there is a better way to reach the audience through simplicity and storytelling and gave them the tools to confidently design and deliver successful presentations.
  • “How To — Public Speaking,” a video talk by Ze Frank
    You may benefit a lot from this video that Ze Frank made several years ago.
  • “Getting Started In Public Speaking: Global Diversity CFP Day,” by Rachel Andrew (Smashing Magazine)
    The Global Diversity CFP Day (Call For Proposals, sometimes also known as a Call For Papers) is aimed to help more people submit their ideas to conferences and get into public speaking. In this article, Rachel Andrew rounds up some of the best takeaways along with other useful resources for new speakers.
  • “Getting The Most Out Of Your Web Conference Experience,” by Jeremy Girard (Smashing Magazine)
    To be a web professional is to be a lifelong learner, and the ever-changing landscape of our industry requires us to continually update and expand our knowledge so that our skills do not become outdated. One of the ways we can continue learning is by attending professional web conferences. But with so many seemingly excellent events to choose from, how do you decide which is right for you?
  • “Don’t Pay To Speak At Commercial Events,” by Vitaly Friedman (Smashing Magazine)
    The state of commercial web conferences is utterly broken. What lurks behind the scenes of such events is a widely spread, toxic culture despite the hefty ticket price. And more often than not, speakers bear the burden of all of their conference-related expenses, flights, and accommodation from their own pockets. This isn’t right and shouldn’t be acceptable in our industry.
  • “How to make meaningful connections at in-person conferences,” by Grace Ling (Figma Config)
    This is an excellent Twitter post about how to make meaningful connections at in-person conferences — a few concise, valuable, and practical tips.
]]>
hello@smashingmagazine.com (Andy Budd)
<![CDATA[Shines, Perspective, And Rotations: Fancy CSS 3D Effects For Images]]> https://smashingmagazine.com/2023/07/shines-perspective-rotations-css-3d-effects-images/ https://smashingmagazine.com/2023/07/shines-perspective-rotations-css-3d-effects-images/ Fri, 07 Jul 2023 11:00:00 GMT We all agree that 3D effects are cool, right? I think so, especially when they are combined with subtle animations. In this article, we will explore a few CSS tricks to create stunning 3D effects!

“Why do we need another article about CSS 3D effects… aren’t there already a million of those?” Yes, but this one is a bit special because we are going to work with the smallest amount of HTML possible. In fact, this is the only markup we will use to craft some pretty amazing CSS effects for images:

<img src="" alt="">

That’s it! All we need is an <img> tag. Everything else will be done in CSS.

Here’s how it’s going to work. We are going to explore three different effects that are not linked to each other but might borrow a little from one another. You don’t need to read the entire article in one sitting. Actually, I suggest reading one section at a time, taking time to understand the concepts and what the underlying code is doing before moving on to another effect.


CSS 3D Shine

For the first effect, we are going to add a shine animation to the image, as well as a slight rotation when hovered.

The green box illustrates the gradient where the blue lines define the color stops we used. Initially, it’s placed at 100% 100%, and on hover, we slide it to 0 0. The slide effect will move the diagonal part of the gradient (the opaque part) along the image to create the shine effect.
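
To make that idea concrete, here is a minimal sketch assuming a mask-based approach; the angle, colors, and sizes below are illustrative placeholders rather than the demo’s actual values:

img {
  /* oversized diagonal gradient: opaque, a softer band in the middle, opaque again */
  mask: linear-gradient(-60deg, #000 40%, #0005 40% 60%, #000 60%)
        100% 100% / 300% 300% no-repeat;
  transition: mask-position .5s, transform .5s;
}
img:hover {
  mask-position: 0 0; /* slide the band across the image to create the shine */
  transform: perspective(400px) rotateY(8deg); /* the slight rotation on hover */
}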

Here is the full demo again. I’m even including a second variation for you to tear apart and investigate how it works.

The clip-path defines the clipped area, and we need that area to remain fixed. That’s why we added a translation on hover to move the image in the opposite direction of the clip-path.
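
As a simplified sketch of that principle (with made-up values rather than the demo’s actual polygon), clipping a strip off one side and moving the clip to the other side on hover, while translating the image by the same amount in the opposite direction, keeps the visible area in place:

img {
  --g: 30px; /* hypothetical shift amount */
  clip-path: inset(0 var(--g) 0 0); /* hide a strip on the right */
  transition: clip-path .3s, translate .3s;
}
img:hover {
  clip-path: inset(0 0 0 var(--g)); /* hide the strip on the left instead */
  translate: calc(-1 * var(--g)) 0; /* counter-move so the clipped area appears fixed */
}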

Here’s how that works. First, we add some padding to the top and the bottom of the image and apply an outline that is semi-transparent black.

Second, we apply a negative outline-offset so that the outline covers the image on the left and right sides but leaves the top and bottom alone:

img {
  --d: 18px;  /* depth */

  padding-block: var(--d);
  outline: var(--d) solid #0008;
  outline-offset: calc(-1 * var(--d));
}

Notice that I have created a variable, --d, that controls the thickness of the outline. This is what gives the image depth.

The last step is to add the clip-path. We need a polygon with eight points for that.

The red points are fixed, and the green points are ones that we will animate to reveal the depth. I know it’s far from a 3D box, but this next visual, where we add the rotation, gives a better illustration.

Initially, the image is rotated with some perspective. The green points on the right are aligned with the red ones. Thus, we hide the outline on that side to keep it visible only on the left side. We have our 3D box with the depth on the left.

On hover, we move the green points on the left while rotating the image. Halfway through the animation, all the green points are aligned with the red ones, and the rotation angle is equal to 0deg, hiding the outline and giving the image a flat appearance.

Then, we continue the rotation, and the green points on the right move while the left ones remain in place. We get the same 3D effect but with the depth on the right side.

Bear with me because the next block of code is going to look really confusing at first. That’s due to a few new variables and the eight-point polygon we’re drawing on the clip-path property.

@property --_l {
  syntax: "<length>";
  initial-value: 0px;
  inherits: true;
}
@property --_r {
  syntax: "<length>";
  initial-value: 0px;
  inherits: true;
}

img {
  --d: 18px;  /* depth */
  --a: 20deg; /* angle */
  --x: 10px;

  --_d: calc(100% - var(--d));
  --_l: 0px;
  --_r: 0px;

  clip-path: polygon(
    /* The two green points on the left */
    var(--_l) calc(var(--_d) - var(--x)),
    var(--_l) calc(var(--d)  + var(--x)),

    /* The two red points on the top */
    var(--d) var(--d),var(--_d) var(--d),

    /* The two green points on the right */
    calc(var(--_d) + var(--_r)) calc(var(--d)  + var(--x)),
    calc(var(--_d) + var(--_r)) calc(var(--_d) - var(--x)),

    /* The two red points on the bottom */
    var(--_d) var(--_d),var(--d) var(--_d)

  );
  transition: transform .3s, --_r .15s, --_l .15s .15s;
}

/* Update the points of the polygon on hover */
img:hover{
  --_l: var(--d);
  --_r: var(--d);
  --_i: -1;
  transition-delay: 0s, .15s, 0s;
}

I’ve used comments to help explain what the code is doing. Notice I am using the variables --_l and --_r to define the position of the green points. I animate those variables from 0 to the depth (--d) value. The @property declarations at the top allow us to animate the variables by specifying the type of values they are (<length>).

Note: Not all browsers currently support @property. So, I’ve added a fallback in the demo with a slightly different animation.

After the polygon is drawn on the clip-path property, the next thing the code does is apply a transition that handles the rotation. The full rotation lasts .3s, so the green points need to transition at half that duration (.15s). On hover, the polygon points on the left move immediately (0s) while the right points move at half the duration (courtesy of a .15s delay). When we leave the hovered state, we use different delays because we need the right points to move immediately (0s) while the left points move at half the duration.

What’s up with that --x variable, right? If you check the first image that I provided to illustrate the clip-path points, you will notice that the green points are slightly shifted from the top and bottom edges, which is logical to simulate the 3D effect. The --x variable controls how much shifting takes place, but the math behind it is a bit complex and not easy to express in CSS. So, we update it manually based on each case until we get a value that feels right.

That gives us our final result!

See the Pen 3D images with hover effect by Temani Afif.

Wrapping Up

I hope you enjoyed — and perhaps were even challenged by — this exploration of CSS 3D image effects. We worked with a whole bunch of advanced CSS features, including masks, clipping, gradients, transitions, and calculations, to make some pretty incredible hover effects for images that you certainly don’t see every day.

And we did it in a way that only needed one line of HTML. No divs. No classes or IDs. No pseudo-elements. Just a single <img> tag is all we need. Yes, it’s true that more markup may have made the CSS less complex, but the fact that it relies on a plain HTML element means the CSS can be used more broadly. CSS is powerful enough to do all of this on a single element!

I’ve written extensively about advanced CSS styles for images. If you’re looking for more ideas and inspiration, I encourage you to check out the other articles I’ve published on the topic.

I also run a site called CSS Tip that explores even more fancy effects — subscribe to the RSS feed to keep up with the experiments I do over there!


]]>
hello@smashingmagazine.com (Temani Afif)
<![CDATA[The UX Of Flight Searches: How We Challenged Industry Standards]]> https://smashingmagazine.com/2023/07/reimagining-flight-search-ux/ https://smashingmagazine.com/2023/07/reimagining-flight-search-ux/ Thu, 06 Jul 2023 10:00:00 GMT The topic of flight search has been on our workbench before. Back in 2015, part of us worked on the design strategy of Lufthansa Group. In 2017, airberlin became one of our first clients. Together with the team, we redesigned their digital world from scratch: flight search, booking process, homepage, and much more.

What was considered too progressive in 2016 celebrated its first successes in 2017. Six years later, in 2023, it is now being expanded as a case study by DUMBO.

Note: This is a fictitious case study undertaken on our own initiative and was neither developed nor launched. With this study, we want to question habits, break down barriers and offer new food for thought to improve interactions.

A Flight Search Observation

If you, like most, have searched for a flight at some point, you are familiar with the usual song and dance of playing with the search criteria in order to score an optimal search result. If I change the travel date, will it be cheaper? If I depart from a different airport, will the flights be less full? As a result, the hunt is a never-ending combination of viewing results, making further refinements, and constantly changing the search criteria. So, do you see yourself here? Twenty-nine students who took part in the study “Online Search Behaviour in the Air Travel Market: Reconsidering the Consideration Set and Customer Journey Concepts” certainly did.

According to the 2017 study, flight search algorithms cover a dynamic solution space with often more than a thousand possibilities that can change rapidly, cutting it down based on user criteria. Users, on the other hand, go through three rounds of refinement on average, filtering and refining search criteria, comparing possibilities, and making trade-off judgments. The analyzed flight searches are insufficient to support the final decision: users must still make judgments and trade-offs depending on their personal preferences and priorities. The study emphasizes the importance of improved interfaces with decision support up to the final decision in order to improve the flight search experience.

Frontstage: What We Can See

When we observe people booking flights, we notice unpleasant side effects and interesting user hacks.

  • Searching for the right flight is extremely stressful.
    High prices, limited availability, artificial scarcity, a plenitude of options, as well as an ingrained penchant for cost traps and loopholes.
  • The flight gets more expensive with every search.
    Opaque pricing and the feeling of being on the airline’s hook make travelers suspicious of cookies and tracking.
  • Flights are like looking for a needle in a haystack.
    The route from Frankfurt to Honolulu alone offers 8,777 different flight combinations. To get a handle on what’s on offer, travelers turn to third-party providers like Swoodoo to combine different routes or Google to find offers from the surrounding area and many more.
  • Long waiting times are nerve-wracking.
    Prices are recalculated, and availability is checked for every search query. In our test, a query usually takes 10 seconds. This always leads to long waiting times in the observed search behavior.
  • The quest for the best flight deal.
    The most important decision criterion for a flight is still the price. But every search parameter influences it. The lack of clear price communication reinforces the feeling of opacity.
  • The feeling of having paid too much for the flight.
    When it comes to flights, most travelers are confronted with “from” prices. However, these are only available on certain flights and in limited numbers. What if such flights are not available? This leads to negative anchoring: what seemed affordable at the beginning now seems all the more expensive.

Backstage: What We Don’t See

It takes a look behind the curtain to find out that there are numerous technical and business constraints that have an enormous impact on flight search. Rather than years of usability engineering, the search experience is largely determined by third-party booking systems, dynamic pricing, and cost-per-request mechanics.

  • 3rd Party Booking System.
    Behind most flight searches runs a reservation system called Amadeus. This is where millions of customers purchase their tickets. Amadeus largely determines which data points are available and how the interface is designed. Airlines use these systems and can only exert limited influence on improving them.
  • Dynamic Pricing.
    Dynamic pricing is used to set the price of a product based on current market conditions. Prices fluctuate in real-time based on current data. This includes data on customer booking behavior, competitor airline prices, popular events, and a variety of other factors that affect product demand and necessitate price adjustments.
  • Cost per request.
    In most cases, searches are charged per request. To keep costs down, airlines want to reduce search requests. This leads to avoiding both pre-emptive and iterative queries.

Reframing The Problem

The classic flight search pattern inevitably leads to a frustrating trial-and-error loop.

Flight searches are structured in such a way that it is highly unlikely that a customer can find a suitable flight straight away because it presupposes that the traveler has entered all price-relevant information before submitting the search query.

The dilemma: this price-relevant information affects availability, travel time, and service. At the same time, they are factors for the traveler that can be changed depending on the result and personal preferences and flexibility. As a result, travelers develop their own user hacks to compare different search parameters and weigh the trade-off between price and convenience.

How can we give travelers a better flight search experience? Our pitch is The Balancing Act: a guided dialogue between traveler and airline. Strap in — we’re taking a deep dive.

The Flight Search Redesign: Introducing The “Balancing Act”

What makes a search successful? It’s an increasingly important question in the age of global travel and its limitless possibilities. We focus on finding your personal solution. It puts the traveler, their occasion, and their budget at the center of the interaction and looks at how well the flight offer fits. To do this, we fundamentally change the tailoring of the interaction with travelers. We break down the search form into individual tasks and change the sequence of interactions. This allows a more balanced approach between friction and progress.

We Will Take You There. But Where To?

Let’s start the flight search with the only question whose answer is not up for discussion: Where to? Knowing where you want to go, we might be able to help you to weigh up every further detail in terms of cost and convenience. This will allow you to make conscious decisions.

DUMBO (2023). Enter flight search by entering the destination. [Design Mockup] (Large preview)

Find The Perfect Connection

We will find the best departure point for you. Depending on where you want to go, where you plan to stay, and at what prices and conditions, we might be able to offer alternative routes that are easy on your wallet and get you to your destination comfortably and quickly.

DUMBO (2023). Origin airport selection, including alternatives. [Design Mockup] (Large preview)

Times That Suit You

Airplanes are almost always in the air, but they are not always the same. For some journeys, you are time-bound; for others — not. Best-price calendars, travel times, expected load factors (and much more) might help you to find the best flight for your journey.

DUMBO (2023). The date picker includes different views to highlight data according to personal preferences. [Design Mockup] (Large preview)

Without Getting In Your Way

We will react as quickly as possible. Even before we talk about the number of passengers, enter access codes, or create multi-stop flights, you should have an idea of whether there is a suitable flight for you.

DUMBO (2023). Flight plan with prices being subsequently loaded. [Design Mockup] (Large preview)

This Is How We Get There: Step By Step

To redesign the interface, we need to uncover the structure of the interaction moment. For this, we use the Interaction Archetypes framework to help us align our design with the underlying usage intention — the strongest driver for user interaction.

Task

The task is to find a suitable flight. We see that this usually takes several attempts and is achieved with the help of different search platforms and flight brokers. This shows that we are clearly in a weighing phase when searching for a flight. Different flights, routes, and times are weighed against travel planning criteria as well as personal preferences and limiting factors of the traveler.

Intention Of Use

The intention of use is a key determinant of interaction. The better we tailor our interface to the intention of use, the higher the probability that the interaction will be successful. Research findings show that usage intentions for digital applications can be assigned to three categories: “Act,” “Understand,” and “Explore.”

In our case, we can clearly attribute the flight search to the “Act” usage intention: users have a specific task and want to make progress in completing that task as quickly as possible. Flight search is characterized by a clear goal. Travelers want to get an overview of the available flights to find the best option for their specific solution space. They take a structured approach and selectively change search parameters to uncover inconsistencies and explore the limits of what is available.

Success

Changing various parameters shows that the solution space for this task is multi-dimensional. And not just that, on closer inspection, it becomes clear that a flight search is a hierarchical step process: a so-called “Analytic Hierarchy Process.” We assume that decision-making tasks are sequential. The traveler works their way from decision level to decision level. All levels of the flight search are causally related.

Goal

Flight search is inextricably linked to flight booking, which in turn is linked to travel to and from the destination. The primary goal of travelers is “to arrive.” Here, we observe the same causal relationship that we have already seen with the success factors. We are also dealing with a hierarchical step process.

This means that before they start looking for a flight, travelers have already considered the destination, the time of travel, the duration of the trip, and the travel costs. Travelers, therefore, usually have a kind of hidden agenda, which they consciously or subconsciously review in the course of their flight search.

Hypotheses

If we consider the problem and the context in which the interface is used, three hypotheses emerge. They open up a solution space for the flight search:

  • If we design the search along the decision levels, travelers can make faster and more confident decisions at each stage of decision-making.
  • If travelers can already weigh their options in terms of price and convenience at the moment of entry, the first search results are likely to be suitable, thereby reducing the re-submission of search queries.
  • If we show partial information as soon as it is available, travelers can quickly scan for suitable flight results, thereby reducing friction and the abandonment rate.

The Challenge

What we currently see in the airline and flight industry space is that search assumes its parameters are fixed criteria. Accordingly, a “successful” search is one that delivers a result based on the ten details entered upfront. In interaction, however, travelers behave in a way that contradicts this assumption.

The decision-making levels through which travelers approach their destination provide information about it. They are hierarchical and causally related.

Each individual decision is the result of a trade-off between price and convenience. A successful search is, therefore, the smallest compromise.

So if we create space for trade-offs through interaction, we should be able to make the flight search more targeted to the traveler’s needs. This raises three major design opportunities:

  1. How might we utilize travelers’ decision-making levels to speed up the process?
  2. How might we help travelers balance price and convenience to reduce search queries?
  3. How might we deliver results to travelers faster to reduce friction and abandonment rates?

Solutions

From Static To Sequential

We say goodbye to the predominant route indication of a flight search and ask in the first step: Where do you want to travel to? We quickly realized that the “where to” question fits the mental model of travelers and can serve as a springboard for goal-oriented interaction. Only if we bring travelers closer to their destination can the airline make a relevant offer.

Four steps

But that’s not all: We completely remove the search form and lead travelers to their flight in a dialogue. Following the decision-making levels, we ask for four pieces of information one after the other, on the basis of which we can generate a suitable flight plan:

  1. Specify the destination.
  2. Specify the origin.
  3. Select the departure time.
  4. Select the return time.

Four Moments Of Success

Each entry is given our full attention. This reduces the cognitive load and creates space for content, even on small devices. This, in turn, is only possible if the effort per input is less for the user than the added value generated in each case.

If we orchestrate this information along the decision-making levels of travelers and understand their causality, we can consciously bring about partial decisions. Meanwhile, on the way to the individual solution space, we create four moments of success:

  1. Can we fly to our destination? Check!
  2. Can we fly from a suitable departure point? Check!
  3. Can we fly out at the right time? Check!
  4. Can we fly back at the right time? Check!

More Later

We hold off on any further questions in favor of showing the offer. All other criteria can be used to adjust the results while maintaining the flight schedule. These criteria are preselected based on the most frequent flight bookings or personal flight behavior.

  • Number of adults,
  • Number of children,
  • Number of infants,
  • Access codes to selected flights,
  • Selection of class.

From Passive To Proactive

To make well-informed decisions, travelers need to be aware of the consequences of their choices in the flight booking process. This means they need to understand the impact of their partial decision (date, departure location, airport) on the expected outcome. The better they can do this, the easier it is for them to weigh up. Ultimately, the best flight is the result of a personal trade-off between convenience and cost.

The Best Departure Point For You

If you live in western Germany, there are five possible departure points within a 90-minute radius. Frankfurt and Düsseldorf are two major hubs among them. So the departure airport is extremely flexible and raises questions:

  • Which departure airport comes into question?
  • Which airline is preferred?
  • What is an acceptable price range?
  • How mobile is one on the way to the airport?

Based on geolocation and the route network, conclusions can be drawn about a suitable departure airport depending on the destination. To do this, we look at nearby airports and rate them according to comfort and price. In addition, the travel time and the airline also play a role.

And that’s not all. We could place targeted offers, which could allow the airline to drive competition or control the load factor across the organization. In this way, attractive incentives can be created with the help of discounts, therefore positively influencing the actions of travelers.

The Best Time To Fly For You

The travel period is probably the most opaque parameter for travelers, yet it is the most essential factor in determining airfare and availability. A single day earlier or later can quickly add up to several hundred euros. This can have a critical impact on travel planning.

We don’t want travelers to have to correct their search later, so we add additional indicators to the date selection. First and foremost, there is a price display that is broken down daily for outbound and return flights.

Travel planning does not always leave room for maneuver. Therefore, early indicators of availability are all the more important. For this purpose, we mark days on which the destination is not served as well as days with particularly high load factors. In this way, we can set impulses at an early stage of travel planning to avoid negative booking experiences.

The Best Ticket For You?

A “One-Way Flight” can be more expensive than a “Return Flight.” We consider the option “One-Way Flight” within the date selection. This is because it is an alternative to a return flight. And the associated price is an important piece of information to consider when travelers are weighing options.

Even before the flight plan has been loaded, we put all options on the table. This is how we offer maximum price transparency.

Disclaimer: Multi-stop flights were not considered in this case study.

From Accurate To Instant

If we communicate flights and their prices before checking whether a flight is still available or its price has changed, there is a risk that the offer will have to be corrected. Naturally, we all want to make statements we can stand behind. But providing volatile flight data as quickly as possible requires a certain tolerance for errors in communication.

Take the following example: a flight was priced at 300 euros, or so it was the day before. In the meantime, the prices have changed, and the flight now costs 305 euros. As a result, the assumption based on that information was wrong and has to be corrected, to the customer’s disadvantage.

Annoying. But we were still able to give a price indication early on. After all, the flights before and after might cost 600 euros, making them far less relevant than a flight for 305 euros, even if the traveler had assumed 300 euros.

The communication error is less important than the added value at the moment of interaction. We can only overcome technical and business constraints with the help of estimates and assumptions.

Caching

In order to achieve price transparency, we have to refrain from requesting price calculations. Due to the costs per request and the loading times, it is not possible for us to communicate prices as they currently are. Therefore, we have to cache prices from previous searches, at least until a flight selection can be made.

This could also mean accepting that prices may change once the final flights are selected. The requirement for perfectly accurate price communication is sacrificed in favor of relevant selection criteria and fast loading times. After all, price is typically the most important factor in weighing any partial decision.

Flight Plan

To speed up the interaction, we need to put the availability check at the end. The route network has been determined; the flight plan has been drawn up. With the route information and travel times, we should have the corresponding flight plan immediately available. The availability check can be either downstream or simultaneous. In this way, we enable systems to communicate without being a hindrance to travelers.

Geolocation

Geolocation data can be used to draw conclusions about the departure airport. We do not necessarily have to use the geolocation API for this. It should also be possible to achieve a good-enough location estimate with the help of an IP address lookup so that we can immediately create added value. Once we have identified the airports in the vicinity, we can evaluate potential connections in terms of cost and convenience.

Overcoming Limits

Anyone who has ever had to search for and book a flight surely knows: it is nerve-wracking and time-consuming. Before you can even start the search, you have to enter ten details in the search form. This means that in the very first stage of the search, you’ve already had to make ten decisions. Unfortunately, and often, only one thing is certain: the destination of the journey.

Throughout this thought process, we have shown that the classic flight search pattern is broken, often because external factors such as technical and business constraints shape the flight search experience. We have also shown that airlines and search providers can break out of this pattern by entering into a dialogue with the user and leading them from one decision level to the next until the result fits their specific needs and goals.

If you found this approach useful or interesting, I recommend our guide to developing your own Interaction Blueprint. It is based on our “Interaction Archetypes” framework that allows you to strategically illuminate a user’s behavioral patterns, as well as their interactions with digital interfaces. It has greatly transformed and improved our design process. We hope that it could transform your design process, too.


]]>
hello@smashingmagazine.com (Robert Goesch)
<![CDATA[Sustainable Design Toolkits And Frameworks]]> https://smashingmagazine.com/2023/07/sustainable-design-toolkits-and-resources/ https://smashingmagazine.com/2023/07/sustainable-design-toolkits-and-resources/ Tue, 04 Jul 2023 09:00:00 GMT “Sustainable” design is a paradigm that emphasizes the impact that design practices and workflows have on the environment with the goal of reducing carbon emissions. The design decisions we make are reflected in our planet’s climate, from the energy consumption of the tools we use to how the products we build interact with the environment and plenty of other things in between. In this collection, we compiled resources to help you understand the principles of sustainable design and how to integrate them into the way we work and the things we make.

Design For Sustainability

The EU Science Hub’s Sustainable Product Policy estimates that over 80% of all product-related environmental impacts are determined during the design phase. But how can design teams ensure that sustainability is at the core of every design choice they make? To help its designers develop sustainable design habits, IBM published “IBM Design for sustainability.”

At the heart of the framework is the idea that the user, community, and social value should outweigh any negative environmental and social impact in the present and the future. To achieve this vision, experiences need to be inclusive, easy to learn and use, and efficient for both users and overall power consumption.

The sustainability checklist is part of the framework, and it gives practical tips for optimizing designs to meet these goals. It’s not rocket science, but the checklist does offer useful considerations that will help improve performance, speed, and responsiveness.

Sustainability Methods and Design Principles

The Sustainability Guide from SVID is an overarching framework for sustainable design and development practices that contains sections wholly dedicated to methods and design principles that are centered around sustainable practices.

The design section illustrates the system-wide lifecycle of the design process, describing it as a circular system where everything in a product design is interconnected and linked by environmental criteria that are embedded at all stages.

The methods section is an archive of tools, resources, case studies, and expert advice that can be used to educate a team, as well as kickstart a team into sustainable environmental practices.

Sustainable Design Strategies

The crux of sustainable design strategies, according to Leyla Acaroglu, is ensuring that the tools we use in a design workflow and how we use them today do not have a negative impact on the planet in the future.

What Leyla does in this extensive Medium post is curate an ecodesign strategy set that covers core considerations for product design that build sustainability into the process, from manufacturing and recyclability to efficiency and modularity. By including these considerations in the design lifecycle, it is possible to develop products and services that reflect sustainable practices, such as a product’s ability to dematerialize, how easily it can be recycled, how long it lasts, whether it can be disassembled by customers, and to what extent it can be repurposed for other uses.

Sustainable Web Design Practices

Is the admin experience as easy and intuitive as the front-end experience? Is the message useful for your target audience? Could a Progressive Web App be an efficient solution? A lot of questions need to be asked when you want to deliver digital products and services that respect the principles of the Sustainable Web Manifesto. The site Sustainable Web Design helps you find the right sustainability strategy for your project.

The strategies are divided into different categories: design, client and project ethos, content and marketing, development, hosting, and business operations. In each category, you’ll find questions worth considering and an explanation of why it matters. Links to further reading resources let you dive deeper into each aspect. A helpful guide that supports you on every step of the design process.

Sustainable Web Hosting Companies

According to some estimates, the impact of the Internet and our gadgets on global greenhouse emissions is similar to that of the airline industry. To speed up the transition towards a green, fossil-free Internet, there’s a question we all can ask ourselves: Are our websites hosted green?

The Green Web Foundation built a checker to help you quickly find out if your hosting provider is using green energy or compensating for its services. All you need to do is enter the URL. If you want to make the switch to a green hosting provider, the foundation also published a directory of 478 green hosting companies in 35 countries. A small step that makes a difference.

Sustainability Score Calculator

So, just how large is the carbon footprint of your website? The Internet uses electricity, of course, but it also relies on data centers that distribute information, and the energy to power each and every device that receives that information. Even a small website has a carbon footprint.

The Sustainability Score Calculator is one way to find out. Employing a methodology that takes energy-consuming attributes into account, this free calculator estimates the amount of carbon dioxide produced by a particular website. It looks at the weight of images on a page, whether web fonts are integrated, and any front-end frameworks in use, among other considerations, to inform its calculations.

The exact amount of carbon dioxide produced by a website can probably be evaluated in a number of ways, and this specific calculator makes its own assumptions. Regardless of the exact inputs behind the results, the fact that the Sustainability Score Calculator can come up with a rough estimate of a website’s carbon dioxide output on a per page view basis makes it a reasonable starting point for determining just how big of a footprint a site has on the environment.
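
To get a feel for the math behind calculators like this one, here is a minimal sketch in JavaScript. The energy and carbon intensity figures are assumptions borrowed from the commonly cited Sustainable Web Design model, not this calculator’s actual methodology:

// A rough, back-of-the-envelope CO2 estimate per page view, assuming
// ~0.81 kWh of energy per GB transferred and a global grid intensity of
// ~442 g CO2 per kWh. Both figures are illustrative assumptions; real
// calculators weigh many more factors (fonts, frameworks, caching, and so on).
function estimateCo2PerPageView(pageWeightInBytes) {
  const KWH_PER_GB = 0.81;        // energy intensity of data transfer
  const GRAMS_CO2_PER_KWH = 442;  // average grid carbon intensity
  const gigabytes = pageWeightInBytes / (1024 ** 3);
  return gigabytes * KWH_PER_GB * GRAMS_CO2_PER_KWH; // grams of CO2
}

// Example: a 2 MB page comes out at roughly 0.7 g of CO2 per view.
console.log(estimateCo2PerPageView(2 * 1024 * 1024).toFixed(2));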

Sustainable UX Design Toolkit

The Sustainable UX Design Toolkit is a resource produced by the Sustainable UX Network, a non-profit organization that has established a community of designers around sustainable environmental design practices.

The organization developed the toolkit as a Miro board that is freely available to clone into your own Miro account. Not a Miro user? You can still reference the embedded board and zoom into it to view the four-step process that walks you through concept to presentation, providing useful considerations, best practices, and even templates you can use right away.

Sustainability Nudges in UX

In the last few years, customers have become more and more aware of how important environmental friendliness and social responsibility are when making a purchase. But even with increased awareness, businesses still play a key role in informing, enabling, and encouraging sustainable behavior. In his post “7 behavioural UX approaches encouraging sustainable purchases,” Damien Lutz takes a closer look at how e-commerce businesses encourage sustainable purchases and what we can learn from them.

From Zalando’s sustainability filters and Amazon’s Climate Pledge Friendly Hub to Qantas’ Green Tier membership and sustainable shopping assistants, Damien analyzes different strategies of nudging customers towards more sustainable decisions. Based on his observations from these real-life examples, he summarizes practical behavioral UX tips that help everyone create experiences that promote sustainability. Interesting insights are guaranteed.

Green the Web Podcast

Since 2019, UX/UI designer Sandy Dähnert has been sharing her passion for a sustainable web on her site Green the Web. Last year, she started the Green the Web Podcast on all things sustainable design best practices, ecological and social user research, information architecture, user interface design, and more.

Whether it’s sustainability-infused user journey maps, UX/UI factors for a lightweight website, or approaches for greener checkout, in the podcast Sandy shares her deep love of sustainable UX and UI design to encourage everyone to step into green design and play an active role in shaping this new design philosophy. You can listen to the podcast on Spotify or Apple Podcasts.

Sustainable UX Playbook

The Sustainable UX Playbook is a yet-to-be-released work in progress by the same folks who maintain the Sustainable UX Manifesto. The playbook is set to provide guidelines, best practices, and examples to help you and your team adopt an environmentally-centered design approach.

The exact date of when the Sustainable UX Playbook will be available is to be determined, but it will be published to SustainableUXPlaybook.com (which currently redirects to the Sustainable UX Manifesto) when it is ready.

Sustainability Figma Kit

The Sustainability Figma Kit that Elisa Fabbian, Rachele Pedol, and Margherita Troilo created helps digital designers move from human-centered design to a more sustainable life-centered design approach. It consists of a learning guide, 23 action cards, and a flowchart.

The learning guide introduces you to the broader context and importance of designing products and services with a reduced environmental impact. The action cards explore problems you might encounter in different phases of the design process and how to solve them. Last but not least, the flowchart helps you find out which sustainability actions can be applied to the specific type of project you are working on by providing useful tips for designing in a more conscious way.

Sustainability Innovation Framework

The Sustainability Innovation Framework, created by Sebastian Gier, is all about the planning phase, helping you scope work for projects aimed at reducing carbon emissions.

The process is mapped to traditional design thinking, helping you start work by aligning objectives and documenting assertions before tying them into user needs. What makes this framework particularly useful is that it helps prioritize the ideas generated by the process by their environmental impact.

The entire framework is available as a collaborative FigJam board that can be cloned to your own Figma account.

EcoCards Game Workshop Toolkit

One of the most difficult hurdles to adopting a sustainable design process is figuring out how to discuss the topic as a team. Getting everyone on the same page about what it means to design sustainably, and how to establish a process for it, is paramount for any team.

That’s what makes the EcoCards Game Workshop Toolkit such a valuable resource. The toolkit is a collection of three card-based games designed to facilitate team discussion on sustainable design practices. Each game is framed as a “workshop” meant to take place at different stages in the design process, detailing the game rules with a series of steps using a plain deck of playing cards to move the discussion forward.

The EcoCards are created as a FigJam board that can be cloned to your Figma account. They are available in English and French translations.

Team Sustainability Retrospective

OK, so perhaps your team has adopted a sustainable design process that aims to reduce carbon emissions. How do you know it’s working? That’s the purpose of the Team Sustainability Retrospective, a Miro template produced by Paddy Dhanda.

Rather than high-fiving your team for implementing a sustainable system, this set of templates will help you assess whether or not your efforts are paying off in a streamlined five-step process. This way, your team can re-group after the implementation of the design process and properly measure its impact with data that form actionable insights for improving the process even further.

World Wide Waste Book

World Wide Waste is a book by Gerry McGovern, aiming to debunk the perception that being “digital” is akin to being “green.” It provides a healthy dose of statistics about the impact of digital products and services and details the crisis of energy consumption in the world.

For example, McGovern clears up the misunderstanding that cloud technologies are somehow ethereal, carbon-free elements; in reality, they run on physical data centers that produce large, quantifiable emissions. If nothing else, this book will equip you with the information you might need to help convince your team to adopt more sustainable practices, with statistics and case studies to make the case.

Sustainable Web Design Book

If the World Wide Waste book is all about defining and diagnosing unsustainable design practices, then this offering from A Book Apart is aimed at curing those symptoms. Written by lead author of the “Sustainable Web Manifesto” Tom Greenwood, Sustainable Web Design is a collection of practical web design advice for everything from how to measure a website’s environmental impact and identifying low-carbon design practices to creating energy-efficient development processes and creating a hosting environment that helps reduce climate costs.

Like all A Book Apart publications, Sustainable Web Design is available in print and digital editions — just remember that the digital copy is not a carbon-free option, as many of the resources in this roundup have noted. Then again, the printed copy also has climate considerations due to the costs of transporting the book to your front door. Just buying the book is an excellent example of the conundrums of sustainable design.

Climate Tech Guide For Designers

If you’re looking for help establishing yourself in a career in sustainable design, Enrique Allen and the Designer Fund team offer the Climate Tech Guide for Designers.

This guide is less about teams adopting sustainable design standards than it is a resource for helping you make a decision about where you work and who you work for. How passionate is the company about climate? What problems is the company trying to solve, and are the solutions based on climate technology and considerations? These are the types of questions that will allow you to find the right fit for your career.

What makes this Climate Tech Guide for Designers especially useful is that it goes beyond company considerations by offering advice for how to position yourself for a career in climate technology, capping things off with an extensive list of companies that demonstrate sustainable practices.

Ethical Design Handbook

The Ethical Design Handbook is a book we offer right here at Smashing Magazine. Written by authors Trine Falbe, Martin Michael Frederiksen, and Kim Andersen, these guidelines serve as a roadmap to learn about adopting and integrating ethical design practices into a business model.

Wait, why are we talking about “ethical design” when we’ve been sharing resources on “sustainable design”? Ethical and sustainable design work hand-in-hand, as ethical design relies on sustainable digital business practices in addition to a slew of larger concepts that determine an organization’s ethical practices, from transparency in how data is collected to how inclusiveness is built into a design. In other words, ethical and sustainable design are united by a cause to prevent harm to people. A sustainable design process supports a healthy environment that, in turn, supports an ethical responsibility to care about the impact we have on the planet.

All in all, the Ethical Design Handbook is about leveraging ethical business practices as a market differentiator that can be used as a competitive advantage. Sustainable design principles are part of that matrix, demonstrating that sustainable practices can be aligned to — and even enhance — business objectives.

Ethical Design Resources

Another useful resource to help designers and developers live up to the responsibility of causing no harm and ensure that the experiences they build are inclusive, honest, and safe is Ethical Design Resources, which Lexi Namer maintains in collaboration with the Ethical Design Network and Kate Every.

On Ethical Design Resources, you’ll find articles, books, courses, frameworks, tools, talks, videos, podcasts, and more covering different aspects of ethical design. They help you assess the impact of your design decisions, uncover harmful practices, and support you in making design choices that respect your users.

And if you need more resources, take a look at Ethical Design Guide and Humane By Design.

Wrapping Up

There you have it, a deep collection of toolkits, frameworks, and resources you can use to learn about sustainable design practices and how to adopt them into your own design process. Notice how the collection reveals that sustainable design is a multifaceted topic that considers everything from how we work to the specific tools we use. It even covers product design as a whole and the decisions that impact the sustainability of a product, not to mention how business objectives influence environmental objectives.

There may not be a single silver bullet or resource that immediately aligns you and your work with sustainable design practices. That said, the resources provided in this roundup can help you make big and small gains alike, whether it’s reflected in something as seemingly small as the hosting provider you decide to use for your website or something more involved such as integrating environmental considerations at every stage of the design process.

]]>
hello@smashingmagazine.com (Cosima Mielke & Geoff Graham)
<![CDATA[Off To New Adventures (July 2023 Wallpapers Edition)]]> https://smashingmagazine.com/2023/06/desktop-wallpaper-calendars-july-2023/ https://smashingmagazine.com/2023/06/desktop-wallpaper-calendars-july-2023/ Fri, 30 Jun 2023 10:30:00 GMT Often, it’s the little things that inspire us and that we treasure most. The sky shining in the most beautiful colors at the end of a seemingly endless summer day, riding your bike through a light rain shower on a hot afternoon — or maybe it’s a scoop of your favorite ice cream that refuels your batteries? No matter what big and small adventures July will have in store for you this year, our new collection of wallpapers is bound to cater for some inspiration along the way.

More than twelve years ago, we started this monthly wallpapers series to bring you a variety of beautiful, unique, and inspiring wallpapers every month. It’s a community effort made possible by artists and designers from around the globe who challenge their creative skills to cater for some good vibes on your screens. And, well, it wasn’t any different this time around.

In this post, you’ll find their wallpaper designs for July 2023. All of them come in versions with and without a calendar and can be downloaded for free. To make the month even more colorful, we also compiled a selection of July favorites from our wallpapers archives at the end of this post. A huge thank-you to everyone who submitted their artwork — this post wouldn’t exist without you!

  • You can click on every image to see a larger preview.
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t influenced by us in any way but rather designed from scratch by the artists themselves.
  • Submit a wallpaper!
    Did you know that you could get featured in our next wallpapers post, too? We are always looking for creative talent.
Under The Enchanting Moonlight

“Two friends sat under the enchanting moonlight, enjoying the serene ambiance as they savoured their cups of tea. It was a rare and precious connection that transcended the ordinary, kindled by the magic of the moonlight. Eventually, as the night began to wane, they reluctantly stood, their empty cups in hand. They carried with them the memories and the tranquility of the moonlit tea session, knowing that they would return to this special place to create new memories in the future.” — Designed by Bhabna Basak from India.

DJ Little Bird

Designed by Ricardo Gimenes from Sweden.

In Space

Designed by Lieke Dol from the Netherlands.

Unleash Your Inner Grandmaster

“Hey there, chess champs and rook-ies! Today, we’re rolling out the red carpet for the grandest celebration in the chess universe. It’s World Chess Day, where we celebrate the brain-bending battles, knightly maneuvers, and epic pawn sacrifices that keep us coming back for more! Step into the realm of kings and queens, where the fate of nations is decided over a checkered battlefield. Chess, the ultimate game of mental gymnastics, proves that you don’t need biceps of steel to flex your strategic muscles!” — Designed by PopArt Studio from Serbia.

Cross The Bridge

“On this trip around the world, we return to Europe, specifically to London. We walked through its streets and decided to cross the bridge to enjoy both sides of the city. We may take one of its famous red buses or take a walk along the Thames. In any case, we have a whole month to become true Londoners.” — Designed by Veronica Valenzuela Jimenez from Spain.

Swim Swim

Designed by Rebecca Curiel.

Flat Design ’s-Hertogenbosch

“I admire artwork that is made using simple shapes and colors in Illustrator, also known as flat design. The amazing things you can make with these simple shapes are just mind-blowing. The buildings in the artwork come from my hometown ’s-Hertogenbosch in the Netherlands. I am most proud of the great cathedral on the left. The number of hours I’ve put into it is not normal.” — Designed by Mitch van Trigt from the Netherlands.

Motion Sickness

Designed by Ricardo Gimenes from Sweden.

Underneath The Banana Tree

“July is the time to relax. What about having a rest underneath a… banana tree, lala la la? You know this song? Yes it’s about a mango tree, but never mind.” — Designed by Philippe Brouard from France.

Book Imagination

“Everyone’s imagination when reading books is different. One person thinks of a village and another of a city. That’s the beauty of reading.” — Designed by Britt van Falier from the Netherlands.

Oldies But Goodies

Our wallpapers archives are full of timeless treasures that are just too good to be forgotten. So here’s a small selection of favorites from past July editions. Please note that these designs don’t come with a calendar.

Melting July

“Welcome to the sweltering July — the month when it’s so hot that even the fruits are edgy. Our ice-creamy, vibrantly-colored monthly calendar is melting as the temperature rises, so make sure to download it as quickly as possible!” — Designed by PopArt Studio from Serbia.

Hotdog

Designed by Ricardo Gimenes from Sweden.

Meeting Mary Poppins

“This month, we travel to London with Mary Poppins to discover the city. We will have great adventures!” — Designed by Veronica Valenzuela from Spain.

Birdie July

Designed by Lívi Lénárt from Hungary.

Summer Season

“I’m an avid runner, and I have some beautiful natural views surrounding my city. The Smoky Mountains are a bit further east, so I took some liberties, but Tennessee’s nature is nothing short of beautiful and inspiring.” — Designed by Cam Elliott from Memphis, TN.

The Ancient Device

Designed by Ricardo Gimenes from Sweden.

Summer Cannonball

“Summer is coming in the northern hemisphere and what better way to enjoy it than with watermelons and cannonballs.” — Designed by Maria Keller from Mexico.

Eternal Summer

“And once you let your imagination go, you find yourself surrounded by eternal summer, unexplored worlds, and all-pervading warmth, where there are no rules of physics and colors tint the sky under your feet.” — Designed by Ana Masnikosa from Belgrade, Serbia.

A Flamboyance Of Flamingos

“July in South Africa is dreary and wintery so we give all the southern hemisphere dwellers a bit of color for those gray days. And for the northern hemisphere dwellers a bit of pop for their summer!” — Designed by Wonderland Collective from South Africa.

Riding In The Drizzle

“Rain has come, showering the existence with new seeds of life. Everywhere life is blooming, as if they were asleep and the falling music of raindrops have awakened them. Feel the drops of rain. Feel this beautiful mystery of life. Listen to its music, melt into it.” — Designed by DMS Software from India.

Less Busy Work, More Fun!

Designed by ActiveCollab from the United States.

Taste Like Summer

“In times of clean eating and the world of superfoods there is one vegetable missing. An old, forgotten one. A flower actually. Rare and special. Once it had a royal reputation (I cheated a bit with the blue). The artichoke — this is my superhero in the garden! I am a food lover — you too? Enjoy it — dip it!” — Designed by Alexandra Tamgnoué from Germany.

Day Turns To Night

Designed by Xenia Latii from Germany.

Heated Mountains

“Warm summer weather inspired the color palette.” — Designed by Marijana Pivac from Croatia.

Tropical Lilies

“I enjoy creating tropical designs. They fuel my wanderlust and passion for the exotic, instantaneously transporting me to a tropical destination.” — Designed by Tamsin Raslan from the United States.

Sweet Summer

“In summer everything inspires me.” — Designed by Maria Karapaunova from Bulgaria.

Summermoon

Designed by Erik Neumann from Germany.

Fire Camp

“What’s better than a starry summer night with an (unexpected) friend around a fire camp with some marshmallows? Happy July!” — Designed by Etienne Mansard from the UK.

Island River

“Make sure you have a refreshing source of ideas, plans and hopes this July. Especially if you are to escape from urban life for a while.” — Designed by Igor Izhik from Canada.

Captain Amphicar

“My son and I are obsessed with the Amphicar right now, so why not have a little fun with it?” — Designed by 3 Bicycles Creative from the United States.

It’s Getting Hot

Designed by Ricardo Gimenes from Sweden.

Alentejo Plain

“Based in the Alentejo region, in the south of Portugal, where there are large plains used for growing wheat. It thus represents the extensions of the fields of cultivation and their simplicity. Contrast of the plain with the few trees in the fields. Storks that at this time of year predominate in this region, being part of the Alentejo landscape and mentioned in the singing of Alentejo.” — Designed by José Guerra from Portugal.

An Intrusion Of Cockroaches

“Ever watched Joe’s Apartment when you were a kid? Well, that movie left a soft spot in my heart for the little critters. Don’t get me wrong: I won’t invite them over for dinner, but I won’t grab my flip flop and bring the wrath upon them when I see one running in the house. So there you have it… three roaches… bringing the smack down on that pesky human… ZZZZZZZAP!!” — Designed by Wonderland Collective from South Africa.

Heat Wave

Designed by Ricardo Gimenes from Sweden.

My July

Designed by Cátia Pereira from Portugal.

July Rocks!

Designed by Joana Moreira from Portugal.

Frogs In The Night

“July is coming and the nights are warmer. Frogs look at the moon while they talk about their day.” — Designed by Veronica Valenzuela from Spain.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[What’s The Perfect Design Process?]]> https://smashingmagazine.com/2023/06/perfect-design-process/ https://smashingmagazine.com/2023/06/perfect-design-process/ Thu, 29 Jun 2023 10:00:00 GMT Design process is messy. You might be following a structured approach, but with all the last-minute changes and overlooked details, too often, it takes on a life of its own. And before you know it, you are designing in a chaotic environment full of refinements, final-final deliverables, and missed deadlines.

This article is part of our ongoing series on design patterns. It’s an upcoming part of the video library on Smart Interface Design Patterns 🍣 and is a part of the live UX training as well.

What’s The “Right” Design Process?

Of course, there is no “right-and-only” way to frame a design process. It’s defined by whatever works well for you and for your team. Personally, I tend to rely on 4 design models that seem to fit well with my design work:

  • Double Diamond Process for its comprehensive and reliable methodology for solving problems. In this guide, Dan Nessler breaks down the entire Double-Diamond process into single parts, explaining how exactly it works, step-by-step, in all fine details.

  • Triple Diamond Process for its more realistic approach to the designer’s input across the product’s life cycle. That’s a piece by Adam Gray on why bringing flexibility to the messy reality of the design process is critical to improving planning and involving design work as prototypes are being built.

  • Enterprise Design Thinking Model by IBM for its focus on design maturity and scale, which really helps large organizations. A useful model that helps argue for user research, user-centricity, and rapid low-fidelity prototyping — and how to transfer ownership to design teams at scale.

  • Hot Potato process, for its simplicity in bridging design and development across the entire product lifecycle. Designers and developers throw ideas, mock-ups, and prototypes back and forth all the time. Sometimes there are more involved design phases than dev phases, but there is no hand-off, and the entire process is driven by continuous collaboration.

These ways of thinking about the design process translated into a process that works well for me but has to be adjusted for every project that I’m working on. In a nutshell, here’s how it would work.

A Process That Works For Me

There is no such thing as enough user research. In every project, I start with involving users as early as possible. I explore all the data we have, interview customer support and the service desk, check for technical debt and design issues, backlog items, and dismissed ideas. I explore organizational charts to understand layers of management. I set the right expectations and seek allies.

From there, I would typically spend weeks or even months in diagrams and spreadsheets and endless docs before drawing a single pixel on the screen. I try to get developers on board early so they can start setting up the dev environment.

I bring in stakeholders and people who have a vested interest in contributing to the success of the project. Voices that need to be heard but are often forgotten. I see my role as a person who needs to bridge the gap between business requirements and user needs through the lens of design.

Then I take a blank piece of paper and start sketching. I sketch ideas. I sketch customer journey maps. I sketch content boxes. I write down components that we will surely need in the product — the usual suspects. I set up a workshop with designers and developers to decide on names. Then developers can go ahead and prototype while designers focus on UI and interaction design.

To make sure I get both sides of the equation right, I draft customer journey maps, brainstorm ideas and prioritize them with the Kano model and Impact ÷ Effort matrix (with developers, PMs, and stakeholders).

I don’t want to waste time designing and building the wrong thing, so I establish design KPIs and connect them with business goals using KPI trees. I get a sign-off on those, and then the interface design starts.

I develop hypotheses. Low-fidelity mock-ups. Speak to developers. Get their feedback. Refine. Throw the mock-ups to developers. Bring them into HTML and CSS. Test hypotheses in usability sessions until we get to an 80% success rate for top tasks. Designers keep refining, and developers keep building out.

Establish a process to continuously measure the quality of design. Track task completion rates. Track task completion times. Track error rates. Track error recovery rates. Track accessibility. Track sustainability. Track performance. In a B2B setting, we track the time customers need to complete their tasks and try to minimize it.

Make them visible to the entire organization to show the value of design and its impact on business KPIs. Explain that the process isn’t based on hunches. It’s an evidence-driven design.

Establish ownership and governance. The search team must be measured by the quality of search results for the top 100 search queries over the last two months. People who publish content are owners of that content. It’s their responsibility to keep it up-to-date, rewrite, archive, or delete it.

Refine, refine, refine. Keep throwing new components and user journeys to developers. Stop. Test with users to check how we are doing. Keep going and refine in the browser. Continuously and rigorously test. Launch and keep refining. Measure the KPIs and report to the next iteration of the design.

Admittedly, it is a bit messy. But it helps me stay on track when navigating a complex problem space in a way that delivers measurable results, removes bias and subjectivity from design decisions, and helps deliver user-centric designs that also address business needs.

Wrapping Up

Of course, there is no “right-and-only” way to frame a design process. It’s defined by whatever works well for you and for your team. Explore options and keep them in mind when designing your design process. Whatever you choose, don’t follow it rigidly just for the sake of it, and combine bits from all models to make it right for you.

As long as it works well for you, it’s right. And that’s the only thing that matters.

You can find more details on design patterns in the video library on Smart Interface Design Patterns 🍣 — with a live UX training that’s coming up in September this year.

Further Reading on Smashing Magazine

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[Useful DevTools Tips and Tricks]]> https://smashingmagazine.com/2023/06/popular-devtools-tips/ https://smashingmagazine.com/2023/06/popular-devtools-tips/ Tue, 27 Jun 2023 12:00:00 GMT When it comes to browser DevTools, we all have our own preferences and personal workflows, and we pride ourselves in knowing that “one little trick” that makes our debugging lives easier.

But also — and I know this from having worked on DevTools at Mozilla and Microsoft for the past ten years — most people tend to use the same three or four DevTools features, leaving the rest unused. This is unfortunate as there are dozens of panels and hundreds of features available in DevTools across all browsers, and even the less popular ones can be quite useful when you need them.

As it turns out, I’ve maintained the DevTools Tips website for the past two years now. More and more tips get added over time, and traffic keeps growing. I recently started tracking the most popular tips that people are accessing on the site, and I thought it would be interesting to share some of this data with you!

So, here are the top 15 most popular DevTools tips from the website.

If there are other tips that you love and that make you more productive, consider sharing them with our community in the comments section!

Let’s count down, starting with…

15: Zoom DevTools

If you’re like me, you may find the text and buttons in DevTools too small to use comfortably. I know I’m not alone here, judging by the number of people who ask our team how to make them bigger!

Well, it turns out you can actually zoom into the DevTools UI.

DevTools’ user interface is built with HTML, CSS, and JavaScript, which means that it’s rendered as web content by the browser. And just like any other web content in browsers, it can be zoomed in or out by using the Ctrl+ and Ctrl- keyboard shortcuts (or Cmd+ and Cmd- on macOS).

So, if you find the text in DevTools too small to read, click anywhere in DevTools to make sure the focus is there, and then press Ctrl+ (or Cmd+ on macOS).

Chromium-based browsers such as Chrome, Edge, Brave, or Opera can also display the font used by an element that contains the text:

  • Select an element that only contains text children.
  • Open the Computed tab in the sidebar of the Elements tool.
  • Scroll down to the bottom of the tab.
  • The rendered fonts are displayed.

Note: To learn more, see “List the fonts used on a page or an element.”

12: Measure Arbitrary Distances On A Page

Sometimes it can be useful to quickly measure the size of an area on a webpage or the distance between two things. You can, of course, use DevTools to get the size of any given element. But sometimes, you need to measure an arbitrary distance that may not match any element on the page.

When this happens, one nice way is to use Firefox’s measurement tool:

  1. If you haven’t done so already, enable the tool. This only needs to be done once: Open DevTools, go into the Settings panel by pressing F1 and, in the Available Toolbox Buttons, check the Measure a portion of the page option.
  2. Now, on any page, click the new Measure a portion of the page icon in the toolbar.
  3. Click and drag with the mouse to measure distances and areas.

Note: To learn more, see “Measure arbitrary distances in the page.”

11: Detect Unused Code

One way to make a webpage appear fast to your users is to make sure it only loads the JavaScript and CSS dependencies it truly needs.

This may seem obvious, but today’s complex web apps often load huge bundles of code, even when only a small portion is needed to render the first page.

In Chromium-based browsers, you can use the Coverage tool to identify which parts of your code are unused. Here is how:

  1. Open the Coverage tool. You can use the Command Menu as a shortcut: press Ctrl+Shift+P (or Cmd+Shift+P on macOS), type “coverage” and then press Enter.
  2. Click Start instrumenting coverage and refresh the page.
  3. Wait for the page to reload and for the coverage report to appear.
  4. Click any of the reported files to open them in the Sources tool.

The file appears in the tool along with blue and red bars that indicate whether a line of code is used or unused, respectively.
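
As an aside, the same kind of coverage data can also be collected from a script instead of the DevTools UI, for example with Puppeteer. This is a sketch under that assumption, not something the Coverage tool itself requires:

// Collect JavaScript coverage for a page with Puppeteer and print how much
// of each script was actually executed.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.coverage.startJSCoverage();
  await page.goto('https://example.com');
  const coverage = await page.coverage.stopJSCoverage();
  for (const entry of coverage) {
    const total = entry.text.length || 1;
    const used = entry.ranges.reduce((sum, range) => sum + (range.end - range.start), 0);
    console.log(entry.url, `${Math.round((used / total) * 100)}% used`);
  }
  await browser.close();
})();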

Note: To learn more, see “Detect unused CSS and JavaScript code.”

10: Change The Playback Rate Of A Video

Usually, when a video appears on a webpage, the video player that displays it also provides buttons to control its playback, including a way to speed it up or slow it down. But that’s not always the case.

In cases when the webpage makes it difficult or impossible to control a video, you can use DevTools to control it via JavaScript instead.

  1. Open DevTools.
  2. Select the <video> element in the Elements tool (called Inspector in Firefox).
  3. Open the Console tool.
  4. Type the following: $0.playbackRate = 2; and press Enter.

The $0 expression is a shortcut that refers to whatever element is currently selected in DevTools; in this case, it refers to the <video> HTML element.

By using the playbackRate property of the <video> element, you can speed up or slow down the video. Note that you could also use any of the other <video> element properties or methods, such as:

  • $0.pause() to pause the video;
  • $0.play() to resume playing the video;
  • $0.loop = true to repeat the video in a loop.

Note: To learn more, see “Speed up or slow down a video.”

9: Use DevTools In Another Language

If, like me, English isn’t your primary language, using DevTools in English might make things harder for you.

If that’s your case, know that you can actually use a translated version of DevTools that either matches your operating system, your browser, or a language of your choice.

The procedure differs per browser.

In Safari, both the browser and Web Inspector (which is what DevTools is called in Safari) inherit the language of the operating system. So if you want to use a different language for DevTools, you’ll need to set it globally by going into System preferences → Language & Region → Apps.

In Firefox, DevTools always matches the language of the browser. So, if you want to use DevTools in, say, French, then download Firefox in French.

Finally, in Chrome or Edge, you can choose to either match the language of the browser or set a different language just for DevTools.

To make your choice:

  1. Open DevTools and press F1 to open the Settings.
  2. In the Language drop-down, choose either Browser UI language to match the browser language or choose another language from the list.

Note: To learn more, see “Use DevTools in another language.”

8: Disable Event Listeners

Event listeners can sometimes get in the way of debugging a webpage. If you’re investigating a particular issue, but every time you move your mouse or use the keyboard, unrelated event listeners are triggered, this could make it harder to focus on your task.

A simple way to disable an event listener is by selecting the element it applies to in the Elements tool (or Inspector in Firefox). Once you’ve found and selected the element, do either of the following:

  • In Firefox, click the event badge next to the element, and in the popup that appears, uncheck the listeners you want to disable.
  • In Chrome or Edge, click the Event Listeners tab in the sidebar panel, find the listener you want to remove, and click Remove.
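
In Chromium-based browsers, you can also do this from the Console with the getEventListeners() utility. The snippet below is a small sketch; note that it only sees listeners attached directly to the selected element, not ones delegated to an ancestor.

// Console-only utilities: $0 is the currently selected element, and
// getEventListeners() lists the listeners attached to it (Chromium only).
getEventListeners($0);

// Remove every click listener attached directly to the selected element.
(getEventListeners($0).click || []).forEach(({ listener, useCapture }) => {
  $0.removeEventListener('click', listener, useCapture);
});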

Note: To learn more, see “Remove or disable event listeners.”

7: View Console Logs On Non-Safari Browsers On iOS

As you might know, Safari isn’t the only browser you can install and use on an iOS device. Firefox, Chrome, Edge, and others can also be used. Technically, they all run on the same underlying browser rendering engine, WebKit, so a website should more or less look the same in all of these browsers in iOS.

However, it’s possible to have bugs on other browsers that don’t replicate in Safari. This can be quite tricky to investigate. While it’s possible to debug Safari on an iOS device by attaching the device to a Mac with a USB cable, it’s impossible to debug non-Safari browsers.

Thankfully, there is a way to at least see your console logs in Chrome and Edge (and possibly other Chromium-based browsers) when using iOS:

  1. Open Chrome or Edge on your iOS device and go to the special about:inspect page.
  2. Click Start Logging.
  3. Keep this tab open and then open another one.
  4. In the new tab, go to the page you’re trying to debug.
  5. Return to the previous tab. Your console logs should now be displayed.

Note: To learn more, see “View console logs from non-Safari browsers on an iPhone.”

6: Copy Element Styles

Sometimes it’s useful to extract a single element from a webpage, maybe to test it in isolation. To do this, you’ll first need to extract the element’s HTML code via the Elements tool by right-clicking the element and choosing Copy → Copy outer HTML.

Extracting the element’s styles, however, is a bit more difficult as it involves going over all of the CSS rules that apply to the element.

Chrome, Edge, and other Chromium-based browsers make this step a lot faster:

  1. In the Elements tool, select the element you want to copy styles from.
  2. Right-click the selected element.
  3. Click Copy → Copy styles.
  4. Paste the result in your text editor.

You now have all the styles that apply to this element, including inherited styles and custom properties, in a single list.
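
If you need something similar in Firefox, or you want the fully resolved values rather than the authored rules, one alternative is to dump the element’s computed style from the Console. This is a sketch using the copy() and $0 console utilities; unlike Copy styles, it lists every property with its computed value.

// Build a list of every computed property of the selected element and copy
// it to the clipboard. copy() and $0 are Console utilities, not page APIs.
{
  const style = getComputedStyle($0);
  copy(
    Array.from(style)
      .map((property) => `${property}: ${style.getPropertyValue(property)};`)
      .join('\n')
  );
}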

Note: To learn more, see “Copy an element’s styles.”

5: Download All Images On The Page

This nice tip isn’t specific to any browser and can be run anywhere as long as you can execute JavaScript. If you want to download all of the images that are on a webpage, open the Console tool, paste the following code, and press Enter:

$$('img').forEach(async (img) => {
 try {
   const src = img.src;
   // Fetch the image as a blob.
   const fetchResponse = await fetch(src);
   const blob = await fetchResponse.blob();
   const mimeType = blob.type;
   // Figure out a name for it from the src and the mime-type.
   const start = src.lastIndexOf('/') + 1;
   const end = src.indexOf('.', start);
   let name = src.substring(start, end === -1 ? undefined : end);
   name = name.replace(/[^a-zA-Z0-9]+/g, '-');
   name += '.' + mimeType.substring(mimeType.lastIndexOf('/') + 1);
   // Download the blob using a <a> element.
   const a = document.createElement('a');
   a.setAttribute('href', URL.createObjectURL(blob));
   a.setAttribute('download', name);
   a.click();
 } catch (e) {}
});

Note that this might not always succeed: the CSP policies in place on the web page may cause some of the images to fail to download.

If you happen to use this technique often, you might want to turn this into a reusable snippet of code by pasting it into the Snippets panel, which can be found in the left sidebar of the Sources tool in Chromium-based browsers.

In Firefox, you can also press Ctrl+I on any webpage to open Page Info, then go to Media and select Save As to download all the images.

Note: To learn more, see “Download all images from the page.”

4: Visualize A Page In 3D

The HTML and CSS code we write to create webpages gets parsed, interpreted, and transformed by the browser, which turns it into various tree-like data structures like the DOM, compositing layers, or the stacking context tree.

While these data structures are mostly internal in-memory representations of a running webpage, it can sometimes be helpful to explore them and make sure things work as intended.

A three-dimensional representation of these structures can help see things in a way that other representations can’t. Plus, let’s admit it, it’s cool!

Edge is the only browser that provides a tool dedicated to visualizing webpages in 3D in a variety of ways.

  1. The easiest way to open it is by using the Command Menu. Press Ctrl+Shift+P (or Cmd+Shift+P on macOS), type “3D” and then press Enter.
  2. In the 3D View tool, choose between the three different modes: Z-Index, DOM, and Composited Layers.
  3. Use your mouse cursor to pan, rotate, or zoom the 3D scene.

The Z-Index mode can be helpful to know which elements are stacking contexts and which are positioned on the z-axis.

The DOM mode can be used to easily see how deep your DOM tree is or find elements that are outside of the viewport.

The Composited Layers mode shows all the different layers the browser rendering engine creates to paint the page as quickly as possible.

Note that Safari and Chrome also have a Layers tool that shows composited layers.

Note: To learn more, see “See the page in 3D.”

3: Disable Abusive Debugger Statements

Some websites aren’t very nice to us web developers. While they seem normal at first, as soon as you open DevTools, they immediately get stuck and pause at a JavaScript breakpoint, making it very hard to inspect the page!

These websites achieve this by adding a debugger statement in their code. This statement has no effect as long as DevTools is closed, but as soon as you open it, DevTools pauses the website’s main thread.
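
For illustration, the offending code often boils down to something as simple as the made-up snippet below; it is harmless until DevTools opens, at which point the page pauses over and over again.

// A debugger statement on a timer pauses the page repeatedly whenever
// DevTools is open, making inspection nearly impossible.
setInterval(() => {
  debugger;
}, 100);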

If you ever find yourself in this situation, here is a way to get around it:

  1. Open the Sources tool (called Debugger in Firefox).
  2. Find the line where the debugger statement is. That shouldn’t be hard since the debugger is currently paused there, so it should be visible right away.
  3. Right-click on the line number next to this line.
  4. In the context menu, choose Never pause here.
  5. Refresh the page.

Note: To learn more, see “Disable abusive debugger statements that prevent inspecting websites.”

2: Edit And Resend Network Requests

When working on your server-side logic or API, it may be useful to send a request over and over again without having to reload the entire client-side webpage and interact with it each time. Sometimes you just need to tweak a couple of request parameters to test something.

One of the easiest ways to do this is by using Edge’s Network Console tool or Firefox’s Edit and Resend feature of the Network tool. Both of them allow you to start from an existing request, modify it, and resend it.

In Firefox:

  • Open the Network tool.
  • Right-click the network request you want to edit and then click Edit and Resend.
  • A new sidebar panel opens up, which lets you change things like the URL, the method, the request parameters, and even the body.
  • Change anything you need and click Send.

In Edge:

  • First, enable the Network Console tool by going into the Settings panel (press F1) → Experiments → Enable Network Console.
  • Then, in the Network tool, find the request you want to edit, right-click it and then click Edit and Resend.
  • The Network Console tool appears, which lets you change the request just like in Firefox.
  • Make the changes you need, and then click Send.

Here is what the feature looks like in Firefox:

Note: To learn more, see “Edit and resend faulty network requests to debug them.”

If you need to resend a request without editing it first, you can do so too. (See: Replay a XHR request)
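
Another quick option, available in Chromium-based browsers and Firefox, is to right-click the request and look for the Copy as fetch option in the copy submenu, paste the result into the Console, tweak it, and run it again. The URL, headers, and body in the sketch below are placeholders for illustration only:

// Pasted from “Copy as fetch”, then edited before re-running.
// Everything below is a placeholder; a real copy includes the original
// request's URL, headers, and body.
fetch('https://example.com/api/orders', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ quantity: 3 }), // tweaked from the original payload
})
  .then((response) => response.json())
  .then((data) => console.log(data));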

And the honor of being the Number One most popular DevTools tip in this roundup goes to… 🥁

1: Simulate Devices

This is, by far, the most widely viewed DevTools tip on my website. I’m not sure why exactly, but I have theories:

  • Cross-browser and cross-device testing remain, to this day, one of the most important pain points that web developers face, and it’s nice to be able to simulate other devices from the comfort of your development browser.
  • People might be using it to achieve non-dev tasks. For example, people use it to post photos on Instagram from their laptops or desktop computers!

It’s important to realize, though, that DevTools can’t simulate what your website will look like on another device. Underneath it, it is all still the same browser rendering engine. So, for example, when you simulate an iPhone by using Firefox’s Responsive Design Mode, the page still gets rendered by Firefox’s rendering engine, Gecko, rather than Safari’s rendering engine, WebKit.

Always test on actual browsers and actual devices if you don’t want your users to stumble upon bugs you could have caught.

That being said, simulating devices in DevTools is very useful for testing how a layout works at different screen sizes and device pixel ratios. You can even use it to simulate touch inputs and other user agent strings.

Here are the easiest ways to simulate devices per browser:

  • In Safari, press Ctrl+Cmd+R, or click Develop in the menu bar and then click Enter Responsive Design Mode.
  • In Firefox, press Ctrl+Shift+M (or Cmd+Shift+M), or use the browser menu → More tools → Responsive design mode.
  • In Chrome or Edge, open DevTools first, then press Ctrl+Shift+M (or Cmd+Shift+M), or click the Device Toolbar icon.

Here is how simulating devices looks in Safari:

Note: To learn more, see “Simulate different devices and screen sizes.”

Finally, if you find yourself simulating screen sizes often, you might be interested in using Polypane. Polypane is a great development browser that lets you simulate multiple synchronized viewports side by side, so you can see how your website renders at different sizes at the same time.

Polypane comes with its own set of unique features, which you can also find on DevTools Tips.

Conclusion

I’m hoping you can see now that DevTools is very versatile and can be used to achieve as many tasks as your imagination allows. Whatever your debugging use case is, there’s probably a tool that’s right for the job. And if there isn’t, you may be able to find out what you need to know by running JavaScript in the Console!

If you’ve discovered cool little tips that come in handy in specific situations, please share them in the comments section, as they may be very useful to others too.

Further Reading on Smashing Magazine

]]>
hello@smashingmagazine.com (Patrick Brosset)
<![CDATA[Behind The Curtains Of Wikipedia Redesign]]> https://smashingmagazine.com/2023/06/behind-curtains-wikipedia-redesign/ https://smashingmagazine.com/2023/06/behind-curtains-wikipedia-redesign/ Mon, 26 Jun 2023 08:00:00 GMT Wikipedia is more than a website — it’s perhaps a cornerstone of the World Wide Web. For decades, the site has provided a model for collaborating online, designing long-form content layouts, and supporting internationalization.

One of the more endearing qualities of Wikipedia is its design, which is known for its utilitarian aesthetics that have stuck around since its 2001 inception. The site has undergone redesigns before, but they are rare and often introduce subtle updates.

This year, 2023, marks the first Wikipedia redesign since 2014. Alex Hollender and Jon Robson led the effort and were kind enough to discuss it with us. The following is an interview that delves into what changed in this latest design, getting into the process as well as design and development details that we all can learn from.

Interview

Geoff Graham: When I think of Wikipedia as a website, I think about the design first and foremost. It’s classic for its focus on function over aesthetics, yet often considered a relic along the same lines as Craigslist. How was it decided that “now” is the right time for a redesign?

Alex Hollender: You know, it’s funny, I think people sometimes assume that organizations make these super-calculated, methodical decisions, and maybe some do. What I’ve experienced more often are opportunistic decisions resulting from some combination of intuition and relationships. Nirzar Pangakar, the design director back in 2019, knew what the organization was hoping to accomplish in the coming years and understood that media and content on the internet were changing rapidly. He saw that we needed to set ourselves up with a better foundation to iterate on top of going forward. He also imagined how the website looked to newcomers and thought that making it a bit more familiar to them would offer a more inclusive experience. And I think he also sensed that in terms of the culture of the Wikipedia community, if we let any more time pass before making some changes, the conservativism and ossification would grow more and more intense, and projects like this would only become more difficult down the road.

So it’s not like something was severely broken, or data was pointing us towards a specific problem or opportunity. There were a few concrete things we knew could be improved, but the driving force was Nirzar’s intuition regarding some of these larger things. He had a great relationship with the Chief Product Officer, Toby Negrin, and our team’s Product Manager, Olga Vasileva, and found an opportunity to get the project started. And because it can be somewhat difficult to articulate these sorts of intuitions, Nirzar, Olga, and I made a little design sprint to help others envision and understand the types of changes we could start with and where they might lead us.

Geoff: Wikipedia is more than just a website, right? It’s more like 300 sites where each instance is a different language. How do you approach a design system for a large network of sites like that? Is there a single, centralized source of truth, or is it something looser, depending on the locale?

Alex: Right, so there’s Wikipedia in over 300 languages, then there’s also a bunch of sister projects, including WikiData, Commons, WikiQuote, WikiSource, and others — all of which use the same interface. I’d say the needs are maybe 80-ish percent the same across all of the experiences. Then you’ve got things where specific languages need special functionality, or the WikiData search bar needs something extra, or the WikiSource “article” page has different needs from the Wikipedia one.

There’s, unfortunately, no single source of truth — we don’t even have all of the customizations and variations documented. A big part of being a designer here is just building a catalog in your mind over time. Different people know about different little nooks and crannies and would remind us like, “Hey, if you want to put a button there, you’re going to have to figure out something for project X in language Y because they’ve got a custom feature living in that spot currently.” It’s this very organic, emergent kind of thing where it’s just grown to fit people’s needs in a very unstructured, decentralized way. Super cool but quite difficult when you want to tweak some of the more fundamental/foundational parts of the experience.

Jon Robson: Before I worked on Wikipedia, I’d never worked on multilingual sites. There’s such a fascinating depth to it, for example, how numbering systems differ in different languages, how quotation marks should be considered translated content, how certain projects have content in two scripts, and how some projects add their own cultural flavor to the design. If you look at the Navajo Wikipedia website, they use a Navajo rug pattern which they’ve had since at least 2005.

It was fascinating how during this redesign, every release risked disrupting something small, as it was impossible to audit everything that was happening in all those projects. We had to make peace with the fact that we might not be able to retain them all and that things would break, and we’d iterate and find a happy medium. Often it’s unclear who to talk to about these things within the organization. Some projects just notice our changes and adapt, while other communities are more vocal. So we have to work together to reconcile these extremes. I’ve been impressed with how Alex has remained so stoic as a designer despite the curve balls the project has thrown at him.

Geoff: I imagine there’s a fine balance when working on a redesign for a site that’s as ubiquitous and that has as a long legacy as Wikipedia. How important was maintaining a sense of familiarity with the design for users? And how constraining was that for introducing new design elements?

Alex: Ultimately, we were focused on delivering the best reading and editing experience we could, somewhat regardless of familiarity for experienced users. For example, moving the table of contents from being inline below the lead section to being a sidebar, from a familiarity perspective, was a huge shift, and a lot of experienced users couldn’t get past that. For them, it violated the platonic form of a Wikipedia article or something, like if the table of contents wasn’t inline, then the article wasn’t a Wikipedia article. And while they tried to justify that preference from a functionality standpoint, their reasons weren’t strong, and I think it was mostly about them being uncomfortable with the unfamiliar. Meanwhile, all of the testing and the functional justifications we, and some community members, put forth made it super clear that the sidebar was the better approach. So, that’s how we made that particular decision.

Jon: The table of contents going from within the article to outside the article also uncovered a lot of interesting innovations our community had made for certain articles. For example, in some articles, they’d converted the standard table of contents to a horizontal layout using some inline styles or only listed the top-level headings using display: none in CSS to hide the rest. These customizations were broken when we implemented our redesign, which has opened up interesting discussions about whether customizations should be core parts of the software and how they should work in the new design.

Alex: I think the question of familiarity came into play more in terms of the rollout and how much we could change at once. We were sensitive to the risk of upsetting this very small part of the community that has an outsized influence on our decisions. Our fear was they would try to shut the project down, which has happened with other projects, big and small, in the past. So, for example, we didn’t include an increased font size in the first version of the new interface, even though we (and many community members) strongly believed it would be a significant improvement. We know from past projects that typography is a particularly hot-button topic.

Geoff: Who else was involved in the redesign? What roles did they play, and how did you manage all the work?

Alex: As far as our team goes, it’s about 5-6 Engineers, a Product Manager, a Community Specialist, and someone on Quality Assurance. Pretty much everyone was involved in a meaningful way in terms of exploring design challenges and weighing in on various options. Olga, the Product Manager, and several of the Engineers are better than I am when it comes to thinking about certain challenges. One clear example is accessibility.

There were several community members who were close collaborators and hundreds of others who were more casually involved. The majority of that collaboration happens on Phabricator, which is our task-tracking system. Of course, the timing gets tricky because community members might jump in with ideas or concerns as we’re finishing up a feature, maybe just because they weren’t aware that the conversation had started a few months back or whatever.

And then there’s the Wikimedia Foundation (WMF) design team. Each member of the design team has their own product team they belong to, so involvement, for the most part, happens via design reviews. There was a bunch of overlap, particularly between the work we were doing and the stuff the editing team worked on, so I got to collaborate closely with that designer. Also, each designer is assigned a design mentor. So, Rita, who is my design mentor — and who also happens to be an incredible designer and person — was behind the scenes all along, helping me figure everything out.

To me, the whole process felt pretty inclusive. A lot of the time, it felt like the process and the conversations were guiding things more than any one individual, which is both cool and a little scary.

Geoff: Wikipedia has been used to study online text legibility (PDF) because of its heavy focus on content. Yet, there have been so many advances in web fonts and typography since the last significant Wikipedia redesign in 2004, from variable font formats and fluid typography to even newer stuff in CSS from this past year, like the super new text-wrap: balance and a new line height (lh) unit. What design considerations went into the text in the latest redesign?

Alex: As far as I understand, there was a typography refresh back in 2014, which succeeded in some ways but was also super contentious. In terms of design ownership, there’s an unwritten understanding that the volunteer community owns the content, and WMF owns the interface. And while the typography is clearly a fundamental part of the overall user experience of the site, it’s definitely on the content side of the content-interface divide, which makes it more difficult for us to work on.

Prior to this project, a lot of great work had already been done by the Design Systems Team regarding the font stack (which is critical, given all of the different language editions of Wikipedia), how the type sizing is declared (which has a big impact on the experience if you manually change the font size), and other things like that.

For this project, from a sort of 80/20 perspective, I think 80% of the room for improvement was managing the line length by adding a max-width, and increasing the base font-size value (which is hopefully coming soon). We did spend a bunch of time looking into other refinements that are forthcoming.

Jon: I actually worked on that typography refresh early in my career at the Wikimedia Foundation. It was contentious for two reasons: we added a limited container width for the content, and we used Helvetica Neue for the font. The latter was a problem due to the “open source” nature of the project, which the community felt strongly about. We compromised by preferring an open font when available, which I think was Linux Libertine at the time.

That project was a lot shorter in terms of time, and we had more important problems to solve, such as having a functioning mobile site and a WYSIWYG editor. So, no compromise could be found on the limited width front. But I was glad we finally got that in with this redesign, even if it came eight years later. Free knowledge is more a marathon than a sprint.

Alex: I do think it’s ironic that Wikipedia, one of the most popular text-based websites on the internet, doesn’t necessarily have a super strong typography practice, at least from a design perspective. Maybe a lot of that has to do with how varied the content is, how many different templates we have, and all of the different languages we need to support. Maybe it would have to almost be a language-by-language endeavor if we were ever to pull it off. I’m not sure.

Editor’s Note: The main discussion and prototype for the project’s typography efforts are available to view.

Geoff: Speaking of the differences in web design since 2004, the term “responsive web design” was also coined in that span of time. Wikipedia has no doubt had a mobile presence for some time, but were there any new challenges to make the site more responsive, given how best practices have evolved?

Alex: We set a soft goal of delivering a great experience down to a 500px browser width. I think it’s fairly uncommon for people to be using desktop or laptop devices with browsers that narrow. But these days, it’s pretty easy to achieve a fully-responsive site with CSS alone, so there didn’t seem to be much of a tradeoff there. Plus, we heard from a few editors that they often tile two or three browser windows side-by-side, so it can get narrow in those cases. The updated interface does feature three menus that can be pinned open as sidebars or collapsed as dropdowns, which is a configuration mainly for logged-in users in order to give them more control over their workstations. And the state of those menus is managed by JavaScript, which presented a slight challenge. Jon wrote a great article a few years ago about why we still have separate mobile and desktop sites.

I think another aspect of making things work well down to 500px was that we wanted to push ourselves to see how close we might be able to get to having one site for all devices, though we’re not quite there yet.

Jon: If I remember correctly, Alex and I had a good back-and-forth about that 500px threshold. In theory, we could have supported a breakpoint below that, and Alex had the mockups ready, but I was concerned that it would slow down development. Plus, the use case was not there as most of our users were resizing browsers, and we could back that up with data.

In fact, during the redesign, vocal members of our community pushed us to introduce an explicit viewport size in our markup because they were annoyed that the table of contents component was collapsing inconsistently in browsers. If you view the source, you’ll now see <meta name="viewport" content="width=1000">.

Note: You can even read the entire discussion about the change.

Geoff: I know front-end nerds will want to know how CSS is written and managed in this latest design because, well, I’m one of them! What does the process look like to make an edit to the styles?

Jon: You have to remember that Wikipedia — and the MediaWiki software that provides it — is quite old and very large, and some of our technology stack reflects that.

MediaWiki is primarily a progressively enhanced web page written in PHP, so we tend to ship HTML with vanilla JavaScript and CSS that enhances it. Our front end is really unusual in that we have no build scripts for our JavaScript and CSS. We write ES6 code without transpiling it, and we use LESS compiled at runtime in PHP, with heavy caching, for our CSS. HTML is provided by Mustache templates.

We are very conservative about what libraries and technologies we use, particularly if they are likely to have an impact on others in the stack. We use TypeScript in the project to validate our code using JSDoc blocks but do not write our code in TypeScript as many of our volunteers do not know the language, and we don’t want to alienate them.
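
Note: For readers unfamiliar with the approach Jon describes, here is a minimal, hypothetical sketch (not taken from the MediaWiki codebase) of how JSDoc annotations let TypeScript validate plain JavaScript without transpiling it:

// A hypothetical example of JSDoc-typed JavaScript, checked by TypeScript.
// @ts-check

/**
 * Collapse or expand a pinnable menu.
 *
 * @param {HTMLElement} menu The menu container element.
 * @param {boolean} pinned Whether the menu should be pinned open.
 * @return {void}
 */
function setMenuPinned(menu, pinned) {
  menu.classList.toggle("is-pinned", pinned);
}

// Running `tsc --allowJs --checkJs --noEmit` reports a type error for a call
// like setMenuPinned(document.body, "yes") without transpiling any code.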

There was talk about replacing LESS with a different CSS preprocessor, but we decided to retain the status quo we’ve used since 2013 because we don’t want to fragment our codebase. We currently use Mustache templates because that’s what we’ve used since 2014, but we hope to eventually phase those out and replace them with Vue.js templates.

All our code is open-sourced, which is pretty unusual and cool! So, if you ever see some visual thing that looks off or could be improved, we’re always happy to take PRs with CSS that fix it.

Geoff: Another nerdy but key question for you: how important were performance considerations to the redesign? What specific things do you look for in Wikipedia’s performance, and what tools do you use to measure them?

Jon: Performance is really important to us, as Wikipedia is global, and we have many projects growing in areas with slower internet connections. We have a performance dashboard that we monitor where we collect global data from our users using the NavigationTiming API. And we run automated synthetic performance tests using Sitespeed.io. This is all public, and anyone can dig into the data!

One of the biggest concerns for this redesign project was how replacing the internal search feature might lose users if it became too slow or unresponsive. We added instrumentation specifically designed to monitor this, and there’s a detailed write-up on how we analyzed the findings with synthetic performance tests.

Besides thinking about performance for specific features, we monitor bundle sizes of our render-blocking CSS assets, and our CI pipeline blocks anything that goes over our performance budget. We also run spikes to see if there are additional ways to improve performance. For example, in a quiet period, we ran a spike, which made our mobile site 300ms faster.

Given that we have hundreds of volunteers and staff collaborating on the codebase, it’s a challenge to uphold our own high-performance standards. We’re currently working on implementing a performance budget across all our projects to formally enforce this and share the knowledge more widely for everyone to reference.

Geoff: Alex, you’ve noted that one of the goals you defined for the project was to “develop a more flexible interface with an eye towards future features.” What makes the new interface more flexible compared to how it was before, and what future features are you anticipating?

Alex: A small example of a new feature is the sticky header, which is currently only available when you are logged into the site. We built it knowing that for different types of pages, like article pages versus discussion pages versus help pages, et cetera, we would want to put different types of tools in the sticky header. That forethought can save a lot of time and complexity in terms of development.

Another aspect of flexibility, or maybe more specifically, extensibility, is information architecture. The previous interface had two different places for page tools: in the sidebar menu on the left and then above the article title. So, whenever we worked on a new page tools feature, we had to decide where it would go. Creating a clearer and more structured information architecture for the site means there’s one place for page tools, one for global navigation, and so on. I think this will make it easier for us to design new features in the future.

In terms of future features, we’re thinking about reading settings: dark mode, the ability to increase and decrease the font size and line height more easily, and maybe even themes like the Wikipedia apps have. We’re also thinking about ways to help people discover more knowledge related to what they are reading. Other things we might consider are reading features, like the ability to take notes and create collections of articles.

Geoff: Thanks so much to you both for spending some time to share your work with us! Is there anything especially interesting about the design or the work it took to make it that might not be immediately obvious but that you are proud of?

Alex: I think it’s cool to think about super small things that have a big impact. Links are a critical part of the reading experience, and following from that, knowing which links you’ve already visited is important. Previously, there was so little contrast between visited links and black text that this whole sort of navigational wayfinding benefit was missing from the experience. Changing the color of visited links was about as simple as a change can be from a technical perspective, with an outsized impact on the user experience.

Another thing I’m interested in and excited about is prototyping, specifically how additional fidelity in prototypes affects the design process. I reached a point where I was predominantly making prototypes with HTML, CSS, and JavaScript to work through design challenges rather than relying on mockups. It’s maybe impossible to know what impact that had in terms of the ability for us to have discussions about the designs, evaluate them, and include community members across many languages, among other things. There’s no way for us to know how the project would have turned out or how much longer it would have taken us to arrive at certain decisions if I hadn’t taken that approach, but my inclination is that it was super helpful.

Jon: The thing I’m most excited about is that the redesign project gave us the time to really pull apart a system that was 21 years old and build the foundation for something more sustainable. Fundamental things like introducing design tokens across the entire software stack are going to be powerful tools that we can use to support user customizations that allow people to change font size and enable a dark mode, the latter of which has been a popular request. So hopefully, we can finally deliver that.
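
Note: As a rough illustration of the design tokens Jon mentions (a generic sketch, not the actual MediaWiki tokens), tokens are often exposed as CSS custom properties so that a user setting such as dark mode or a larger font size only needs to re-map a handful of values:

/* Hypothetical tokens, for illustration only */
:root {
  --color-base: #202122;
  --color-surface: #ffffff;
  --font-size-base: 1rem;
}

/* A dark theme only re-maps the tokens */
.theme-dark {
  --color-base: #eaecf0;
  --color-surface: #101418;
}

body {
  color: var(--color-base);
  background-color: var(--color-surface);
  font-size: var(--font-size-base);
}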

]]>
hello@smashingmagazine.com (Geoff Graham)
<![CDATA[Penpot’s Flex Layout: Building CSS Layouts In A Design Tool]]> https://smashingmagazine.com/2023/06/penpot-flex-layout-building-css-layouts-design-tool/ https://smashingmagazine.com/2023/06/penpot-flex-layout-building-css-layouts-design-tool/ Fri, 23 Jun 2023 13:00:00 GMT This article is a sponsored by Penpot

Among design tools, Penpot holds a special place. It is an open-source design tool, meant for designers and developers to work together and help them speak the same language. It’s also the first design tool out there to be fully open-source and based on open web standards.

That makes it a perfect choice for designers and developers working closely together, as Penpot’s approach can radically improve design-to-development processes, making them more seamless and faster.

As open-source software, Penpot also evolves blazingly fast, fueled by the support of the community. When I was first writing about Penpot a few months ago, I shared my excitement about the app’s layout features that finally bring parity between design and code and follow the same rules as CSS does. Since then, the team behind Penpot has made creating layouts even better, so they deserve another look. I really enjoyed playing with Penpot’s new features, and I believe you might want to give them a try too.

Designing Layouts Done Right

If you ever wrote or read CSS code, there are high chances you have already stumbled upon Flexbox. It’s a cornerstone of building layouts for the modern web, and quite likely, every single website you visit on an everyday basis uses it.

Flexbox is the bread and butter of creating simple, flexible layouts. It’s the most common way of positioning elements: stacking them in rows and columns and deciding how they are supposed to be aligned and distributed.
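
If you haven’t written it yourself before, this is roughly what that looks like in a hand-written stylesheet (a minimal example for orientation, not code generated by Penpot):

/* A row of items, centered on the cross axis, spread out on the main axis */
.toolbar {
  display: flex;
  flex-direction: row; /* use column to stack vertically instead */
  align-items: center;
  justify-content: space-between;
  gap: 1rem;
}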

Therefore, creating Flexbox layouts is a vital part of most web hand-off processes, and it is often time-consuming and a source of friction between design and development. Usually, developers try to translate static mockups into code by rebuilding layouts made by designers from scratch. As most designers don’t write CSS code and most design tools follow a different logic than CSS does, lots can go wrong or get lost in translation.

This is where Penpot’s Flex Layout comes into play. Layouts built in Penpot don’t need tedious translating into code. Even though designers can build them using a familiar visual interface, they come as production-ready code out-of-the-box. And even if they need tweaking, they can still save developers plenty of time and guesswork as they follow a logic that is already familiar and understandable to them.

So at the bottom line, it benefits everyone. It’s less work for developers as they get the code they need straight away. It’s better for designers as they have finer control over the final effect and a better understanding of the technologies they are designing for. And finally, it’s good for business as it saves everyone’s time.

All of that without making the designer’s job an inch harder or forcing them to write a single line of code. Now, let’s take a look at what building designs with Flex Layout looks like in practice!

Getting Started With Flex Layout

As mentioned before, Flexbox can be understood as a toolkit for building layout and positioning elements.

Each Flex Layout is generally an array, a list of elements. This list can be sorted from left to right, right to left, top to bottom, or bottom to top.

Flex Layout allows you to control how elements in these lists are aligned against each other.

You can also control how elements are laid out within containers.

Flex layouts can wrap into multiple lines too. You can also nest them indefinitely to create as complex layouts as you wish.

And that’s just the beginning. There are many more options to explore. As you can see, Flex Layout gives you many more possibilities and greater precision than most design tools do. Creating with it is not only a better process but a more powerful one.

To explore all the possible features of Flex Layout, Penpot’s team created a comprehensive Playground template for you to try. If you don’t have a Penpot account yet, go ahead and create one now. Then, duplicate the file and try to play with it yourself! The file will take you on a journey through each and every Flex layout feature, with clear examples and definitions, so you can start building complex, robust layouts in no time.

Building An Example Together

To give you an even better understanding of what working with Flex Layout is in practice, let’s look at a practical example. In the next few steps, we will dig into the structure of this little mockup and rebuild each and every part of it with Flex Layout.

For the first elements, we can use Flex Layout for our buttons. With a few clicks, we can make sure they are responsive to the size of the icon and the label inside, and set paddings and distances between the children elements.

We can also use Flex Layout for the avatars stack. To make the images overlap, a negative gap between the elements does the trick. We also have full control over the order of elements. We can lay out the stack in any direction. We can also control the stacking order of each element individually. That’s thanks to Penpot’s support for z-index, another useful CSS property.
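
As a point of reference, hand-written CSS doesn’t accept a negative gap value, so this pattern is usually expressed with negative margins plus z-index. Here is a rough sketch of the idea (illustrative only, not Penpot’s actual output):

.avatar-stack {
  display: flex;
}

.avatar-stack img {
  margin-left: -0.5rem; /* overlap the avatars instead of spacing them apart */
  border-radius: 50%;
}

.avatar-stack img:first-child {
  margin-left: 0;
}

.avatar-stack img:hover {
  z-index: 1; /* lift the hovered avatar above its siblings */
}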

Flex layouts can be nested, creating more complex layouts and dependencies. In this case, we’ll create a separate Flex Layout for the navbar and another for the tiles grid below.

Remember that elements in Flex layouts can be wrapped? Let’s see this in action. In this case, we can create a flexible multi-dimensional layout of elements that’s responsive to the parent container and fills it with blocks both vertically and horizontally, just as CSS would do.
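
In CSS terms, that behavior is what flex-wrap and flexible item sizes give you. Here is a small hand-written sketch of the same idea (not Penpot’s generated output):

.tile-grid {
  display: flex;
  flex-wrap: wrap; /* let tiles flow onto new lines when the container gets narrow */
  gap: 1rem;
}

.tile-grid > .tile {
  flex: 1 1 200px; /* grow and shrink from a 200px base, filling each row */
}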

But what if some of the elements don’t belong to the grid? Alongside Flexbox, Penpot provides support for absolute positioning. This means that any element can be pulled out of the Flex Layout so it still lives in the same container but ignores the layout rules. That’s exactly what we need for the ‘Edit’ button.

Eventually, we can transform the whole board into a Flex Layout. Now, we have a design that is not only easy to work with and edit but is also fully flexible. Wondering how your design would perform on a smaller or bigger screen? All you have to do is to resize the board.

Next Steps

If you’d like to take a look at the source file of the layout we’ve just built, go ahead and duplicate this file.

To dig deeper and learn more about how to use Flex Layout, don’t forget to try the Flex Layout template.

In case you get stuck or have some questions, Penpot Community would be the best place to look for help.

There is also a great video tutorial that explains how designers and developers can work together using Flex Layout.

Summary

As you can see, with Flex Layout, the possibilities for structuring your designs are endless. I believe that features like this are a welcome change in the design tools scene and a shift in the right direction, helping designers take more control over their work and helping developers work as efficiently as possible.

Coming Soon: Support For CSS Grid

Maybe you’re now thinking the same as I am: CSS layouts are not only Flexbox, are they? If you work with CSS, you probably know that Flexbox alone is not enough. More complex layouts are often better built using CSS Grid. Flexbox and Grid work best when combined, creating precise yet complex and fully responsive websites.

Penpot doesn’t support CSS Grid just yet, but that is about to change! You can learn more about it at the upcoming Penpot Fest. During the event, Penpot’s team will share their plan and a demo of the upcoming Grid Layout feature. Don’t hesitate to join (virtually or in person), if you’d like to learn more about the next steps for Penpot.

]]>
hello@smashingmagazine.com (Mikołaj Dobrucki)
<![CDATA[Using AI To Detect Sentiment In Audio Files]]> https://smashingmagazine.com/2023/06/ai-detect-sentiment-audio-files/ https://smashingmagazine.com/2023/06/ai-detect-sentiment-audio-files/ Thu, 22 Jun 2023 08:00:00 GMT I don’t know if you’ve ever used Grammarly’s service for writing and editing content. But if you have, then you no doubt have seen the feature that detects the tone of your writing.

It’s an extremely helpful tool! It can be hard to know how something you write might be perceived by others, and this can help affirm or correct you. Sure, it’s some algorithm doing the work, and we know that not all AI-driven stuff is perfectly accurate. But as a gut check, it’s really useful.

Now imagine being able to do the same thing with audio files. How neat would it be to understand the underlying sentiments captured in audio recordings? Podcasters especially could stand to benefit from a tool like that, not to mention customer service teams and many other fields.

An audio sentiment analysis has the potential to transform the way we interact with data.

That’s what we are going to accomplish in this article.

The idea is fairly straightforward:

  • Upload an audio file.
  • Convert the content from speech to text.
  • Generate a score that indicates the type of sentiment it communicates.

But how do we actually build an interface that does all that? I’m going to introduce you to three tools and show how they work together to create an audio sentiment analyzer.

But First: Why Audio Sentiment Analysis?

By harnessing the capabilities of an audio sentiment analysis tool, developers and data professionals can uncover valuable insights from audio recordings, revolutionizing the way we interpret emotions and sentiments in the digital age. Customer service, for example, is crucial for businesses aiming to deliver personable experiences. We can surpass the limitations of text-based analysis to get a better idea of the feelings communicated by verbal exchanges in a variety of settings, including:

  • Call centers
    Call center agents can gain real-time insights into customer sentiment, enabling them to provide personalized and empathetic support.
  • Voice assistants
    Companies can improve their natural language processing algorithms to deliver more accurate responses to customer questions.
  • Surveys
    Organizations can gain valuable insights and understand customer satisfaction levels, identify areas of improvement, and make data-driven decisions to enhance overall customer experience.

And that is just the tip of the iceberg for one industry. Audio sentiment analysis offers valuable insights across various industries. Consider healthcare as another example. Audio analysis could enhance patient care and improve doctor-patient interactions. Healthcare providers can gain a deeper understanding of patient feedback, identify areas for improvement, and optimize the overall patient experience.

Market research is another area that could benefit from audio analysis. Researchers can leverage sentiments to gain valuable insights into a target audience’s reactions that could be used in everything from competitor analyses to brand refreshes with the use of audio speech data from interviews, focus groups, or even social media interactions where audio is used.

I can also see audio analysis being used in the design process. Like, instead of asking stakeholders to write responses, how about asking them to record their verbal reactions and running those through an audio analysis tool? The possibilities are endless!

The Technical Foundations Of Audio Sentiment Analysis

Let’s explore the technical foundations that underpin audio sentiment analysis. We will delve into machine learning for natural language processing (NLP) tasks and look into Streamlit as a web application framework. These essential components lay the groundwork for the audio analyzer we’re making.

Natural Language Processing

In our project, we leverage the Hugging Face Transformers library, a crucial component of our development toolkit. Developed by Hugging Face, the Transformers library equips developers with a vast collection of pre-trained models and advanced techniques, enabling them to extract valuable insights from audio data.

With Transformers, we can supply our audio analyzer with the ability to classify text, recognize named entities, answer questions, summarize text, translate, and generate text. Most notably, it also provides speech recognition and audio classification capabilities. Basically, we get an API that taps into pre-trained models so that our AI tool has a starting point rather than us having to train it ourselves.
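
To give you a feel for how little code that requires, here is a minimal example of the pipeline API. (The default sentiment model is downloaded on the first run, and the exact score will vary.)

from transformers import pipeline

# Load a pre-trained sentiment analysis pipeline
classifier = pipeline("sentiment-analysis")

result = classifier("I really enjoyed reading this article!")
print(result)
# Something like: [{'label': 'POSITIVE', 'score': 0.9998}]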

UI Framework And Deployments

Streamlit is a web framework that simplifies the process of building interactive data applications. What I like about it is that it provides a set of predefined components that work well from the command line with the rest of the tools we’re using for the audio analyzer, not to mention we can deploy directly to their service to preview our work. It’s not required, as there may be other frameworks you are more familiar with.

Building The App

Now that we’ve established the two core components of our technical foundation, we will next explore implementation, such as

  1. Setting up the development environment,
  2. Performing sentiment analysis,
  3. Integrating speech recognition,
  4. Building the user interface, and
  5. Deploying the app.

Initial Setup

We begin by importing the libraries we need:

import os
import traceback
import streamlit as st
import speech_recognition as sr
from transformers import pipeline

We import os for system operations, traceback for error handling, streamlit (st) as our UI framework and for deployments, speech_recognition (sr) for audio transcription, and pipeline from Transformers to perform sentiment analysis using pre-trained models.

The project folder can be a pretty simple single directory with the following files:

  • app.py: The main script file for the Streamlit application.
  • requirements.txt: File specifying project dependencies (a sample is shown right after this list).
  • README.md: Documentation file providing an overview of the project.
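
For reference, a minimal requirements.txt for this stack could look something like the following. The package names match the imports above; exact versions are up to you, and torch is listed because the Transformers pipeline needs a backend to run the model.

streamlit
SpeechRecognition
transformers
torch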

Creating The User Interface

Next, we set up the layout, courtesy of Streamlit’s framework. We can create a spacious UI by calling a wide layout:

st.set_page_config(layout="wide")

This ensures that the user interface provides ample space for displaying results and interacting with the tool.

Now let’s add some elements to the page using Streamlit’s functions. We can add a title and write some text:

# app.py
st.title("🎧 Audio Analysis 📝")
st.write("[Joas](https://huggingface.co/Pontonkid)")

I’d like to add a sidebar to the layout that can hold a description of the app as well as the form control for uploading an audio file. We’ll use the main area of the layout to display the audio transcription and sentiment score.

Here’s how we add a sidebar with Streamlit:

# app.py
st.sidebar.title("Audio Analysis")
st.sidebar.write("The Audio Analysis app is a powerful tool that allows you to analyze audio files and gain valuable insights from them. It combines speech recognition and sentiment analysis techniques to transcribe the audio and determine the sentiment expressed within it.")

And here’s how we add the form control for uploading an audio file:

# app.py
st.sidebar.header("Upload Audio")
audio_file = st.sidebar.file_uploader("Browse", type=["wav"])
upload_button = st.sidebar.button("Upload")

Notice that I’ve set up the file_uploader() so it only accepts WAV audio files. That’s just a preference, and you can specify the exact types of files you want to support. Also, notice how I added an Upload button to initiate the upload process.

Analyzing Audio Files

Here’s the fun part, where we get to extract text from an audio file, analyze it, and calculate a score that measures the sentiment level of what is said in the audio.

The plan is the following:

  1. Configure the tool to utilize a pre-trained NLP model fetched from the Hugging Face models hub.
  2. Integrate Transformers’ pipeline to perform sentiment analysis on the transcribed text.
  3. Print the transcribed text.
  4. Return a score based on the analysis of the text.

In the first step, we configure the tool to leverage a pre-trained model:

# app.py
def perform_sentiment_analysis(text):
  model_name = "distilbert-base-uncased-finetuned-sst-2-english"

This points to a model in the hub called DistilBERT. I like it because it’s focused on text classification and is pretty lightweight compared to some other models, making it ideal for a tutorial like this. But there are plenty of other models available in Transformers out there to consider.

Now we integrate the pipeline() function that does the sentiment analysis:

# app.py
def perform_sentiment_analysis(text):
  model_name = "distilbert-base-uncased-finetuned-sst-2-english"
  sentiment_analysis = pipeline("sentiment-analysis", model=model_name)

We’ve set that up to perform a sentiment analysis based on the DistilBERT model we’re using.

Next up, define a variable for the text that we get back from the analysis:

# app.py
def perform_sentiment_analysis(text):
  model_name = "distilbert-base-uncased-finetuned-sst-2-english"
  sentiment_analysis = pipeline("sentiment-analysis", model=model_name)
  results = sentiment_analysis(text)

From there, we’ll assign variables for the score label and the score itself before returning it for use:

# app.py
def perform_sentiment_analysis(text):
  model_name = "distilbert-base-uncased-finetuned-sst-2-english"
  sentiment_analysis = pipeline("sentiment-analysis", model=model_name)
  results = sentiment_analysis(text)
  sentiment_label = results[0]['label']
  sentiment_score = results[0]['score']
  return sentiment_label, sentiment_score

That’s our complete perform_sentiment_analysis() function!

Transcribing Audio Files

Next, we’re going to transcribe the content in the audio file into plain text. We’ll do that by defining a transcribe_audio() function that uses the speech_recognition library to transcribe the uploaded audio file:

# app.py
def transcribe_audio(audio_file):
  r = sr.Recognizer()
  with sr.AudioFile(audio_file) as source:
    audio = r.record(source)
    transcribed_text = r.recognize_google(audio)
  return transcribed_text

We initialize a recognizer object (r) from the speech_recognition library and open the uploaded audio file using the AudioFile function. We then record the audio using r.record(source). Finally, we use the Google Speech Recognition API through r.recognize_google(audio) to transcribe the audio and obtain the transcribed text.

In a main() function, we first check if an audio file is uploaded and the upload button is clicked. If both conditions are met, we proceed with audio transcription and sentiment analysis.

# app.py
def main():
  if audio_file and upload_button:
    try:
      transcribed_text = transcribe_audio(audio_file)
      sentiment_label, sentiment_score = perform_sentiment_analysis(transcribed_text)

Integrating Data With The UI

We have everything we need to display a sentiment analysis for an audio file in our app’s interface. We have the file uploader, a pre-trained language model for the analysis, a function for transcribing the audio into text, and a way to return a score. All we need to do now is hook it up to the app!

What I’m going to do is set up two headers and a text area from Streamlit, as well as variables for icons that represent the sentiment score results:

# app.py
st.header("Transcribed Text")
st.text_area("Transcribed Text", transcribed_text, height=200)
st.header("Sentiment Analysis")
negative_icon = "👎"
neutral_icon = "😐"
positive_icon = "👍"

Let’s use conditional statements to display the sentiment score based on which label corresponds to the returned result. If the returned label doesn’t match a given sentiment, we use st.empty() to leave that section blank.

# app.py
if sentiment_label == "NEGATIVE":
  st.write(f"{negative_icon} Negative (Score: {sentiment_score})", unsafe_allow_html=True)
else:
  st.empty()

if sentiment_label == "NEUTRAL":
  st.write(f"{neutral_icon} Neutral (Score: {sentiment_score})", unsafe_allow_html=True)
else:
  st.empty()

if sentiment_label == "POSITIVE":
  st.write(f"{positive_icon} Positive (Score: {sentiment_score})", unsafe_allow_html=True)
else:
  st.empty()

Streamlit has a handy st.info() element for displaying informational messages and statuses. Let’s tap into that to display an explanation of the sentiment score results:

# app.py
st.info(
  "The sentiment score measures how strongly positive, negative, or neutral the feelings or opinions are. "
  "A higher score indicates a positive sentiment, while a lower score indicates a negative sentiment."
)

We should account for error handling, right? If any exceptions occur during the audio transcription and sentiment analysis processes, they are caught in an except block. We display an error message using Streamlit’s st.error() function to inform users about the issue, and we also print the exception traceback using traceback.print_exc():

# app.py
except Exception as ex:
  st.error("Error occurred during audio transcription and sentiment analysis.")
  st.error(str(ex))
  traceback.print_exc()

This code block ensures that the app’s main() function is executed when the script is run as the main program:

# app.py
if __name__ == "__main__": main()

It’s common practice to wrap the execution of the main logic within this condition to prevent it from being executed when the script is imported as a module.

Deployments And Hosting

Now that we have successfully built our audio sentiment analysis tool, it’s time to deploy it and publish it live. For convenience, I am using the Streamlit Community Cloud for deployments since I’m already using Streamlit as a UI framework. That said, I do think it is a fantastic platform because it’s free and allows you to share your apps pretty easily.

But before we proceed, there are a few prerequisites:

  • GitHub account
    If you don’t already have one, create a GitHub account. GitHub will serve as our code repository that connects to the Streamlit Community Cloud. This is where Streamlit gets the app files to serve.
  • Streamlit Community Cloud account
    Sign up for a Streamlit Cloud so you can deploy to the cloud.

Once you have your accounts set up, it’s time to dive into the deployment process:

  1. Create a GitHub repository.
    Create a new repository on GitHub. This repository will serve as a central hub for managing and collaborating on the codebase.
  2. Create the Streamlit application.
    Log into Streamlit Community Cloud and create a new application project, providing details like the name and pointing the app to the GitHub repository with the app files.
  3. Configure deployment settings.
    Customize the deployment environment by specifying a Python version and defining environment variables.

That’s it! From here, Streamlit will automatically build and deploy our application when new changes are pushed to the main branch of the GitHub repository. You can see a working example of the audio analyzer I created: Live Demo.

Conclusion

There you have it! You have successfully built and deployed an app that recognizes speech in audio files, transcribes that speech into text, analyzes the text, and assigns a score that indicates whether the overall sentiment of the speech is positive or negative.

We used a tech stack that only consists of a language model (Transformers) and a UI framework (Streamlit) that has integrated deployment and hosting capabilities. That’s really all we needed to pull everything together!

So, what’s next? Imagine capturing sentiments in real time. That could open up new avenues for instant insights and dynamic applications. It’s an exciting opportunity to push the boundaries and take this audio sentiment analysis experiment to the next level.

Further Reading on Smashing Magazine

]]>
hello@smashingmagazine.com (Joas Pambou)
<![CDATA[Visual Editing Comes To The Headless CMS]]> https://smashingmagazine.com/2023/06/visual-editing-headless-cms/ https://smashingmagazine.com/2023/06/visual-editing-headless-cms/ Tue, 20 Jun 2023 09:00:00 GMT A couple of years ago, my friend Maria asked me to build a website for her architecture firm. For projects like this, I would normally use a headless content management system (CMS) and build a custom front end, but this time I advised her to use a site builder like Squarespace or Wix.

Why a site builder? Because Maria is a highly visual and creative person and I knew she would want everything to look just right. She needed the visual feedback loop of a site builder and Squarespace and Wix are two of the most substantial offerings in the visual editing space.

In my experience, content creators like Maria are much more productive when they can see their edits reflected on their site in a live preview. The problem is that visual editing has traditionally been supported only by site-builders, and they are often of the “low” or “no” code varieties. Visual editing just hasn’t been the sort of thing you see on a more modern stack, like a headless CMS.

Fortunately, this visual editing experience is starting to make its way to headless CMSs! And that’s what I want to do in this brief article: introduce you to headless CMSs that currently offer visual editing features.

But first…

What Is Visual Editing, Again?

Visual editing has been around since the early days of the web. Anyone who has used Dreamweaver in the past probably experienced an early version of visual editing.

Visual editing is when you can see a live preview of your site while you’re editing content. It gives the content creator an instantaneous visual feedback loop and shows their changes in the context of their site.

There are two defining features of visual editing:

  • A live preview so content creators can see their changes reflected in the context of their site.
  • Clickable page elements in the preview so content creators can easily navigate to the right form fields.

Visual editing has been standard among no-code and low-code site-builders like Squarespace, Wix, and Webflow. But those tools are not typically used by developers who want control over their tech stack. Fortunately, now we’re seeing visual editing come to headless CMSs.

Visual Editing In A Headless CMS

A headless CMS treats your content more like a database that's decoupled from the rendering of your site.

Until recently, headless CMSs came with a big tradeoff: content creators are disconnected from the front end, making it difficult to preview their site. They can't see updates as they make them.

A typical headless CMS interface just provides form fields for editing content. This lacks the context of what content looks like on the page. This UX can feel archaic to people who are familiar with real-time editing experiences in tools like Google Docs, Wix, Webflow, or Notion.

Fortunately, a new wave of headless CMSs is offering visual editing in a way that makes sense to developers. This is great news for anyone who wants to empower their team with an editing experience similar to Wix or Squarespace but on top of their own open-source stack.

Let’s compare the CMS editing experience with and without visual editing on the homepage of Roev.com.

You can see that the instant feedback from the live preview combined with the ability to click elements on the page makes the visual editing experience much more intuitive. The improvements are even more dramatic when content is nested deep inside sections on the page, making it hard to locate without clicking on the page elements.

Headless CMSs That Support Visual Editing

Many popular headless CMS offerings currently support visual editing. Let’s look at a few of the more popular options.

Tina

TinaCMS was built from the ground up for visual editing but also offers a “basic editing” mode that’s similar to traditional CMSs. Tina has an open-source admin interface and headless content API that stays synced with files in your Git repository (such as Markdown and JSON).

Storyblok

Storyblok is a headless CMS that was an early pioneer in visual editing. Storyblok stores your content in its database and makes it available via REST and GraphQL APIs.

Sanity.io (via their iframe plugin)

Sanity is a traditional headless CMS with an open-source admin interface. It supports visual editing through the use of its Iframe Pane plugin. Sanity stores your content in its database and makes it available via API.

Builder.io

Builder.io is a closed-source, visual-editing-first headless CMS that stores content in Builder.io’s database and makes it available via API.

Stackbit

Stackbit is a closed-source editing interface that’s designed to be complementary to other headless CMSs. With Stackbit, you can use another headless CMS to store your content and visually edit that content with Stackbit.

Vercel

Although it’s not a CMS, Vercel’s Deploy Previews can show an edit button in the toolbar. This edit button overlays a UI that helps content creators quickly navigate to the correct location in the CMS.

Conclusion

Now that developers are adding visual editing to their sites, I’m seeing content creators like Maria become super productive on a developer-first stack. Teams that were slow to update content before switching to visual editing are now more active and efficient.

There are many great options to build visual editing experiences without compromising developer-control and extensibility. The promise of Dreamweaver is finally here!

]]>
hello@smashingmagazine.com (Scott Gallant)
<![CDATA[Gatsby Headaches And How To Cure Them: i18n (Part 2)]]> https://smashingmagazine.com/2023/06/gatsby-headaches-i18n-part-2/ https://smashingmagazine.com/2023/06/gatsby-headaches-i18n-part-2/ Mon, 19 Jun 2023 17:00:00 GMT In Part 1 of this series, we peeked at how to add i18n to a Gatsby blog using a motley set of Gatsby plugins. They are great if you know what they can do, how to use them, and how they work. Still, plugins don’t always work great together since they are often written by different developers, which can introduce compatibility issues and cause an even bigger headache. Besides, we usually use plugins for more than i18n since we also want to add features like responsive images, Markdown support, themes, CMSs, and so on, which can lead to a whole compatibility nightmare if they aren’t properly supported.

How can we solve this? Well, when working with an incompatible, or even an old, plugin, the best solution often involves finding another plugin, hopefully one that provides better support for what is needed. Otherwise, you could find yourself editing the plugin’s original code to make it work (an indicator that you are in a bad place because it can introduce breaking changes), and unless you want to collaborate on the plugin’s codebase with the developers who wrote it, it likely won’t be a permanent solution.

But there is another option!

Table of Contents

Note: Here is the Live Demo.

The Solution: Make Your Own Plugin!

Sure, that might sound intimidating, but adding i18n from scratch to your blog is not so bad once you get down to it. Plus, you gain complete control over compatibility and how it is implemented. That’s exactly what we are going to do in this article, specifically by adding i18n to the starter site — a cooking blog — that we created together in Part 1.

The Starter

You can go ahead and see how we made our cooking blog starter in Part 1 or get it from GitHub.

This starter includes a homepage, blog post pages created from Markdown files, and blog posts authored in English and Spanish.

What we will do is add the following things to the site:

  • Localized routes for the home and blog posts,
  • A locale selector,
  • Translations,
  • Date formatting.

Let’s go through each one together.

Create Localized Routes

First, we will need to create a localized route for each locale, i.e., route our English pages to paths with a /en/ prefix and the Spanish pages to a path with a /es/ prefix. So, for example, a path like my-site.com/recipes/mac-and-cheese/ will be replaced with localized routes, like my-site.com/en/recipes/mac-and-cheese/ for English and my-site.com/es/recipes/mac-and-cheese/ for Spanish.

In Part 1, we used the gatsby-theme-i18n plugin to automatically add localized routes for each page, and it worked perfectly. However, to make our own version, we first must know what happens underneath the hood of that plugin.

What gatsby-theme-i18n does is modify the createPages process to create a localized version of each page. However, what exactly is createPages?

How Plugins Create Pages

When running npm run build in a fresh Gatsby site, you will see in the terminal what Gatsby is doing, and it looks something like this:

success open and validate gatsby-configs - 0.062 s
success load plugins - 0.915 s
success onPreInit - 0.021 s
success delete html and css files from previous builds - 0.030 s
success initialize cache - 0.034 s
success copy gatsby files - 0.099 s
success onPreBootstrap - 0.034 s
success source and transform nodes - 0.121 s
success Add explicit types - 0.025 s
success Add inferred types - 0.144 s
success Processing types - 0.110 s
success building schema - 0.365 s
success createPages - 0.016 s
success createPagesStatefully - 0.079 s
success onPreExtractQueries - 0.025 s
success update schema - 0.041 s
success extract queries from components - 0.333 s
success write out requires - 0.020 s
success write out redirect data - 0.019 s
success Build manifest and related icons - 0.141 s
success onPostBootstrap - 0.164 s
⠀
info bootstrap finished - 6.932 s
⠀
success run static queries - 0.166 s — 3/3 20.90 queries/second
success Generating image thumbnails — 6/6 - 1.059 s
success Building production JavaScript and CSS bundles - 8.050 s
success Rewriting compilation hashes - 0.021 s
success run page queries - 0.034 s — 4/4 441.23 queries/second
success Building static HTML for pages - 0.852 s — 4/4 23.89 pages/second
info Done building in 16.143999152 sec

As you can see, Gatsby does a lot to ship your React components into static files. In short, it takes five steps:

  1. Source the node objects defined by your plugins on gatsby-config.js and the code in gatsby-node.js.
  2. Create a schema from the nodes object.
  3. Create the pages from your /src/pages JavaScript files.
  4. Run the GraphQL queries and inject the data on your pages.
  5. Generate and bundle the static files into the public directory.

And, as you may notice, plugins like gatsby-theme-i18n intervene in step three, specifically when pages are created on createPages:

success createPages - 0.016 s

How exactly does gatsby-theme-i18n access createPages? Well, Gatsby exposes an onCreatePage event handler on the gatsby-node.js to read and modify pages when they are being created.

Learn more about creating and modifying pages and the Gatsby building process over at Gatsby’s official documentation.

Using onCreatePage

The createPages process can be modified in the gatsby-node.js file through the onCreatePage API. In short, onCreatePage is a function that runs each time a page is created by Gatsby. Here’s how it looks:

// ./gatsby-node.js
exports.onCreatePage = ({ page, actions }) => {
  const { createPage, deletePage } = actions;
  // etc.
};

It takes two parameters inside an object:

  • page holds the information of the page that’s going to be created, including its context, path, and the React component associated with it.
  • actions holds several methods for editing the site’s state. In the Gatsby docs, you can see all available methods. For this example we’re making, we will be using two methods: createPage and deletePage, both of which take a page object as the only parameter and, as you might have deduced, they create or delete the page.

So, if we wanted to add a new context to all pages, it would translate to deleting the pages being created and replacing them with new ones that have the desired context:

exports.onCreatePage = ({ page, actions }) => {
  const { createPage, deletePage } = actions;

  deletePage(page);

  createPage({
    ...page,
    context: {
      ...page.context,
      category: `vegan`,
    },
  });
};

Creating The Pages

Since we need to create English and Spanish versions of each page, it would translate to deleting every page and creating two new ones, one for each locale. And to differentiate them, we will assign them a localized route by adding the locale at the beginning of their path.

Let’s start by creating a new gatsby-node.js file in the project’s root directory and adding the following code:

// ./gatsby-node.js

const locales = ["en", "es"];

exports.onCreatePage = ({page, actions}) => {
  const {createPage, deletePage} = actions;

  deletePage(page);

  locales.forEach((locale) => {
    createPage({
      ...page,
      path: `${locale}${page.path}`,
    });
  });
};

Note: Restarting the development server is required to see the changes.

Now, if we go to http://localhost:8000/en/ or http://localhost:8000/es/, we will see all our content there. However, there is a big caveat. Specifically, if we head back to the non-localized routes — like http://localhost:8000/ or http://localhost:8000/recipes/mac-and-cheese/ — Gatsby will throw a runtime error instead of the usual 404 page provided by Gatsby. This is because we deleted our 404 page in the process of deleting all of the other pages!

Well, the 404 page wasn’t exactly deleted because we can still access it if we go to http://localhost:8000/en/404 or http://localhost:8000/es/404. However, we deleted the original 404 page and created two localized versions. Now Gatsby doesn’t know they are supposed to be 404 pages.

To solve it, we need to do something special to the 404 pages at onCreatePage.

Besides a path, every page object has another property called matchPath that Gatsby uses to match the page on the client side, and it is normally used as a fallback when the user reaches a non-existing page. For example, a page with a matchPath property of /recipes/* (notice the wildcard *) will be displayed for every route under my-site.com/recipes/ that doesn’t have its own page. This is useful for making personalized 404 pages depending on where the user was when they reached a non-existing page. For instance, a social media site could display its usual 404 page on my-media.com/non-existing but display an empty profile page on my-media.com/user/non-existing. In this case, we want to display a localized 404 page depending on whether the user was on my-site.com/en/not-found or my-site.com/es/not-found.

The good news is that we can modify the matchPath property on the 404 pages:

// gatsby-node.js

const locales = [ "en", "es" ];

exports.onCreatePage = ({ page, actions }) => {
  const { createPage, deletePage } = actions;
  deletePage(page);
  locales.forEach((locale) => {
    const matchPath = page.path.match(/^\/404\/$/) ? (locale === "en" ? `/*` : `/${locale}/*`) : page.matchPath;
    createPage({
      ...page,
      path: `${locale}${page.path}`,
      matchPath,
    });
  });
};

This solves the problem, but what exactly did we do in matchPath? The value we are assigning to matchPath is asking:

  • Is the page path /404/?
    • No: Leave it as-is.
    • Yes:
      • Is the locale in English?
        • Yes: Set it to match any route.
        • No: Set it to only match routes on that locale.

This results in the English 404 page having a matchPath of /*, which will be our default 404 page; meanwhile, the Spanish version will have matchPath equal /es/* and will only be rendered if the user is on a route that begins with /es/, e.g., my-site.com/es/not-found. Now, if we restart the server and head to a non-existing page, we will be greeted with our usual 404 page.

Besides fixing the runtime error, doing this leaves us with the possibility of localizing the 404 page, which we didn’t achieve in Part 1 with the gatsby-theme-i18n plugin. That’s already a nice improvement we get by not using a plugin!

Querying Localized Content

Now that we have localized routes, you may notice that both http://localhost:8000/en/ and http://localhost:8000/es/ are querying English and Spanish blog posts. This is because we aren’t filtering our Markdown content on the page’s locale. We solved this in Part 1, thanks to gatsby-theme-i18n injecting the page’s locale on the context of each page, making it available to use as a query variable on the GraphQL query.

In this case, we can also add the locale into the page’s context in the createPage method:

// gatsby-node.js

const locales = [ "en", "es" ];

exports.onCreatePage = ({page, actions}) => {
  const { createPage, deletePage } = actions;
  deletePage(page);
  locales.forEach((locale) => {
    const matchPath = page.path.match(/^\/404\/$/) ? (locale === "en" ? `/*` : `/${locale}/*`) : page.matchPath;
    createPage({
      ...page,
      path: `${locale}${page.path}`,
      context: {
        ...page.context,
        locale,
      },
      matchPath,
    });
  });
};

Note: Restarting the development server is required to see the changes.

From here, we can filter the content on both the homepage and blog posts, which we explained thoroughly in Part 1. This is the index page query:

query IndexQuery($locale: String) {
  allMarkdownRemark(filter: {frontmatter: {locale: {eq: $locale}}}) {
    nodes {
      frontmatter {
        slug
        title
        date
        cover_image {
          image {
            childImageSharp {
              gatsbyImageData
            }
          }
          alt
        }
      }
    }
  }
}

And this is the {markdownRemark.frontmatter__slug}.js page query:

query RecipeQuery($frontmatter__slug: String, $locale: String) {
  markdownRemark(frontmatter: {slug: {eq: $frontmatter__slug}, locale: {eq: $locale}}) {
    frontmatter {
      slug
      title
      date
      cover_image {
        image {
          childImageSharp {
            gatsbyImageData
          }
        }
        alt
      }
    }
    html
  }
}

Now, if we head to http://localhost:8000/en/ or http://localhost:8000/es/, we will only see our English or Spanish posts, depending on which locale we are on.

Creating Localized Links

However, if we try to click on any recipe, it will take us to a 404 page since the links are still pointing to the non-localized recipes. In Part 1, gatsby-theme-i18n gave us a LocalizedLink component that worked exactly like Gatsby’s Link but pointed to the current locale, so we will have to create a LocalizedLink component from scratch. Luckily, it’s pretty easy, but we will have to do some preparation first.

Setting Up A Locale Context

For the LocalizedLink to work, we will need to know the page’s locale at all times, so we will create a new context that holds the current locale, then pass it down to each component. We can implement it on wrapPageElement in the gatsby-browser.js and gatsby-ssr.js Gatsby files. The wrapPageElement is the component that wraps our entire page element. However, remember that Gatsby recommends setting context providers inside wrapRootElement, but in this case, only wrapPageElement can access the page’s context where the current locale can be found.

Let’s create a new directory at ./src/context/ and add a LocaleContext.js file in it with the following code:

// ./src/context/LocaleContext.js

import * as React from "react";
import { createContext } from "react";

export const LocaleContext = createContext();
export const LocaleProvider = ({ locale, children }) => {
  return <LocaleContext.Provider value={locale}>{children}</LocaleContext.Provider>;
};

Next, we will set the page’s context at gatsby-browser.js and gatsby-ssr.js and pass it down to each component:

// ./gatsby-browser.js & ./gatsby-ssr.js

import * as React from "react";
import { LocaleProvider } from "./src/context/LocaleContext";

export const wrapPageElement = ({ element }) => {
  const {locale} = element.props.pageContext;
  return <LocaleProvider locale={locale}>{element}</LocaleProvider>;
};

Note: Restart the development server to load the new files.

Creating LocalizedLink

Now let’s make sure that the locale is available in the LocalizedLink component, which we will create in the ./src/components/LocalizedLink.js file:

// ./src/components/LocalizedLink.js

import * as React from "react";
import { useContext } from "react";
import { Link } from "gatsby";
import { LocaleContext } from "../context/LocaleContext";

export const LocalizedLink = ({ to, children }) => {
  const locale = useContext(LocaleContext);
  return <Link to={`/${locale}${to}`}>{children}</Link>;
};

We can use our LocalizedLink at RecipePreview.js and 404.js just by changing the imports:

// ./src/components/RecipePreview.js

import * as React from "react";
import { LocalizedLink as Link } from "./LocalizedLink";
import { GatsbyImage, getImage } from "gatsby-plugin-image";

export const RecipePreview = ({ data }) => {
  const { cover_image, title, slug } = data;
  const cover_image_data = getImage(cover_image.image.childImageSharp.gatsbyImageData);

  return (
    <Link to={`/recipes/${slug}`}>
      <h1>{title}</h1>
      <GatsbyImage image={cover_image_data} alt={cover_image.alt} />
    </Link>
  );
};
// ./src/pages/404.js

import * as React from "react";
import { LocalizedLink as Link } from "../components/LocalizedLink";

const NotFoundPage = () => {
  return (
    <main>
      <h1>Page not found</h1>
      <p>
        Sorry 😔 We were unable to find what you were looking for.
        <br />
        <Link to="/">Go Home</Link>.
      </p>
    </main>
  );
};

export default NotFoundPage;
export const Head = () => <title>Not Found</title>;
Redirecting Users

As you may have noticed, we deleted the non-localized pages and replaced them with localized ones, but by doing so, we left the non-localized routes pointing to an empty 404 page. As we did in Part 1, we can solve this by setting up redirects in gatsby-node.js to take users to the localized version. However, this time we will create a redirect for each page instead of creating a single redirect that covers all pages.

These are the redirects from Part 1:

// ./gatsby-node.js

exports.createPages = async ({ actions }) => {
  const { createRedirect } = actions;

  createRedirect({
    fromPath: `/*`,
    toPath: `/en/*`,
    isPermanent: true,
  });

  createRedirect({
    fromPath: `/*`,
    toPath: `/es/*`,
    isPermanent: true,
    conditions: {
      language: [`es`],
    },
  });
};

// etc.

These are the new localized redirects:

// ./gatsby-node.js

exports.onCreatePage = ({ page, actions }) => {
  // Create localize version of pages...
  const { createRedirect } = actions;

  createRedirect({
    fromPath: page.path,
    toPath: `/en${page.path}`,
    isPermanent: true,
  });

  createRedirect({
    fromPath: page.path,
    toPath: `/es${page.path}`,
    isPermanent: true,
    conditions: {
      language: [`es`],
    },
  });
};

// etc.

We won’t see the difference right away since redirects don’t work in development, but if we don’t create a redirect for each page, the localized 404 pages won’t work in production. We didn’t have to do this same thing in Part 1 since gatsby-theme-i18n didn’t localize the 404 page the way we did.

Changing Locales

Another vital feature to add is a language selector component to toggle between the two locales. However, making a language selector isn’t completely straightforward because:

  1. We need to know the current page’s path, like /en/recipes/pizza,
  2. Then extract the recipes/pizza part, and
  3. Add the desired locale, getting /es/recipes/pizza.

Similar to Part 1, we will have to access the page’s location information (URL, HREF, path, and so on) in all of our components, so it will be necessary to set up another context provider at the wrapPageElement function to pass down the location object through context on each page. A deeper explanation can be found in Part 1.

Setting Up A Location Context

First, we will create the location context at ./src/context/LocationContext.js:

// ./src/context/LocationContext.js

import * as React from "react";
import { createContext } from "react";

export const LocationContext = createContext();
export const LocationProvider = ({ location, children }) => {
  return <LocationContext.Provider value={location}>{children}</LocationContext.Provider>;
};

Next, let’s pass the page’s location object to the provider’s location attribute on each Gatsby file:

// ./gatsby-ssr.js & ./gatsby-browser.js

import * as React from "react";
import { LocaleProvider } from "./src/context/LocaleContext";
import { LocationProvider } from "./src/context/LocationContext";

export const wrapPageElement = ({ element, props }) => {
  const { location } = props;
  const { locale } = element.props.pageContext;

  return (
    <LocaleProvider locale={locale}>
      <LocationProvider location={location}>{element}</LocationProvider>
    </LocaleProvider>
  );
};

Creating An i18n Config

For the next step, it will come in handy to have a file with all of our i18n details, such as each locale’s code and local name. We can create it as a new config.js file inside a new i18n/ directory in the root directory of the project.

// ./i18n/config.js

export const config = [
  {
    code: "en",
    hrefLang: "en-US",
    name: "English",
    localName: "English",
  },

  {
    code: "es",
    hrefLang: "es-ES",
    name: "Spanish",
    localName: "Español",
  },
];

The LanguageSelector Component

The last bit of preparation is to remove the locale (i.e., es or en) from the path (e.g., /es/recipes/pizza or /en/recipes/pizza). Using the following simple but ugly regex, we can remove the /en/ and /es/ prefixes at the beginning of the path:

/(\/e(s|n)|)(\/*|)/

It’s important to note that the regex pattern only works for the en and es combination of locales.
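
If you plan to support more locales later, one option (a sketch of an alternative, not something the original setup requires) is to build the pattern from the locale codes already defined in ./i18n/config.js so nothing is hardcoded:

// Sketch: derive the locale-stripping regex from the i18n config
import { config } from "../../i18n/config";

// Produces a pattern like /^\/(en|es)(\/|$)/ from whatever codes the config holds
const removeLocalePath = new RegExp(`^/(${config.map(({ code }) => code).join("|")})(/|$)`);

console.log("/es/recipes/pizza".replace(removeLocalePath, "")); // "recipes/pizza"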

Now we can create our LanguageSelector component at ./src/components/LanguageSelector.js:

// ./src/components/LanguageSelector.js

import * as React from "react";
import { useContext } from "react";
// 1
import { config } from "../../i18n/config";
import { Link } from "gatsby";
import { LocationContext } from "../context/LocationContext";
import { LocaleContext } from "../context/LocaleContext";

export const LanguageSelector = () => {
// 2
  const locale = useContext(LocaleContext);
// 3
  const { pathname } = useContext(LocationContext);
// 4
  const removeLocalePath = /(\/e(s|n)|)(\/*|)/;
  const pathnameWithoutLocale = pathname.replace(removeLocalePath, "");
// 5
  return (
    <div>
      { config.map(({code, localName}) => {
        return (
          code !== locale && (
            <Link key={code} to={`/${code}/${pathnameWithoutLocale}`}>
              {localName}
            </Link>
          )
        );
      }) }
    </div>
);
};

Let’s break down what is happening in that code:

  1. We get our i18n configurations from the ./i18n/config.js file instead of the useLocalization hook that was provided by the gatsby-theme-i18n plugin in Part 1.
  2. We get the current locale through context.
  3. We find the page’s current pathname through context, which is the part that comes after the domain (e.g., /en/recipes/pizza).
  4. We remove the locale part of the pathname using the regex pattern (leaving just recipes/pizza).
  5. We render a link for each available locale except the current one, checking that the locale differs from the page’s current locale before rendering a regular Gatsby Link to it.

Now, inside our gatsby-ssr.js and gatsby-browser.js files, we can add our LanguageSelector, so it is available globally on the site at the top of all pages:

// ./gatsby-ssr.js & ./gatsby-browser.js

import * as React from "react";
import { LocationProvider } from "./src/context/LocationContext";
import { LocaleProvider } from "./src/context/LocaleContext";
import { LanguageSelector } from "./src/components/LanguageSelector";

export const wrapPageElement = ({ element, props }) => {
  const { location } = props;
  const { locale } = element.props.pageContext;

  return (
    <LocaleProvider locale={locale}>
      <LocationProvider location={location}>
        <LanguageSelector />
        {element}
      </LocationProvider>
    </LocaleProvider>
  );
};
Localizing Static Content

The last thing to do would be to localize the static content on our site, like the page titles and headers. To do this, we will need to save our translations in a file and find a way to display the correct one depending on the page’s locale.

Page Body Translations

In Part 1, we used the react-intl package for adding our translations, but we can do the same thing from scratch. First, we will need to create a new translations.js file in the /i18n folder that holds all of our translations.

We will create and export a translations object with two properties: en and es, which will hold the translations as strings under the same property name.

// ./i18n/translations.js

export const translations = {
  en: {
    index_page_title: "Welcome to my English cooking blog!",
    index_page_subtitle: "Written by Juan Diego Rodríguez",
    not_found_page_title: "Page not found",
    not_found_page_body: "😔 Sorry, we were unable to find what you were looking for.",
    not_found_page_back_link: "Go Home",
  },
  es: {
    index_page_title: "¡Bienvenidos a mi blog de cocina en español!",
    index_page_subtitle: "Escrito por Juan Diego Rodríguez",
    not_found_page_title: "Página no encontrada",
    not_found_page_body: "😔 Lo siento, no pudimos encontrar lo que buscabas",
    not_found_page_back_link: "Ir al Inicio",
  },
};

We know the page’s locale from the LocaleContext we set up earlier, so we can load the correct translation using the desired property name.

The cool thing is that no matter how many translations we add, we won’t bloat our site’s bundle size since Gatsby builds the entire app into a static site.

// ./src/pages/index.js

// etc.

import { LocaleContext } from "../context/LocaleContext";
import { useContext } from "react";
import { translations } from "../../i18n/translations";

const IndexPage = ({ data }) => {
  const recipes = data.allMarkdownRemark.nodes;
  const locale = useContext(LocaleContext);

  return (
    <main>
      <h1>{translations[locale].index_page_title}</h1>
      <h2>{translations[locale].index_page_subtitle}</h2>
      {recipes.map(({frontmatter}) => {
        return <RecipePreview key={frontmatter.slug} data={frontmatter} />;
      })}
    </main>
  );
};

// etc.
// ./src/pages/404.js

// etc.

import { LocaleContext } from "../context/LocaleContext";
import { useContext } from "react";
import { translations } from "../../i18n/translations";

const NotFoundPage = () => {
  const locale = useContext(LocaleContext);

  return (
    <main>
      <h1>{translations[locale].not_found_page_title}</h1>
      <p>
        {translations[locale].not_found_page_body} <br />
        <Link to="/">{translations[locale].not_found_page_back_link}</Link>.
      </p>
    </main>
  );
};

// etc.

Note: Another way we can access the locale property is by using pageContext in the page props.
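
For instance, a minimal sketch of that alternative for the index page could look like the following (only the relevant parts are shown, and it assumes the locale was added to the page context when the localized pages were created):

// ./src/pages/index.js (sketch of the pageContext alternative)

// etc.

const IndexPage = ({ data, pageContext }) => {
  // The locale comes from the page context instead of LocaleContext
  const { locale } = pageContext;
  const recipes = data.allMarkdownRemark.nodes;

  return (
    <main>
      <h1>{translations[locale].index_page_title}</h1>
      <h2>{translations[locale].index_page_subtitle}</h2>
      {recipes.map(({ frontmatter }) => {
        return <RecipePreview key={frontmatter.slug} data={frontmatter} />;
      })}
    </main>
  );
};

// etc.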

Page Title Translations

We ought to localize the site’s page titles the same way we localized our page content. However, in Part 1, we used react-helmet for the task since the LocaleContext isn’t available in the Gatsby Head API. So, to complete this task without resorting to a third-party plugin, we will take a different path. We’re unable to access the locale through the LocaleContext, but as I noted above, we can still get it from the pageContext property in the page props.

// ./src/page/index.js

// etc.

export const Head = ({pageContext}) => {
  const {locale} = pageContext;
  return <title>{translations[locale].index_page_title}</title>;
};

// etc.
// ./src/page/404.js

// etc.

export const Head = ({pageContext}) => {
  const {locale} = pageContext;
  return <title>{translations[locale].not_found_page_title}</title>;
};

// etc.
Formatting

Remember that i18n also covers formatting numbers and dates depending on the current locale. We can use the Intl object from the JavaScript Internationalization API. The Intl object holds several constructors for formatting numbers, dates, times, plurals, and so on, and it’s globally available in JavaScript.
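
The rest of this section focuses on dates, but as a quick aside, numbers work the same way through the Intl.NumberFormat constructor (the values below are only for illustration):

const price = 12345.67;

console.log(new Intl.NumberFormat("en").format(price)); // 12,345.67
console.log(new Intl.NumberFormat("es").format(price)); // 12.345,67

// Currency formatting uses the same options-object pattern
console.log(new Intl.NumberFormat("en", { style: "currency", currency: "USD" }).format(price)); // $12,345.67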

In this case, we will use the Intl.DateTimeFormat constructor to localize dates in blog posts. It works by creating a new Intl.DateTimeFormat object with the locale as its parameter:

const DateTimeFormat = new Intl.DateTimeFormat("en");

The new Intl.DateTimeFormat and other Intl instances have several methods, but the main one is the format method, which takes a Date object as a parameter.

const date = new Date();
console.log(new Intl.DateTimeFormat("en").format(date)); // 4/20/2023
console.log(new Intl.DateTimeFormat("es").format(date)); // 20/4/2023

The Intl.DateTimeFormat constructor takes an options object as its second parameter, which is used to customize how the date is displayed. In this case, the options object has a dateStyle property to which we can assign "full", "long", "medium", or "short" values depending on our needs:

const date = new Date();

console.log(new Intl.DateTimeFormat("en", {dateStyle: "short"}).format(date)); // 4/20/23
console.log(new Intl.DateTimeFormat("en", {dateStyle: "medium"}).format(date)); // Apr 20, 2023
console.log(new Intl.DateTimeFormat("en", {dateStyle: "long"}).format(date)); // April 20, 2023
console.log(new Intl.DateTimeFormat("en", {dateStyle: "full"}).format(date)); // Thursday, April 20, 2023

In the case of our blog posts publishing date, we will set the dateStyle to "long".

// ./src/pages/recipes/{markdownRemark.frontmatter__slug}.js

// etc.

const RecipePage = ({ data, pageContext }) => {
  const { html, frontmatter } = data.markdownRemark;
  const { title, cover_image, date } = frontmatter;
  const { locale } = pageContext;
  const cover_image_data = getImage(cover_image.image.childImageSharp.gatsbyImageData);

  return (
    <main>
      <h1>{title}</h1>
      <p>{new Intl.DateTimeFormat(locale, { dateStyle: "long" }).format(new Date(date))}</p>
      <GatsbyImage image={cover_image_data} alt={cover_image.alt} />
      <p dangerouslySetInnerHTML={{__html: html}}></p>
    </main>
  );
};

// etc.
Conclusion

And just like that, we reduced the need for several i18n plugins to a grand total of zero. And we didn’t even lose any functionality in the process! If anything, our hand-rolled solution is actually more robust than the system of plugins we cobbled together in Part 1 because we now have localized 404 pages.

That said, both approaches are equally valid, but in times when Gatsby plugins are unsupported in some way or conflict with other plugins, it is sometimes better to create your own i18n solution. That way, you don’t have to worry about plugins that are outdated or left unmaintained. And if there is a conflict with another plugin, you control the code and can fix it. I’d say these sorts of benefits greatly outweigh the obvious convenience of installing a ready-made, third-party solution.

]]>
hello@smashingmagazine.com (Juan Diego Rodríguez)
<![CDATA[What Was SmashingConf In San Francisco Like?]]> https://smashingmagazine.com/2023/06/smashingconf-sanfrancisco2023-recap/ https://smashingmagazine.com/2023/06/smashingconf-sanfrancisco2023-recap/ Fri, 16 Jun 2023 08:00:00 GMT “Give them sweet memories.”

It was an unexpected suggestion from one of the Smashing event organizers when I asked for guidance on this article. But then, so much of the week had been unexpected. As a baby dev, volunteering at industry events is a no-brainer; I’ve been to nine this year, and this Smashing Conference has definitely been the standout.

There was none of the frenzied desperation that characterizes so many conferences; rather, the atmosphere was relaxed and casual. I talked to anyone who didn’t actively flee my approach, so by the end of the week, I’d spoken with guests, speakers, sponsors, fellow volunteers, catering staff, and the bouncer at the afterparty. Most people described the week as “fun” and “intimate,” not what one usually expects from a tech conference, although returning guests clearly did expect it.

I believe this pleasant expectation, this trust in Smashing to create something good, was the foundation of the get-together-with-friends vibe at the event. First-timers (myself included) were welcomed and soon made a happy part of the community. Many solo attendees ended the week with the intention of returning next year with their entire team in tow.

A significant reason for this welcoming feeling was the schedule. Speakers were arranged in a single track, on a single stage, thus avoiding the dreaded either/or dilemma and relieving guests and speakers alike of the need to rush around in search of their next session. Breaks were long enough to enjoy lunch at a relaxed pace and to socialize — I even spotted a couple of impromptu chess matches in the lobby.

For those who wished to continue learning over sandwiches and orzo, there were optional lunch sessions held in the workshop building. These sessions were well-attended, and it was heartening to see such honest enthusiasm for the subject matter.

The speakers were very accessible — everyone loved how they were happy to meet, not just for fist bumps but for meaningful conversations. I overheard a group squealing like K-Pop fans about the excellent chat they’d had with their favorite speaker.

As a volunteer, it wasn’t always feasible to sit in the theatre and enjoy the talks in person, but it turned out that missing content wasn’t a concern: presentations were streamed live in the lobby, complete with closed captioning.

Presentation topics seemed to have been thoughtfully curated, such that hardly anyone could settle on a single favorite. For the familiar topics, there was professional eagerness. For the unfamiliar ones, there was first polite interest, then appreciation. The crowd always emerged for caffeine and snacks eager to gather and talk about their recent revelations.

I’ve personally heard from several people who are already trying out ideas they haven’t heard of before.

“I didn’t know [frequently-used tech] could do all that!”

As for the hands-on workshops, I actually heard someone describe these deep dive sessions as “magic.” Workshop topics were practical and, one could argue, essential, including accessibility, flexibility, performance, and more. The breakroom chatter sounded like a huge improv troupe riffing on the theme of shameless plugs for workshops.

“I can’t wait to use this at work — this is going to make [task I don’t understand yet] so much faster!”
“I can’t believe how much I’m learning in just a few hours!”

It was amusing, and exciting.

If the speaker presentations, lunch sessions, and full-day workshops weren’t enough for the lifelong learners in attendance, the conference also featured Jam Sessions — an evening of dinner, drinks, and “lightning talks” designed to spark curiosity and interest in fascinating mini-topics. I’m grateful to have been able to present the closing talk on “Developing Emotional Resilience” that night, and if you’re wondering whether you should give a talk of your own next time, the answer is a resounding YES.

Beyond all this quality content, the event organizers had also planned a number of purely fun activities. A Golden Gate 5k kicked off each morning and attracted a dozen of the cheeriest faces I’ve seen on this side of the bridge at any hour. Alcatraz, sailboats, and sea lion pups completed the quintessential San Francisco summer scene (the freezing winds were also quintessential San Francisco summer).

As the only Bay Area native volunteer, I had the honor of leading the photo walk around the picturesque Presidio neighborhood. I’d been expecting a group size comparable to the morning jogs, but over thirty determined photographers showed up for the tour. Together, we visited several popular destinations and braved the famous Lyon Street steps, but the crowd favorite had to be the Yoda fountain at Lucasfilm. Nerds.

After the first conference day, a good crowd met up for the afterparty at Emporium, where drink tickets and game tokens were plentiful. Between pinball, arcade games, and seemingly endless other entertainments, the party was a hit with the night owls.

The Smashing organizers really wanted people to enjoy themselves, and even a bookish misanthrope like me couldn’t help but have a great time. Many of the chattiest people I met that week later confessed, in nearly the exact same words:

“You know, I’m actually an introvert. I usually dread social events — but it feels so comfortable here!”

I had to agree. Thanks to early access to the Smashing Slack channel, we were able to get acquainted in advance and meet in person as not-quite-strangers. More than that, the emphasis on kindness and open-mindedness seemed to attract the loveliest people.

I made more friends in those few days than I had in my whole adult life in the same city. In the week following the conference, I’ve had brunch with an East Coast engineer, lunch and an office tour with a San Francisco team, a laugh-filled hour-long video call with an exec in Uruguay, and I’ve been invited to a group project with an energetic pack of devs dispersed across the country, but connected by our love of coding and cats. I’ve exchanged recipes with a Senior Engineer, book recommendations with an Engineering Manager, and Instagram handles with enough people to start our own mid-sized company. I wonder what kinds of connections others were able to make!

In terms of networking, Smashing was unparalleled, yet it felt like we didn’t “network” at all. We certainly learned a lot, and we have some new LinkedIn connections, but unexpectedly, we made honest-to-goodness friends. As far as I’m concerned, that’s more than a sweet memory. It’s a sweet beginning!

If you’d like to join the SmashingConf team next time, feel free to apply as a volunteer yourself anytime. There are even discounts for students and non-profits available — all you need to do is reach out to the team!

  • SmashingConf Freiburg 🇩🇪 (in-person + online, Sep 4–6) with adventures into design systems, accessibility, CSS, JS and web performance.
  • SmashingConf Antwerp 🇧🇪 (Oct 9–11), on design systems, usability, product design and complex UI challenges.
]]>
hello@smashingmagazine.com (Ren Chen)
<![CDATA[Meet Codux: The React Visual Editor That Improves Developer Experience]]> https://smashingmagazine.com/2023/06/codux-react-visual-editor-improves-developer-experience/ https://smashingmagazine.com/2023/06/codux-react-visual-editor-improves-developer-experience/ Thu, 15 Jun 2023 08:00:00 GMT This article is a sponsored by Wix

Personally, I get tired of the antics at the start of any new project. I’m a contractor, too, so there’s always some new dependency I need to adopt, config files that force me to write the way a certain team likes, and a deployment process I need to plug into. It’s never a fire-up-and-go sort of thing, and it often takes the better part of a working day to get it all right.

There are a lot of moving pieces to a project, right? Everything — from integrating a framework and establishing a component library to collaboration and deployments — is a separate but equally important part of your IDE. If you’re like me, jumping between apps and systems is something you get used to. But honestly, it’s an act of Sisyphus rolling the stone up the mountain each time, only to do it again on the next project.

That’s the setup for what I think is a pretty darn good approach to streamline this convoluted process in a way that supports any common project structure and is capable of enhancing it with visual editing capabilities. It’s called Codux, and if you stick with me for a moment, I think you’ll agree that Codux could be the one-stop shop for everything you need to build production-ready React apps.

Codux is More “Your-Code” Than "Low-Code"

I know, I know. "Yay, another visual editor!" says no one, ever. The planet is already full of those, and they’re really designed to give folks developer superpowers without actually doing any development.

That’s so not the case with Codux. There are indeed a lot of "low-code" affordances that could empower non-developers, but that’s not the headlining feature of Codux or really who or what it caters to. Instead, Codux is a fully-integrated IDE that provides the bones of your project while improving the developer experience instead of abstracting it away.

Do you use CodePen? What makes it so popular (and great to use) is that it "just" works. It combines frameworks, preprocessors, a live rendering environment, and modern build tools into a single interface that does all the work on "Save". But I still get to write code in a single place, the way I like it.

I see Codux a lot like that. But bigger. Not bigger in the sense of more complicated, but bigger in that it is more integrated than frameworks and build tools. It _is_ your framework. It _is_ your component library. It _is_ your build process. And it just so happens to have incredibly powerful visual editing controls that are fully integrated with your code editor.

That’s why it makes more sense to call Codux “your code” instead of the typical low-code or no-code visual editing tools. Those are designed for non-developers. Codux, on the other hand, is made for developers.

In fact, here’s a pretty fun thing to do. Open a component file from your project in VS Code and put the editor window next to the Codux window open to the same component. Make a small CSS change or something and watch both the preview rendering and code update instantly in Codux.

That’s just one of those affordances that really polish up the developer experience. Anyone else might overlook something like this, but as a developer, you know how quickly little time savings like this add up.

Code, Inspect And Debug Together At Last

There are a few other affordances available when selecting an element on the interactive stage on Codux:

  • A style panel for editing CSS and trying different layouts. And, again, changes are made in real-time, both in the rendered preview and in your code, which is visible to you all the time — whether directly in Codux or in your IDE.
  • A property panel that provides easy access to all the selected properties of a component, with visual controllers to modify them (and see the changes reflected directly in the code).
  • An environment panel that provides you with control over the rendering environment of the component, such as the screen or canvas size, as well as the styling for it.

Maybe Give Codux A Spin

It’s pretty rad that I can fire up a single app to access my component library, code, documentation, live previews, DOM inspector, and version control. If you would’ve tried explaining this to me before seeing Codux, I would’ve said that’s too much for one app to handle; it’d be a messy UI that’s more aspiration than it is a liberating center of development productivity.

No lying. That’s exactly what I thought when the Wix team told me about it. I didn’t even think it was a good idea to pack all that in one place.

But they did, and I was dead wrong. Codux is pretty awesome. And apparently, it will be even more awesome because the FAQ talks about a bunch of new features in the works, things like supporting full frameworks. The big one is an online version that will completely remove the need to set up development environments every time someone joins the team, or a stakeholder wants access to a working version of the app. Again, this is all in the works, but it goes to show how Codux is all about improving the developer experience.

And it’s not like you’re building a Wix site with it. Codux is its own thing — something that Wix built to get rid of their own pain points in the development process. It just so happens that their frustrations are the same that many of us in the community share, which makes Codux a legit consideration for any developer or team.

Oh, and it’s free. You can download it right now, and it supports Windows, Mac, and Linux. In other words, you can give it a spin without buying into anything.

]]>
hello@smashingmagazine.com (Geoff Graham)
<![CDATA[How To Build Server-Side Rendered (SSR) Svelte Apps With SvelteKit]]> https://smashingmagazine.com/2023/06/build-server-side-rendered-svelte-apps-sveltekit/ https://smashingmagazine.com/2023/06/build-server-side-rendered-svelte-apps-sveltekit/ Wed, 14 Jun 2023 11:00:00 GMT I’m not interested in starting a turf war between server-side rendering and client-side rendering. The fact is that SvelteKit supports both, which is one of the many perks it offers right out of the box. The server-side rendering paradigm is not a new concept. It means that the client (i.e., the user’s browser) sends a request to the server, and the server responds with the data and markup for that particular page, which is then rendered in the user’s browser.

To build an SSR app using the primary Svelte framework, you would need to maintain two codebases: one with the server running in Node, along with some templating engine like Handlebars or Mustache, and another client-side Svelte app that fetches data from the server.

The approach we’re looking at in the above paragraph isn’t without disadvantages. Two that immediately come to mind that I’m sure you thought of after reading that last paragraph:

  1. The application is more complex because we’re effectively maintaining two systems.
  2. Sharing logic and data between the client and server code is more difficult than fetching data from an API on the client side.
SvelteKit Simplifies The Process

SvelteKit streamlines things by handling the complexity of the server and client on its own, allowing you to focus squarely on developing the app. There’s no need to maintain two applications or do a tightrope walk sharing data between the two.

Here’s how:

  • Each route can have a +page.server.ts file that’s used to run code on the server and return data seamlessly to your client code.
  • If you use TypeScript, SvelteKit auto-generates types that are shared between the client and server.
  • SvelteKit provides an option to select your rendering approach based on the route. You can choose SSR for some routes and CSR for others, like maybe your admin page routes (see the short sketch after this list).
  • SvelteKit also supports routing based on a file system, making it much easier to define new routes than having to hand-roll them yourself.
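
To illustrate that per-route choice, rendering is controlled with page options exported from a route’s +page.ts (or +page.js) file. The admin route below is hypothetical and only sketches the idea:

// ./src/routes/admin/+page.ts (hypothetical route, for illustration only)

// Render this route entirely on the client instead of on the server
export const ssr = false;
export const csr = true;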
SvelteKit In Action: Job Board

I want to show you how streamlined the SvelteKit approach is compared to the traditional dance between the SSR and CSR worlds, and I think there’s no better way to do that than with a real-world example. So, what we’re going to do is build a job board — basically a list of job items — while detailing SvelteKit’s role in the application.

When we’re done, what we’ll have is an app where SvelteKit fetches the data from a JSON file and renders it on the server side. We’ll go step by step.

First, Initialize The SvelteKit Project

The official SvelteKit docs already do a great job of explaining how to set up a new project. But, in general, we start any SvelteKit project in the command line with this command:

npm create svelte@latest job-listing-ssr-sveltekit

This command creates a new project folder called job-listing-ssr-sveltekit on your machine and initializes Svelte and SvelteKit for us to use. But we don’t stop there — we get prompted with a few options to configure the project:

  1. First, we select a SvelteKit template. We are going to stick to using the basic Skeleton Project template.
  2. Next, we can enable type-checking if you’re into that. Type-checking provides assistance when writing code by watching for bugs in the app’s data types. I’m going to use the “TypeScript syntax” option, but you aren’t required to use it and can choose the “None” option instead.

There are additional options from there that are more a matter of personal preference:

If you are familiar with any of these, you can add them to the project. We are going to keep it simple and not select anything from the list since what I really want to show off is the app architecture and how everything works together to get data rendered by the app.

Now that we have the template for our project ready, let’s do the last bit of setup by installing the dependencies for Svelte and SvelteKit to do their thing:

cd job-listing-ssr-sveltekit
npm install

There’s something interesting going on under the hood that I think is worth calling out:

Is SvelteKit A Dependency?

If you are new to Svelte or SvelteKit, you may be pleasantly surprised when you open the project’s package.json file. Notice that SvelteKit is listed in the devDependencies section. The reason for that is Svelte (and, in turn, SvelteKit) acts like a compiler that takes all your .js and .svelte files and converts them into optimized JavaScript code that is rendered in the browser.

This means the Svelte package is actually unnecessary when we deploy it to the server. That’s why it is not listed as a dependency in the package file. The final bundle of our job board app is going to contain just the app’s code, which means the size of the bundle is way smaller and loads faster than the regular Svelte-based architecture.

Look at how tiny and readable the package.json file is!

{
    "name": "job-listing-ssr-sveltekit",
    "version": "0.0.1",
    "private": true,
    "scripts": {
        "dev": "vite dev",
        "build": "vite build",
        "preview": "vite preview",
        "check": "svelte-kit sync && svelte-check --tsconfig ./tsconfig.json",
        "check:watch": "svelte-kit sync && svelte-check --tsconfig ./tsconfig.json --watch"
    },
    "devDependencies": {
        "@sveltejs/adapter-auto": "^2.0.0",
        "@sveltejs/kit": "^1.5.0",
        "svelte": "^3.54.0",
        "svelte-check": "^3.0.1",
        "tslib": "^2.4.1",
        "typescript": "^4.9.3",
        "vite": "^4.0.0"
    },
    "type": "module"
}

I really find this refreshing, and I hope you do, too. Seeing a big list of packages tends to make me nervous because all those moving pieces make the entirety of the app architecture feel brittle and vulnerable. The concise SvelteKit output, by contrast, gives me much more confidence.

Creating The Data

We need data coming from somewhere that can inform the app on what needs to be rendered. I mentioned earlier that we would be placing data in and pulling it from a JSON file. That’s still the plan.

As far as the structured data goes, what we need to define are properties for a job board item. Depending on your exact needs, there could be a lot of fields or just a few. I’m going to proceed with the following:

  • Job title,
  • Job description,
  • Company Name,
  • Compensation.

Here’s how that looks in JSON:

[{
    "job_title": "Job 1",
    "job_description": "Very good job",
    "company_name": "ABC Software Company",
    "compensation_per_year": "$40000 per year"
}, {
    "job_title": "Job 2",
    "job_description": "Better job",
    "company_name": "XYZ Software Company",
    "compensation_per_year": "$60000 per year"
}]

Now that we’ve defined some data let’s open up the main project folder. There’s a sub-directory in there called src. We can open that and create a new folder called data and add the JSON file we just made to it. We will come back to the JSON file when we work on fetching the data for the job board.

Adding TypeScript Model

Again, TypeScript is completely optional. But since it’s so widely used, I figure it’s worth showing how to set it up in a SvelteKit framework.

We start by creating a new models.ts file in the project’s src folder. This is the file where we define all of the data types that can be imported and used by other components and pages, and TypeScript will check them for us.

Here’s the code for the models.ts file:

export type JobsList = JobItem[]

export interface JobItem {
  job_title: string
  job_description: string
  company_name: string
  compensation_per_year: string
}

There are two data types defined in the code:

  1. JobsList contains the array of job items.
  2. JobItem contains the job details (or properties) that we defined earlier.
The Main Job Board Page

We’ll start by developing the code for the main job board page that renders a list of available job items. Open the src/routes/+page.svelte file, which is the main job board. Notice how it exists in the /src/routes folder? That’s the file-based routing system I referred to earlier when talking about the benefits of SvelteKit. The name of the file is automatically generated into a route. That’s a real DX gem, as it saves us time from having to code the routes ourselves and maintaining more code.

While +page.svelte is indeed the main page of the app, it’s also the template for any generic page in the app. But we can create a separation of concerns by adding more structure in the /src/routes directory with more folders and sub-folders that result in different paths. SvelteKit’s docs have all the information you need for routing and routing conventions.
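
As a quick aside, here is a small, hypothetical sketch of how that file-based convention maps folders to URLs: a folder name in square brackets becomes a dynamic parameter that the server load function can read. This route isn’t part of the job board we’re building.

// ./src/routes/jobs/[id]/+page.server.ts (hypothetical route: /jobs/1, /jobs/2, and so on)

export const load = (({ params }) => {
  // params.id holds the [id] segment of the URL
  return {
    job_id: params.id,
  };
});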

This is the markup and styles we’ll use for the main job board:

<div class="home-page">
  <h1>Job Listing Home page</h1>
</div>

<style>
  .home-page {
    padding: 2rem 4rem;
    display: flex;
    align-items: center;
    flex-direction: column;
    justify-content: center;
  }
</style>

Yep, this is super simple. All we’re adding to the page is an <h1> tag for the page title and some light CSS styling to make sure the content is centered and has some nice padding for legibility. I don’t want to muddy the waters of this example with a bunch of opinionated markup and styles that would otherwise be a distraction from the app architecture.

Run The App

We’re at a point now where we can run the app using the following in the command line:

npm run dev -- --open

The -- --open argument automatically opens the job board page in the browser. That’s just a small but nice convenience. You can also navigate to the URL that the command line outputs.

The Job Item Component

OK, so we have a main job board page that will be used to list job items from the data fetched by the app. What we need is a new component specifically for the jobs themselves. Otherwise, all we have is a bunch of data with no instructions for how it is rendered.

Let’s take care of that by opening the src folder in the project and creating a new sub-folder called components. And in that new /src/components folder, let’s add a new Svelte file called JobDisplay.svelte.

We can use this for the component’s markup and styles:

<script lang="ts">
  import type { JobItem } from "../models";
  export let job: JobItem;
</script>

<div class="job-item">
  <p>Job Title: <b>{job.job_title}</b></p>
  <p>Description: <b>{job.job_description}</b></p>
  <div class="job-details">
    <span>Company Name : <b>{job.company_name}</b></span>
    <span>Compensation per year: <b>{job.compensation_per_year}</b></span>
  </div>
</div>

<style>
  .job-item {
    border: 1px solid grey;
    padding: 2rem;
    width: 50%;
    margin: 1rem;
    border-radius: 10px;
  }

  .job-details {
    display: flex;
    justify-content: space-between;
  }
</style>

Let’s break that down so we know what’s happening:

  1. At the top, we import the TypeScript JobItem model.
  2. Then, we define a job prop with a type of JobItem. This prop is responsible for getting the data from its parent component so that we can pass that data to this component for rendering.
  3. Next, the HTML provides this component’s markup.
  4. Last is the CSS for some light styling. Again, I’m keeping this super simple with nothing but a little padding and minor details for structure and legibility. For example, justify-content: space-between adds a little visual separation between job items.

Fetching Job Data

Now that we have the JobDisplay component all done, we’re ready to pass it data to fill in all those fields to be displayed in each JobDisplay rendered on the main job board.

Since this is an SSR application, the data needs to be fetched on the server side. SvelteKit makes this easy by having a separate load function that can be used to fetch data and used as a hook for other actions on the server when the page loads.

To fetch, let’s create yet another new TypeScript file — this time called +page.server.ts — in the project’s routes directory. Like the +page.svelte file, this name also has a special meaning that makes the file run on the server when the route is loaded. Since we want this on the main job board page, we will create this file in the routes directory and include this code in it:

import jobs from '../data/job-listing.json';
import type { JobsList } from '../models';

const job_list: JobsList = jobs;

export const load = (() => {
  return {
    job_list
  };
})

Here’s what we’re doing with this code:

  1. We import data from the JSON file. This is for simplicity purposes. In the real app, you would likely fetch this data from a database by making an API call.
  2. Then, we import the TypeScript model we created for JobsList.
  3. Next, we create a new job_list variable and assign the imported data to it.
  4. Last, we define a load function that will return an object with the assigned data. SvelteKit will automatically call this function when the page is requested. So, the SSR magic happens here as we fetch the data on the server and build the HTML with the data we get back.
Accessing Data From The Job Board

SvelteKit makes accessing data relatively easy by passing data to the main job board page in a way that checks the types for errors in the process. We can import a type called PageServerData in the +page.svelte file. This type is autogenerated and will have the data returned by the +page.server.ts file. This is awesome, as we don’t have to define types again when using the data we receive.

Let’s update the code in the +page.svelte file, like the following:

<script lang="ts">
  import JobDisplay from '../components/JobDisplay.svelte';
  import type { PageServerData } from './$types';

  export let data: PageServerData;
</script>

<div class="home-page">
  <h1>Job Listing Home page</h1>

  {#each data.job_list as job}
    <JobDisplay job={job}/>
  {/each}
</div>

<style>....</style>

This is so cool because:

  1. The #each syntax is a Svelte benefit that can be used to repeat the JobDisplay component for all the jobs for which data exists.
  2. At the top, we are importing both the JobDisplay component and PageServerData type from ./$types, which is autogenerated by SvelteKit.

Deploying The App

We’re ready to compile and bundle this project in preparation for deployment! We get to use the same command in the Terminal as most other frameworks, so it should be pretty familiar:

npm run build

Note: You might get the following warning when running that command: “Could not detect a supported production environment.” We will fix that in just a moment, so stay with me.

From here, we can use the npm run preview command to check the latest built version of the app:

npm run preview

This is a nice way to gain confidence in the build locally before deploying it to a production environment.

The next step is to deploy the app to the server. I’m using Netlify, but that’s purely for example, so feel free to go with another option. SvelteKit offers adapters that will deploy the app to different server environments. You can get the whole list of adapters in the docs, of course.

The real reason I’m using Netlify is that deploying there is super convenient for this tutorial, thanks to the adapter-netlify plugin that can be installed with this command:

npm i -D @sveltejs/adapter-netlify

This does, indeed, introduce a new dependency in the package.json file. I mention that because you know how much I like to keep that list short.

After installation, we can update the svelte.config.js file to consume the adapter:

import adapter from '@sveltejs/adapter-netlify';
import { vitePreprocess } from '@sveltejs/kit/vite';

/** @type {import('@sveltejs/kit').Config} */
const config = {
    preprocess: vitePreprocess(),

    kit: {
        adapter: adapter({
            edge: false, 
            split: false
        })
    }
};

export default config;

Real quick, this is what’s happening:

  1. The adapter is imported from adapter-netlify.
  2. The new adapter is passed to the adapter property inside the kit.
  3. The edge boolean value can be used to configure the deployment to a Netlify edge function.
  4. The split boolean value is used to control whether we want to split each route into separate edge functions.

More Netlify-Specific Configurations

Everything from here on out is specific to Netlify, so I wanted to break it out into its own section to keep things clear.

We can add a new file called netlify.toml at the top level of the project folder and add the following code:

[build]
  command = "npm run build"
  publish = "build"

I bet you know what this is doing, but now we have a new alias for deploying the app to Netlify. It also allows us to control deployment from a Netlify account as well, which might be a benefit to you. To do this, we have to:

  1. Create a new project in Netlify,
  2. Select the “Import an existing project” option, and
  3. Provide permission for Netlify to access the project repository. You get to choose where you want to store your repo, whether it’s GitHub or some other service.

Since we have set up the netlify.toml file, we can leave the default configuration and click the “Deploy” button directly from Netlify.

Once the deployment is completed, you can navigate to the site using the provided URL in Netlify. This should be the final result:

Here’s something fun. Open up DevTools when viewing the app in the browser and notice that the HTML contains the actual data we fetched from the JSON file. This way, we know for sure that the right data is rendered and that everything is working.

Note: The source code of the whole project is available on GitHub. All the steps we covered in this article are divided as separate commits in the main branch for your reference.

Conclusion

In this article, we have learned about the basics of server-side rendered apps and the steps to create and deploy a real-life app using SvelteKit as the framework. Feel free to share your comments and perspective on this topic, especially if you are considering picking SvelteKit for your next project.

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Sriram Thiagarajan)
<![CDATA[Smashing Podcast Episode 62 With Slava Shestopalov: What Is Design Management?]]> https://smashingmagazine.com/2023/06/smashing-podcast-episode-62/ https://smashingmagazine.com/2023/06/smashing-podcast-episode-62/ Tue, 13 Jun 2023 14:00:00 GMT In this episode of The Smashing Podcast, we ask what is a design manager? What does it take and how does it relate to the role of Designer? Vitaly talks to Slava Shestopalov to find out.

Show Notes

Weekly Update

Transcript

Vitaly: He’s a design leader, lecturer and design educator. He has seen it all working as a graphic designer in his early years and then, moving to digital products, UX, accessibility and design management. Most recently, he has worked as a lead designer and design manager in a software development company, Alex, and then, later, Bolt, the all-in-one mobility app. Now, he’s very keen on building bridges between various areas of knowledge rather than specializing in one single thing, and we’ll talk about that as well. He also loves to write, he has a passion for medieval style UX design myths. Who doesn’t? And is passionate about street and architecture photos. Originally from Cherkasy, Ukraine, he now lives in Berlin with his wonderful wife, Aksano. So we know that he’s an experienced designer and design manager, but did you know that he also loves biking, waking up at 5:00 AM to explore cities and can probably talk for hours about every single water tower in your city. My Smashing friends, please welcome Slava Shestopalov. Hello Slava. How are you doing today?

Slava: I am Smashing.

Vitaly: Oh yes, always.

Slava: Or at least I was told to say that.

Vitaly: Okay, so that’s a fair assessment in this case. It’s always a pleasure to meet you and to see you. I know so many things about you. I know that you’re very pragmatic. I know that you always stay true to your words. I know that you care about the quality of your work. But it’s always a pleasure to hear a personal story from somebody who’s kind of explaining where they’re coming from, how they ended up where they are today. So maybe I could ask you first to kind of share your story. How did you arrive kind of where you are today? Where you coming from or where you’re going? That’s very philosophical, but let’s start there.

Slava: That’s quite weird. I mean, my story is quite weird because I’m a journalist by education and I never thought of being a designer at school or the university. During my study years, I dreamt about something else. Maybe I didn’t really have a good idea of my future profession rather about the feeling that it should bring, that it should be something interesting, adventurous, something connected with helping other people. I dreamt about being a historian, geographer, maybe traveling in the pursuit of new adventures or inventions, but ended up being a journalist.

Slava: My parents recommended me choose this path because they thought I was quite talkative person and it would’ve been a great application for such a skill. And since I didn’t have any better ideas, I started studying at the university, studying journalism. And then, on the third year studying, during our practice, and by the way, I met my wife there, under the university, we are together since the first day of studying, we were in the same academic group, not only on the same faculty, and we were passing our journalistic practice at the Press Department of the local section of the Ministry of Emergencies, meaning that we were writing articles about various accidents happening in the Cherkasy region, taking photos of, sometimes, not very funny things. And accidentally, there I tried CorelDRAW, there is the whole generation of designers who don’t even know what those words mean.

Vitaly: Well, you don’t use CorelDRAW anymore, do you?

Slava: Not anymore. I don’t even know whether this software is still available. So I accidentally tried that in our editorial office where, as our practices, was not even real work. And somehow, it was more or less okay. I created the first layout. Of course, now I am scared to look at it. I don’t even have it saved somewhere on my computer. That’s an abomination, not design. But back then, it worked out and I started developing this skill as a secondary skill. I’m a self-taught designer, so never had any systematic way of learning design, rather learning based on my own mistakes, trying something new, producing a lot of work that I’m not proud of.

Vitaly: But also, I’m sure work that you are proud of.

Slava: Yeah. But then, later, I joined first small design studios and I’m forever thankful to my, back then, art director who once came to my desk, looked at the layout on my screen and told me, "Slava, please don’t get offense, but there is a book that you have to read." And he handed me the book Design for Non-Designers. That’s an amazing book, I learned a lot from it, the basics of composition, contrast, alignment, the visual basics. And I started applying it to my work, it got better. Then of course, I read many more books for designers, but also, books on design, on business and management and other topics. And gradually, by participating in more and more complex projects, I got to the position where I am right now.

Vitaly: So it’s interesting for me because actually I remember my days coming also without any formal education as a designer, I actually ended up just playing with boxes on a page. And I actually came to design through the lens of HTML, CSS back in the day, really, through frontend development. And then, this is why inclusive design and accessibility lie, in a way, close to my heart. And it’s the thing that many people actually really like, that kind of moving into design and then starting just getting better at design.

Vitaly: But you decided to go even further than that. I think in 2019, you transitioned from the role of a lead designer, if I’m not mistaken, to design manager. Was it something that you envisioned, that you just felt like this is a time to do that? Because again, there are two kinds of people that I encounter. Some people really go into management thinking that this is just a natural progression of their career, you cannot be just a designer, and this is in quotation marks, "forever," so you’re going to go into the managerial role. And some people feel like, let me try that and see if it’s for me and if not, I can always go back to design or maybe to another company product team and whatnot. What was it like for you? Why did you decide to take this route?

Slava: The reason was curiosity. I wouldn’t say that I was the real manager because design management is slightly different, probably even other types of management like product management, engineering management; it’s not completely management because what is required there, if you look at the reconsis, you will notice that the domain knowledge, the hard skills are essential and you’ll be checked whether you have those skills as well apart from the managerial competence. So I wouldn’t say that this kind of management is 100% true, complete management as we can imagine it in the classical meaning, it’s the combination of what you’ve been doing before with management and the higher the percentage of management is, the higher in the hierarchy you go.

Slava: In my situation, switching from the lead designer to design manager was not that crucial. I would say more critical thing that I experienced was switching from a senior designer to lead designer because this is the point where I got my first team whom I had to lead. And that was the turning point when you realize that the area of your responsibility is not only yourself and your project, but also someone else. And in modern world, we don’t have feudalism and we cannot directly tell people what to do, we are not influencing their choices directly. That’s why it’s getting harder to manage without having the real power. And we are in the civilized world, authoritarian style is not working anymore, and that’s great, but we should get inventive to work with people using gentle, mild methods, taking into account what they want as personalities, but at the same time reaching the business goals of the company and KPIs of the team.

Vitaly: Right. But then also, speaking about the gentle way of managing, I remember the talk that you have given about the thing that you have learned and some of the important things that you consider to be important in a design manager position. So I’m curious if you could share some bits of knowledge of things that you discovered maybe the hard way, which were a little bit surprising to you as you were in that role, for example, also in Bolt. What were some things that you feel many designers maybe who might be listening at this point and thinking, "Oh, actually, I was always thinking about design manager, maybe I should go there," what was some things that were surprising to you and something that were really difficult?

Slava: Something that was surprising both for me and for other people with whom I talk about design management is that we perceive management in the wrong way. We have expectations pretty far from reality. There are some managerial activities that are quite typical for designers, for the design community in general, something that we encounter so often that we tend to think that this is actually management. Maybe there is something else but not much else that we don’t see at the moment, not much is hidden of that management. And that’s why when we jump into management, we discover a lot of unknown things that this type of work includes.

Slava: For example, as a Ukrainian, I know that, in our country, many designers are self-taught designers because the profession develops much faster than the higher education. And that’s why people organize themselves into communities and pass knowledge to each other much faster and easier. And there are so many private schools and private initiatives that spread the knowledge and do that more efficiently so that after couple of months of studying, you get something. Of course, there might be many complaints about the quality of that education, but the sooner you get to the first project, the sooner you make your first mistakes, the better you learn the profession and then, you won’t repeat them again. That’s why I know the power of this community. And mentorship, knowledge-sharing is something extremely familiar to Ukrainian designers.

Slava: And then, generally, I observe the same tendency in the Western Europe that knowledge-sharing, mentorship is the usual thing that many designers do, that many designers practice. And we think that when we switch to management, we will simply scale this kind of activity. In reality, it’s just not even the largest part of management. And when people are officially promoted to managers, to leaders, they discover a lot of other areas like hiring people then being responsible for the hires because it’s not enough just to participate in a technical interview and check the hard skills of a candidate, but also then live with this decision because you cannot easily fire a person, and sometimes, it’s even wrong because as a manager you are supposed to work with this person and develop them and help them grow or help them onboard better and pass this period of adaptation. By the way, adaptation and onboarding, another thing than retention cases, resolving problems when your employees are not satisfied with what they have right now, including you as a manager and many other things like salary, compensation, bonuses, team building trust and relationship in the team, performance management, knowledge assessments.

Vitaly: Right. But then, is there even at all any time then to be designing as you’re a design manager? I know that in some teams, in some companies you have this kind of roles where, well, you’re a design manager, sometimes it would be called just... Yeah, well, hmm — sometimes design leads are actually also managers, depending if it’s like a small company or a larger company. And then, would you say that given the scope that is really changing when you’re kind of moving to management, should you have hopes that you will still have time to play with designs in Figma?

Slava: It depends on how far you go and on the org structure of the particular company. In some cases, you still have plenty of time to design because management doesn’t occupy that much time, you don’t have many subordinates, or the company is so small that the processes are not very formalized. In that case, yep, you can still design maybe 50% of your time, maybe even 70% of your time and manage during the rest of the time. But there are large companies where management occupies more and more time and then, yeah, probably you won’t be designing, or at least not designing the same way as you used to before.

Slava: There are multiple levels of design, multiple levels of abstraction. For example, when you’re moving pixels in Figma in order to create a well-balanced button, that’s design. But when you’re creating a customer journey map or mapping a service blueprint together with stakeholders from other departments of your company, that’s design as well, but on a higher level of abstraction. You are building a bit larger picture of the product or service, or the whole experience throughout products and multiple services of the company. So I would say that there is always space for design, but this design might get less digital and more connected with organizational design, interaction between different departments and other stuff like that.

Vitaly: Right. So maybe if we go back a little bit into team building or specifically the culture and the way teams are built, obviously, we kind of moved, I don’t know when it was, but we kind of moved to this idea that T-shaped employees is a good thing. So you basically specialize in one thing and then, you have a pretty general understanding about what’s going on in the rest of the organization, the rest of the product and so on. It’s quite shallow, but then, in one thing, you specialize. At the same time, you see a lot of people who call themselves generalists, they kind of know a lot about different things but never really specialized deeply into one thing. And so, you also have this, this is probably considered to be not necessarily just the I shape, where you kind of get very deep in one thing, but really, this is it, you just specialized so deep that you have pretty much no solid understanding about what’s happening around.

Vitaly: And then, one thing that has been kind of discussed recently, I’ve seen at least a few articles about that is a V-shape, where you kind of have a lot of depth in one thing. You also have a pretty okay, solid, general understanding about what’s going on. But then, you also have enough skills or enough information about the adjacent knowledge within the product that you’re working on. So I’m wondering at this point, let’s say if you build a team of designers, what kind of skills or what kind of shape if you like, do we need to still remain quite, I would say, interesting to companies small and large? What kind of shape would that be? If that makes sense.

Slava: Yeah, so you want me to give you a silver bullet, right, for-

Vitaly: Yes.

Slava: ... a company?

Vitaly: Ideally, yes.

Slava: Doesn’t exist. It doesn’t exist. On the one hand, I think that’s a good discussion, discussions about the skill sets of designers, but on the other hand, we are talking a lot about ourselves, maybe, more than representatives of all the other professions about what we should call our profession, what shapes, skillset should we have, what frameworks and tools should we use? It’s extremely designer-centered. And here, of course, I can talk for hours and participate in holy wars about what’s the best name for this, all that, but essentially, at the end of the day, I realize that it doesn’t matter, it doesn’t make sense at all. Okay, whatever we decide, if you are whatever shape designer, but you are not useful in this world, you cannot reach the goal and you cannot find your niche and make users happy and business happy, then it doesn’t matter what’s written on your resume.

Vitaly: Right. So-

Slava: But then, on the one hand, yeah, of course, logically, when I think about it, I do support the T-shaped concept. But again, it depends on how you understand it, whether the horizontal bar of the T is about shallow knowledge or good enough knowledge or decent knowledge. You see how thick it is? And that’s why we have another concept, this V-shaped designer, which is essentially another representation of the T-shaped format. The idea is the same: as a human being, of course, you want to specialize in something that’s your passion, that you maybe love design for, and maybe that’s why you came into the profession. But at the same time, you are obliged to know, to a certain minimally required extent, the whole entirety of your profession.

Slava: Ask any other professional, a surgeon, police person, whoever, financial expert, of course, they have their favorite topics, but at the same time, there is a certain requirement to you as a specialist to obtain certain amount of knowledge and skills.

Slava: The same about designers, I don’t see how we are different from other professions. It’s why it’s quite fair to have this expectation that the person would know something about UX research. They are not obliged to be as professional and advanced as specialized UX researchers, but that’s fine for a designer to know about UX research, to do some UX research. The same about UX researchers, it never hurts to know the basics of design in order to understand what your colleagues are doing and then, you collaborate better together.

Vitaly: Which brings me, of course, to the question that I think you brought up in an article, I think maybe five or six years ago. You had a lot of comments on that article. I remember that article very vividly because you argued about all the different ways of how we define design, UX, CX and all the different wordings and abbreviations, service designer, CX designer, UX designer, and so many other things.

Vitaly: I mean, it’s really interesting to me because when I look back, I realize now that we’ve been working very professionally in this industry, in whatever you want to call design industry, UX industry, digital design industry for like... What? ... three decades now, maybe even more than that, really trying to be very professional. But when we look around, actually, and this is just a funny story because just as we started trying to record this session, we spent 14 minutes trying to figure out how to do that in the application here. So what went wrong, Slava? I mean, 30 years is a long time to get some things right and I think that we have done a lot of things. But frankly, too often, when you think about general experience that people would get, be it working with public services, working with insurance companies, working with something that’s maybe less exciting than the landing page or a fancy product or SaaS, very often it’s just not good. What went wrong, Slava? Tell us.

Slava: Nothing went wrong. Everything is fine. The world is getting more and more complex over time, but something never changed, and it’s people; we didn’t change. Our brain is more or less the same as it was a thousand years ago, maybe a couple of thousand years ago, and that’s the reason. We are people, we are not perfect. Technology might be amazing, it even feels magical, but we are the same. We are not perfect. We’re not always driven by a rational intention to do something well. There are many people who are not very excited about their jobs, and that’s why they provide not so good service. There are periods when a good person does a bad job and they will improve later, but the task that they deliver today, because of many reasons, will be of lower quality.

Slava: Then decision making, we are emotional beings, and even if you use a hundred frameworks about decision making and prioritizing, it doesn’t deny our nature. There are even people who learned to manipulate all the modern techniques, who learned about design thinking and workshops and try to use them to their own advantage. Like, "Oh, okay, I cannot persuade my team, so let’s do this fancy exercise with colored sticky notes and try to-

Vitaly: Well, who doesn’t like colored sticky notes, Slava, come on.

Slava: Digital colored sticky note, they’re still colored and look like sticky notes, right? And those people just want to push their own ideas through workshops. But workshops were designed for something else. The same with business, there are unethical business models still flourishing, there are dark patterns just because some people don’t care. So the reason is that we are the same, we are not perfect.

Vitaly: Right. Well-

Slava: We create design for humans, but we are humans as well.

Vitaly: But sometimes I feel like we are designing for humans, but then, at the same time, I feel that we are spending more and more time designing with AI sometimes for AI, this is how it feels to me. I don’t know about you, every now and again I still get a feeling that, okay, this message that was written by somebody and sent to me, it has a little bit of sense or feel or I don’t know, taste of ChatGPT on it. Just I can tell sometimes that this is kind of for humans, but it’s in a way appears to me as if it was written for AI. So do you have this feeling sometimes that you get that email or you get that message, it’s a little bit too AI-ish? Do you have this experience?

Slava: Sometimes I have this experience, but the reason is that it’s a hot topic right now. You may have already forgotten about another trendy topic, NFT, blockchain, everything was in blockchain, everything was NFT. But over time, people realize where the use cases are really strong and deserve our efforts and where it just doesn’t fit. It’s like with every new technology, it passes the same stages. There is even a nice diagram, the cycle of adoption of any new technology when there is a peak of excitement first when we are trying to apply it everywhere. But then, there is this drop in excitement and disillusionment after which we finally get onto the plateau of enlightenment, finding the best application for this technology.

Slava: I remember the same in the area of design methodology when design sprint just appeared, people tried applying it everywhere, even in many places where it just didn’t fit or the problem was too large or the team culture wasn’t consistent with the trust and openness implied by such a methodology as a design sprint. But over time, it found its application and now, used not that often, but only by those people who need it.

Vitaly: Right. Talking actually about team culture, maybe just to switch the topic a little bit, maybe you could bring a few red flags that you always try to watch out for. Because of course, when you are working with a diverse team and you have people who have very different backgrounds and also have very different expectations and very different skill sets, inevitably, you will face situations where team culture clashes. So I’m wondering, what do you think would be the early warning signs that the manager needs to watch out for to prevent things from exploding down the line?

Slava: That’s a good question. I would turn it in a slightly different direction because I think of it in a different kind of paradigm. I would try to prevent this from happening. The best way to deal with it is not to deal with it, to avoid dealing with it. So embracing the culture, understanding it and building it is important because then you won’t need to face the consequences. I wouldn’t say that there are real red flags because culture is like user experience, it’s like gravity, like any other physical force, it just exists. And whether you want it or not, whether it’s described in a fancy culture brand guideline or not, it exists anyway. The thing is to be sincere about culture, to embrace the existing culture and to broadcast it to the outside honestly.

Slava: The problem is when the communication about the culture is different from the actual culture. There are various cultures, there are even harsh cultures that someone would find extremely uncomfortable, but for example, for other people it can be a great environment for growth, for rapid growth. Maybe they will change their environment later, but during a certain period of life, it might be important.

Slava: I remember some of my previous companies with pretty harsh cultures, but they helped me to grow and to get where I am right now. Yeah, I wasn’t stressed, but I knew about it. I expected it to happen and I had my inner readiness to resist and to learn my lessons out of that. But the problem is when the company communicates its culture externally as the paradise of wellbeing and mindfulness, but in reality they have deadlines for tomorrow and never ending flow of tasks and crazy stakeholders who demand it from you immediately and give you contradicting requirements. So that’s the problem.

Slava: Of course, yeah, there are some extreme cases when the culture is really toxic, when these are insane, inhuman conditions, I don’t deny that. But in many cases, something that we simply perceive as uncomfortable for ourselves is not necessarily evil, sometimes it is, but not always. And my message is that cultures should be honest. And for that purpose, people should be honest with themselves.

Slava: Managers should look at their company and try to formulate in a simple way what type of a community this is. For example, in, again, one of my previous jobs, we realized that our team is like a university: people come to us and are hired because they want to grow rapidly, they want to grow faster than anywhere else, that’s why they join our company. They don’t get many perks and bonuses, the office is not very fancy and we are not those hipster designers who are always using trendy things. But at the same time, you get a lot of practice and you can earn the trust of a client, you can take things you want to be responsible for yourself. You are not given tasks, but you can take the tasks you find important.

Slava: And when we realized that, we included it in our value proposition because as a company you’re not even interested in attracting people who will feel unsatisfied here. If you are working this way but your external messaging is different, you attract people who are searching for something different, and then, when they come in, they’re highly disappointed and you have to part ways with them in a month or a year, or they will bring elements of their culture into your culture and there is a clash of cultures.

Slava: So the point here, I’m just trying to formulate the same idea but in different ways, it’s to be honest about the culture, it’s extremely important. But also, awareness about your culture. It’s not written, it exists. And sometimes, the company principles are quite misleading, they’re not often true because the real culture is seen at the office, it’s in the Slack chat, it’s in the way how people interact, what they discuss at the coffee machine.

Vitaly: Yeah. And there are, of course, also, I think I read this really nice article maybe a couple of years ago, the idea of different subcultures and how they evolve over time and how they can actually mingle and even merge with, as you might have very different teams working on different side of the world, which then find each other and bring and merge culture. So you kind of have this moving bits and moving parts.

Vitaly: Kind of on the way to one of the conference, I went to Iceland. And there was a really nice friendly guy there who was guiding us through Iceland. And he was telling all this story about nothing ever stops, everything is moving, everything is changing, glaciers are changing, the earth’s changing, everything is changing, everything is moving. And people are pretty much like that. People always find... I mean, maybe people don’t change that much, but they’re still finding ways of collaborating better and finding ways to create something that hopefully works better within the organization. How do you encourage that though?

Vitaly: Very often I encounter situations where it feels like there are people just looking at the clock to finish on time and then, go home. And then, there are people who just want to do everything and they’re very vocal and they will have this incredible amount of enthusiasm everywhere and they will have all the GIFs in Slack and so on and so forth. But then, sometimes I feel like, again, talking about culture, their enthusiasm is clashed against this coldness that is coming from some people. And then, you have camps building. How do you deal with situations like that? You cannot just make people more similar, you just have to deal with very different people who just happen to have very different interests and priorities. How would you manage that?

Slava: That’s an amazing question, and you know why? Because there is no definite answer to it.

Vitaly: I like those kind of questions.

Slava: Yeah. It’s not easy and I struggled a lot with that. I know perfectly, based on my experience, what you’re asking about. One of the solutions might be to hire people who have a similar culture, or at least one consistent with the existing culture. Because if your whole team, or the core team, the majority in the team who set this spirit and this atmosphere, are proactive, you shouldn’t hire people who are highly inconsistent with this kind of culture. Yeah, they might be more passive, more attentive to their schedule, but at least they should not resist it. They can support it maybe in a calmer way, but you don’t need someone critically opposing that state of things, and vice versa. Over time, I understood that.

Slava: Sometime ago, I thought that all designers should be proactive, rock stars, super skilled, taking responsibility about everything. But you know what? That’s quite one-sided point of view. Even if I belong to this kind of designers, it’s important to embrace other types of professionals because the downside of being such a designer is that you are driven forward by your passion, but only when you have this passion and motivation. But if it disappears, you can hardly make yourself do the simplest task. And that’s the problem because this fuel doesn’t feed you anymore.

Slava: On the other hand, those people who are more attentive to their balance between work and relaxation, people who are more attentive to their schedule and are less energetic at work and may be less passionate about what they do, they are more persistent and they can much easier survive such a situation when everything around is falling apart and many people lose motivation just because motivation is not such a strong driver for them. So over time, I understood that there are multiple types of designers and they’re all fine. The thing is to find your niche and to be in the place where you belong.

Vitaly: Right. Interesting. Because on top of that, I do have to ask a question. We could do this forever, we could keep this conversation going forever. I want to be respectful of your time as well. Just from your experience... There are so many people, the people who I’ve been speaking to over this last couple of years, but also here on the podcast, everybody has different opinions about how teams should be led and how the culture should be defined in terms of how people are working, specifically all-remote, a hundred percent remote or all on site, a hundred percent on site or hybrid with one day overlap, two days overlap, three days overlap, four days overlap.

Vitaly: What do you think works? I mean, of course, it’s a matter of the company and where people are located. And obviously, if everybody is from different parts of the world, being on site all the time, moving from, let’s say, fully remote to fully on site is just really difficult. So what would you say is really critical in any of those environments? Can hybrid work really well? Can remote work really well? Can onsite work really well? Maybe there’s truly no best option, but I’m just wondering what we should keep in mind for each of those?

Slava: The culture. So look, culture is everything and it influences the way people work efficiently. If networking is really active in the team, if people communicate a lot apart from their work and tasks and everything, and if it’s normal for the team, if it’s part of the reasons why people are here in this company, then offline work is preferable. If people are more autonomous and they like it and everyone works like that in the company, then there is nothing bad about being hybrid or remote. So you see, it depends on the attitude to work and the general culture, the spirit, how people feel comfortable.

Vitaly: All right. But are you saying that if you have, let’s say, a mix of people who really prefer on site and then, really prefer remote, then you kind of get an issue because how do you merge both of those intentions?

Slava: But how do you get into that situation in the first place?

Vitaly: Well, good question.

Slava: Why have you attracted so different people to your company?

Vitaly: But for the rest — with HR?

Slava: Yes, I read processes.

Vitaly: But there might be different teams and then, eventually those teams get merged and then, eventually, some people come, some people leave and people are rotating from one team to another. And then, eventually, before you know it, you end up in a situation where you’re working on a new product with a new team and then, part are remote, part are on site and part don’t even want to be there.

Slava: That’s why large companies have processes. The thing that you are describing is quite typical for huge companies because you cannot keep a similar work culture forever. As you scale, it becomes weaker and harder to match all the time. There is an amazing diagram that I saw on LinkedIn, created by Julie Zhuo, who also wrote a great book on management. This diagram shows how people hire: A hires B, B hires C, C hires D, and there is a slight difference in their cultures. And if you imagine it as a line of overlapping circles, where A hires B, B hires C, C hires D and so on, then you notice how far A is from, let’s say, H or G; they’re very far away because this chain of hiring brought a certain distortion, a certain mutation into the understanding of the culture with each step.

Slava: It’s like how evolution works. With every century or every few thousand years, a certain species changes one tiny trait, but in a million years, you won’t even recognize it. The same with huge companies: you cannot control everything and micromanage it. So naturally, they’re extremely diverse. And many companies are even proud of being diverse and inclusive, which is another aspect, which is great, but in order to manage it all, they have to introduce processes and be more strictly regulated just to keep it working.

Vitaly: Right. Right. Well, I mean, we could speak about this for hours, I think. But maybe just two more questions before we wrap up. One thing that’s really important to me and really dear to me is that I know that you’ve been mentoring and you’ve been participating in kind of educating about design also specifically for designers who are in Ukraine. And I mean, at this point, we probably have many more connections and many more insights about how design is actually working from Ukraine right now when the war is going on. I’m just wondering, do you see... Because we had a Smashing meet a couple of months ago now. And there was an incredible talk by one of the people from set up team in Ukraine, in Kyiv, and they were speaking about just incredible way of how they changed the way the company works, how they adapted in any way to accommodate for everything. Like some people working from bomb shelters. This is just incredible.

Vitaly: Those kind of stories really make me cry. So this is just unbelievable. And I always have this very, I don’t even know how to describe it, like incredible sense of the strength that everybody who I’m interacting with who is coming through after all this time. It’s been now, what? It’s like one and a half years, right, well, much more than that, actually looking at 2014.

Vitaly: So the question, I guess, that I’m trying to ask here is that strength and that kind of obsession with quality, with good work, with learning, with educating, how did it come to be and how is it now? I don’t know if the question makes sense, but just maybe your general feelings about what designers are feeling and how they are working at this point in May 2023?

Slava: That’s a good question. Unfortunately, I might not be the best person to answer because I’ve been living in Berlin for three years and fortunately, I never experienced working from a bomb shelter, although many of my friends and acquaintances did. But what I know for sure is that the Ukrainian design community is quite peculiar, and it’s an inherent trait. It’s not something that we are taught, but something that is just our characteristic. I know that unlike many other people from other countries, Ukrainian designers are really hungry for knowledge and new skills. And the level of self-organization is quite high because we are not used to getting it off the shelf, we are not used to receiving it, I don’t know, from educational institutions, from the government, from whoever else.

Slava: In Ukraine, or at least definitely my generation, the millennials, we understand that if we don’t do anything, we will fail in life. That’s why we try to build our career early, we think about our future work during the last years of school and at the university, already planning where we’re going to work, how much we’re going to earn and how to find our niche, our place in life.

Slava: And the same in design, we are not waiting until our universities update their programs in order to teach us digital design, we are doing it ourselves, partnering with universities, participating in different courses, contributing to those programs. And I think that this feature, this trait of Ukrainian designers is extremely helpful right now in crisis times. Maybe it didn’t get us that much by surprise, it was still unexpected. But Ukrainian designers and other professionals in other professions, they just try to always have plan B and plan C and maybe even plan D.

Vitaly: Yeah, that’s probably also explains... I mean, I have to ask this question, I really do. Why medieval themes in your UX memes? Oh, even rhymes, it must be true.

Slava: First of all, it’s beautiful and funny. The first time I used medieval art-based memes was several years ago when I worked at EPAM Systems and prepared an internal presentation for one of our internal team meetups. And it was hilarious, everyone was laughing. And since then, I just started doing it all the time. It’s not like-

Vitaly: And you have like 50 of them now or even more?

Slava: More. Many more. It’s just something original. I haven’t seen many medieval memes, especially in the educational and other materials about design and UX. So it’s just, I like to bring positive emotions to my audience. So if it’s hilarious and makes them laugh and if it’s something new that others are not doing or at least that intensively, then why not? And I simply enjoy medieval art, including architecture, gothic style, Romanesque architecture, it’s something from fairy tales or legends, but then, you realize, it was real.

Vitaly: Yeah, so I guess, dear friends listening to this, if you ever want to give or find a nice gift for Slava, look out for medieval art and any books related to that; I think that Slava will sincerely appreciate it. Now, as we’re wrapping up, and I think that you mentioned already the future at this point, I’m curious, this is a question I like asking at the end of every episode. Slava, do you have a dream project that you’d love to work on one day, a magical brand or a particularly interesting project of any industry, of any scope, of any size, with any team? Do you have something in mind that you would love to do one day? Maybe somebody from that team, from that project, from that company, from that brand is now listening.

Slava: Great question, and maybe I don’t have an amazing answer to it because it doesn’t matter. I’m dreaming about bringing value, creating something significant, but I never limited myself to a particular area or a particular company or brand, it just doesn’t matter. If it’s valuable, then it’s a success.

Vitaly: All right, well, if you, dear listener, would like to hear more from Slava, you can find him on LinkedIn where he’s... Guess what? ... Slava Shestopalov, but also on Medium where he writes a lot of stuff around UX, and of course, don’t forget medieval-themed UX memes, and also, on his 5:00 AM travel blog. Slava will also be speaking in Freiburg at SmashingConf, I’m very much looking forward to seeing you there, and maybe even tomorrow, we’ll see about that. So please, dear friends, if you have the time, please drop in at SmashingConf, Freiburg, September 2023. All right, well, thank you so much for joining us today, Slava. Do you have any parting words of wisdom that you would like to send out to the people who might be listening to this 20 years from now? Who knows?

Slava: Oh, wisdom, I’m not that wise yet, but something that I discovered recently is that we should care more about people. Technology is advancing so fast, so the thing which is left is the human factor. Maybe AI will take part of our job, and that’s great because there are many routine tasks no one is fond of doing, but people, we are extremely complex, and understanding who we are and how we designers as humans can serve other humans is essential. So that’s where I have personally been putting my effort recently, and I think that’s a great direction of research for everyone working in design, UX and related areas.

]]>
hello@smashingmagazine.com (Drew McLellan)
<![CDATA[Gatsby Headaches And How To Cure Them: i18n (Part 1)]]> https://smashingmagazine.com/2023/06/gatsby-headaches-i18n-part-1/ https://smashingmagazine.com/2023/06/gatsby-headaches-i18n-part-1/ Mon, 12 Jun 2023 16:00:00 GMT Internationalization, or i18n, is making your content understandable in other languages, regions, and cultures to reach a wider array of people. However, a more interesting question would be, “Why is i18n important?”. The answer is that we live in an era where hundreds of cultures interact with each other every day, i.e., we live in a globalized world. However, our current internet doesn’t satisfy its globalized needs.

Did you know that 60.4% of the internet is in English, but only 16.2% of the world speaks English?

Source: Visual Capitalist

Yes, it’s an enormous gap, and until perfect AI translators are created, the internet community must close it.

As developers, we must adapt our sites to support translations and formats for other countries, languages, and dialects, i.e., localize our pages. There are two main problems when implementing i18n on our sites.

  1. Storing and retrieving content.
    We will need files to store all our translations while not bloating our page’s bundle size and a way to retrieve and display the correct translation on each page.
  2. Routing content.
    Users must be redirected to a localized route with their desired language, like my-site.com/es or en.my-site.com. How are we going to create pages for each locale?

Fortunately, in the case of Gatsby and other static site generators, translations don’t bloat up the page bundle size since they are delivered as part of the static page. The rest of the problems are widely known, and there are a lot of plugins and libraries available to address them, but it can be difficult to choose one if you don’t know their purpose, what they can do, and if they are compatible with your existing codebase. That’s why in the following hands-on guide, we will see how to use several i18n plugins for Gatsby and review some others.

The Starter

Before showing what each plugin can do and how to use them, we first have to start with a base example. (You can skip this and download the starter here.) For this tutorial, we will work with a site with multiple pages created from an array of data, like a blog or wiki. In my case, I chose a cooking blog that will initially have support only for English.

Start A New Project

To get started, let’s start a plain JavaScript Gatsby project without any plugins at first.

npm init gatsby
cd my-new-site

For this project, we will create pages dynamically from markdown files. To be able to read and parse them to Gatsby’s data layer, we will need to use the gatsby-source-filesystem and gatsby-transformer-remark plugins. Here you can see a more in-depth tutorial.

npm i gatsby-source-filesystem gatsby-transformer-remark

Inside our gatsby-config.js file, we will add and configure our plugins to read all the files in a specified directory.

// ./gatsby-config.js

module.exports = {
  //...
  plugins: [
    {
      resolve: `gatsby-source-filesystem`,
      options: {
        name: `content`,
        path: `${__dirname}/src/content`,
      },
    },
    `gatsby-transformer-remark`,
  ],
};

Add Your Content

As you can see, we will use a new ./src/content/ directory where we will save our posts. We will create a couple of folders with each recipe’s content in markdown files, like the following:

├── src
│ ├── content
| | ├── mac-and-cheese
| | | ├── cover.jpg
| | | ├── index.en.md
| | ├── burritos
| | | ├── cover.jpg
| | | ├── index.en.md
| | ├── pizza
| | | ├── cover.jpg
| | | ├── index.en.md
│ ├── pages
│ ├── images

Each markdown file will have the following structure:

---
slug: "mac-and-cheese"
date: "2023-01-20"
title: "How to make mac and cheese"
cover_image:
    image: "./cover.jpg"
    alt: "Macaroni and cheese"
locale: "en"
---

Step 1
Lorem ipsum...

You can see that the first part of the markdown file has a distinct structure and is surrounded by --- on both ends. This is called the frontmatter and is used to save the file’s metadata. In this case, the post’s title, date, locale, etc.

As you can see, we will be using a cover.jpg file for each post, so to parse and use the images, we will need to install the gatsby-plugin-image, gatsby-plugin-sharp, and gatsby-transformer-sharp plugins (I know, there are a lot 😅).

npm i gatsby-plugin-image gatsby-plugin-sharp gatsby-transformer-sharp

We will also need to add them to the gatsby-config.js file.

// ./gatsby-config.js

module.exports = {
  //...
  plugins: [
    {
      resolve: `gatsby-source-filesystem`,
      options: {
        name: `content`,
        path: `${__dirname}/src/content`,
      },
    },
    `gatsby-plugin-sharp`,
    `gatsby-transformer-sharp`,
    `gatsby-transformer-remark`,
    `gatsby-plugin-image`,
  ],
};

Querying Your Content

We can finally start our development server:

npm run develop

And go to http://localhost:8000/___graphql, where we can make the following query:

query Query {
  allMarkdownRemark {
    nodes {
      frontmatter {
        slug
        title
        date
        cover_image {
          image {
            childImageSharp {
              gatsbyImageData
            }
          }
          alt
        }
      }
    }
  }
}

And get the following result:

{
  "data": {
    "allMarkdownRemark": {
      "nodes": [
        {
          "frontmatter": {
            "slug": "/mac-and-cheese",
            "title": "How to make mac and cheese",
            "date": "2023-01-20",
            "cover_image": {
              /* ... */
            }
          }
        },
        {
          "frontmatter": {
            "slug": "/burritos",
            "title": "How to make burritos",
            "date": "2023-01-20",
            "cover_image": {
              /* ... */
            }
          }
        },
        {
          "frontmatter": {
            "slug": "/pizza",
            "title": "How to make Pizza",
            "date": "2023-01-20",
            "cover_image": {
              /* ... */
            }
          }
        }
      ]
    }
  }
}

Now the data is accessible through Gatsby’s data layer, but to access it, we will need to run a query from the ./src/pages/index.js page.

Go ahead and delete all the boilerplate on the index page. Let’s add a short header for our blog and create the page query:

// src/pages/index.js

import * as React from "react";
import {graphql} from "gatsby";

const IndexPage = () => {
  return (
    <main>
      <h1>Welcome to my English cooking blog!</h1>
      <h2>Written by Juan Diego Rodríguez</h2>
    </main>
  );
};

export const indexQuery = graphql`
  query IndexQuery {
    allMarkdownRemark {
      nodes {
        frontmatter {
          slug
          title
          date
          cover_image {
            image {
              childImageSharp {
                gatsbyImageData
              }
            }
            alt
          }
        }
      }
    }
  }
`;

export default IndexPage;

Displaying Your Content

The result from the query is injected into the IndexPage component as a props property called data. From there, we can render all the recipes’ information.

// src/pages/index.js

// ...
import {RecipePreview} from "../components/RecipePreview";

const IndexPage = ({data}) => {
  const recipes = data.allMarkdownRemark.nodes;

  return (
    <main>
      <h1>Welcome to my English cooking blog!</h1>
      <h2>Written by Juan Diego Rodríguez</h2>
      {recipes.map(({frontmatter}) => {
        return <RecipePreview key={frontmatter.slug} data={frontmatter} />;
      })}
    </main>
  );
};

// ...

The RecipePreview component will be the following in a new directory: ./src/components/:

// ./src/components/RecipePreview.js

import * as React from "react";
import {Link} from "gatsby";
import {GatsbyImage, getImage} from "gatsby-plugin-image";

export const RecipePreview = ({data}) => {
  const {cover_image, title, slug} = data;
  const cover_image_data = getImage(cover_image.image.childImageSharp.gatsbyImageData);

  return (
    <Link to={`/recipes/${slug}`}>
      <h1>{title}</h1>
      <GatsbyImage image={cover_image_data} alt={cover_image.alt} />
    </Link>
  );
};

Creating Pages From Your Content

If we go to http://localhost:8000/, we will see all our recipes listed, but now we have to create a custom page for each recipe. We can do it using Gatsby’s File System Route API. It works by writing a GraphQL query inside the page’s filename, generating a page for each query result. In this case, we will make a new directory ./src/pages/recipes/ and create a file called {markdownRemark.frontmatter__slug}.js. This filename translates to the following query:

query MyQuery {
  allMarkdownRemark {
    nodes {
      frontmatter {
        slug
      }
    }
  }
}

And it will create a page for each recipe using its slug as the route.

Now we just have to create the post’s component to render all its data. First, we will use the following query:

query RecipeQuery {
  markdownRemark {
    frontmatter {
      slug
      title
      date
      cover_image {
        image {
          childImageSharp {
            gatsbyImageData
          }
        }
        alt
      }
    }
    html
  }
}

This will query the first markdown file available in our data layer, but to specify the markdown file needed for each page, we will need to use variables in our query. The File System Route API injects the slug in the page’s context in a property called frontmatter__slug. When a property is in the page’s context, it can be used as a query variable under a $ followed by the property name, so the slug will be available as $frontmatter__slug.

query RecipeQuery($frontmatter__slug: String) {
  markdownRemark(frontmatter: {slug: {eq: $frontmatter__slug}}) {
    frontmatter {
      slug
      title
      date
      cover_image {
        image {
          childImageSharp {
            gatsbyImageData
          }
        }
        alt
      }
    }
    html
  }
}

The page’s component is pretty simple. We just get the query data from the component’s props. Displaying the title and date is straightforward, and the html can be injected into a p tag. For the image, we just have to use the GatsbyImage component exposed by the gatsby-plugin-image.

// src/pages/recipes/{markdownRemark.frontmatter__slug}.js

const RecipePage = ({data}) => {
  const {html, frontmatter} = data.markdownRemark;
  const {title, cover_image, date} = frontmatter;
  const cover_image_data = getImage(cover_image.image.childImageSharp.gatsbyImageData);

  return (
    <main>
      <h1>{title}</h1>
      <p>{date}</p>
      <GatsbyImage image={cover_image_data} alt={cover_image.alt} />
      <p dangerouslySetInnerHTML={{__html: html}}></p>
    </main>
  );
};

//...

The last thing is to use the Gatsby Head API to change the page’s title to the recipe’s title. This can be easily done since the query’s data is also available in the Head component.

// src/pages/recipes/{markdownRemark.frontmatter__slug}.js

//...

export const Head = ({data}) => <title>{data.markdownRemark.frontmatter.title}</title>;

Summing all up results in the following code:

// src/pages/recipes/{markdownRemark.frontmatter__slug}.js

import * as React from "react";
import {GatsbyImage, getImage} from "gatsby-plugin-image";
import {graphql} from "gatsby";

const RecipePage = ({data}) => {
  const {html, frontmatter} = data.markdownRemark;
  const {title, cover_image, date} = frontmatter;
  const cover_image_data = getImage(cover_image.image.childImageSharp.gatsbyImageData);

  return (
    <main>
      <h1>{title}</h1>
      <p>{date}</p>
      <GatsbyImage image={cover_image_data} alt={cover_image.alt} />
      <p dangerouslySetInnerHTML={{__html: html}}></p>
    </main>
  );
};

export const recipeQuery = graphql`
  query RecipeQuery($frontmatter__slug: String) {
    markdownRemark(frontmatter: {slug: {eq: $frontmatter__slug}}) {
      frontmatter {
        slug
        title
        date
        cover_image {
          image {
            childImageSharp {
              gatsbyImageData
            }
          }
          alt
        }
      }
      html
    }
  }
`;

export default RecipePage;

export const Head = ({data}) => <title>{data.markdownRemark.frontmatter.title}</title>;

Creating Localized Content

With all this finished, we have a functioning recipe blog in English. Now we will use each plugin to add i18n features and localize the site (for this tutorial) for Spanish speakers. But first, we will make a Spanish version for each markdown file in ./src/content/. Leaving a structure like the following:

├── src
│ ├── content
| | ├── mac-and-cheese
| | | ├── cover.jpg
| | | ├── index.en.md
| | | ├── index.es.md
| | ├── burritos
| | | ├── cover.jpg
| | | ├── index.en.md
| | | ├── index.es.md
| | ├── pizza
| | | ├── cover.jpg
| | | ├── index.en.md
| | | ├── index.es.md
│ ├── pages
│ ├── images

Inside our new Spanish markdown files, we will have the same structure in our frontmatter but translated to our new language and change the locale property in the frontmatter to es. However, it’s important to note that the slug field must be the same in each locale.
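For example, the Spanish version of the mac-and-cheese recipe, index.es.md, could look roughly like the following (the Spanish title and alt text are my own illustrative translations, not part of the starter):

---
slug: "mac-and-cheese"
date: "2023-01-20"
title: "Cómo hacer macarrones con queso"
cover_image:
    image: "./cover.jpg"
    alt: "Macarrones con queso"
locale: "es"
---

Paso 1
Lorem ipsum...

Note how the slug stays identical to the English file while the locale changes to es.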

gatsby-plugin-i18n

This plugin is displayed in Gatsby’s Internationalization Guide as its first option when implementing i18n routes. The purpose of this plugin is to create localized routes by adding a language code in each page filename, so, for example, a ./src/pages/index.en.js file would result in a my-site.com/en/ route.

I strongly recommend not using this plugin. It is outdated and hasn’t been updated since 2019, so it is kind of a disappointment to see it promoted as one of the main solutions for i18n in Gatsby’s official documentation. It also breaks the File System API, so you must use another method for creating pages, like the createPages function in the Gatsby Node API. Its only real use would be to create localized routes for certain pages, but considering that you must create a file for each page and each locale, it would be impossible to manage them on even medium sites. A 20 pages site with support for five languages would need 100 files!
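To put that in perspective, even a tiny site following this per-locale filename convention would need a file for every page and language combination, roughly like this (the page names are illustrative):

├── src
│ ├── pages
| | ├── index.en.js
| | ├── index.es.js
| | ├── about.en.js
| | ├── about.es.js
| | ├── contact.en.js
| | ├── contact.es.js

Each file maps to its own localized route (/en/, /es/, /en/about/, and so on), and every new page or locale multiplies the number of files to maintain.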

gatsby-theme-i18n

Another plugin for implementing localized routes is gatsby-theme-i18n, which will be pretty easy to use in our prior example.

We will first need to install the gatsby-theme-i18n plugin and the gatsby-plugin-react-helmet and react-helmet plugins to help add useful language metadata in our <head> tag.

npm install gatsby-theme-i18n gatsby-plugin-react-helmet react-helmet

Next, we can add it to the gatsby-config.js:

// ./gatsby-config.js

module.exports = {
  //...
  plugins: [
    //other plugins ...
    {
      resolve: `gatsby-theme-i18n`,
      options: {
        defaultLang: `en`,
        prefixDefault: true,
        configPath: require.resolve(`./i18n/config.json`),
      },
    },
  ],
};

As you can see, the plugin configPath points to a JSON file. This file will have all the information necessary to add each locale. We will create it in a new ./i18n/ directory at the root of our project:

[
  {
    "code": "en",
    "hrefLang": "en-US",
    "name": "English",
    "localName": "English",
    "langDir": "ltr",
    "dateFormat": "MM/DD/YYYY"
  },

  {
    "code": "es",
    "hrefLang": "es-ES",
    "name": "Spanish",
    "localName": "Español",
    "langDir": "ltr",
    "dateFormat": "DD.MM.YYYY"
  }
]

Note: To see changes in the gatsby-config.js file, we will need to restart the development server.

And just as simple as that, we added i18n routes to all our pages. Let’s head to http://localhost:8000/es/ or http://localhost:8000/en/ to see the result.

Querying Localized Content

At first glance, you will see a big problem: the Spanish and English pages have all the posts from both locales because we aren’t filtering the recipes for a specific locale, so we get all the available recipes. We can solve this by once again adding variables to our GraphQL queries. The gatsby-theme-i18n injects the current locale into the page’s context, making it available to use as a query variable under the $locale name.

index page query:

query IndexQuery($locale: String) {
  allMarkdownRemark(filter: {frontmatter: {locale: {eq: $locale}}}) {
    nodes {
      frontmatter {
        slug
        title
        date
        cover_image {
          image {
            childImageSharp {
              gatsbyImageData
            }
          }
          alt
        }
      }
    }
  }
}

{markdownRemark.frontmatter__slug}.js page query:

query RecipeQuery($frontmatter__slug: String, $locale: String) {
  markdownRemark(frontmatter: {slug: {eq: $frontmatter__slug}, locale: {eq: $locale}}) {
    frontmatter {
      slug
      title
      date
      cover_image {
        image {
          childImageSharp {
            gatsbyImageData
          }
        }
        alt
      }
    }
    html
  }
}

Localizing Links

You will also notice that all Gatsby links are broken since they point to the non-localized routes instead of the new routes, so they will direct the user to a 404 page. To solve this, gatsby-theme-i18n exposes a LocalizedLink component that works exactly like Gatsby’s Link but points to the current locale. We just have to switch each Link component for a LocalizedLink.

// ./src/components/RecipePreview.js

+ import {LocalizedLink as Link} from "gatsby-theme-i18n";
- import {Link} from "gatsby";

//...

Changing Locales

Another vital feature to add will be a component to change from one locale to another. However, making a language selector isn’t completely straightforward. First, we will need to know the current page’s path, like /en/recipes/pizza, to extract the recipes/pizza part and add the desired locale, getting /es/recipes/pizza.

To access the page’s location information (URL, HREF, path, and so on) in all our components, we will need to use the wrapPageElement function available in the gatsby-browser.js and gatsby-ssr.js files. In short, this function lets you access the props used on each page, including a location object. We can set up a context provider with the location information and pass it down to all components.

First, we will create the location context in a new directory: ./src/context/.

// ./src/context/LocationContext.js

import * as React from "react";
import {createContext} from "react";

export const LocationContext = createContext();

export const LocationProvider = ({location, children}) => {
  return <LocationContext.Provider value={location}>{children}</LocationContext.Provider>;
};

As you can imagine, we will pass the page’s location object to the provider’s location attribute on each Gatsby file:

// ./gatsby-ssr.js & ./gatsby-browser.js

import * as React from "react";
import {LocationProvider} from "./src/context/LocationContext";

export const wrapPageElement = ({element, props}) => {
  const {location} = props;

  return <LocationProvider location={location}>{element}</LocationProvider>;
};

Note: Since we just created the gatsby-ssr.js and gatsby-browser.js files, we will need to restart the development server.

Now the page’s location is available in all components through context, and we can use it in our language selector. We also have to pass down the current locale to all components, and gatsby-theme-i18n exposes a useful useLocalization hook that lets you access the current locale and the i18n config. However, a caveat is that it can’t get the current locale in Gatsby files like gatsby-browser.js and gatsby-ssr.js, only the i18n config.

Ideally, we would want to render our language selector using wrapPageElement so it is available on all pages, but we can’t use the useLocalization hook there. Fortunately, the wrapPageElement props argument also exposes the page’s context and, inside, its current locale.

Let’s create another context to pass down the locale:

// ./src/context/LocaleContext.js

import * as React from "react";
import {createContext} from "react";

export const LocaleContext = createContext();

export const LocaleProvider = ({locale, children}) => {
  return <LocaleContext.Provider value={locale}>{children}</LocaleContext.Provider>;
};

And use it in our wrapPageElement function:

// ./gatsby-ssr.js & ./gatsby-browser.js

import * as React from "react";
import {LocationProvider} from "./src/context/LocationContext";
import {LocaleProvider} from "./src/context/LocaleContext";

export const wrapPageElement = ({element, props}) => {
  const {location} = props;
  const {locale} = element.props.pageContext;

  return (
    <LocationProvider location={location}>
      <LocaleProvider locale={locale}>{element}</LocaleProvider>
    </LocationProvider>
  );
};

The last thing is how to remove the locale (es or en) from the path (/es/recipes/pizza). Using the following simple but ugly regex, we can remove all the /en/ and /es/ at the beginning of the path:

/(\/e(s|n)|)(\/*|)/

It’s important to note that the regex pattern only works for the en and es combination of locales.
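If you plan to support more locales later, one option is to derive the pattern from the i18n config instead of hard-coding it. The sketch below assumes config is the same array exposed by the useLocalization hook, and the helper name is made up for illustration:

// Illustrative helper: build the locale-stripping regex from the i18n config
// instead of hard-coding "en" and "es".
const stripLocalePrefix = (pathname, config) => {
  // e.g. ["en", "es"] -> "en|es"
  const codes = config.map(({code}) => code).join("|");
  // Matches "/en", "/en/", "/es", "/es/" only at the start of the path.
  const pattern = new RegExp(`^/(${codes})(/|$)`);
  return pathname.replace(pattern, "");
};

// stripLocalePrefix("/es/recipes/pizza", config) -> "recipes/pizza"

For the rest of this example, though, we will stick with the simpler hard-coded pattern.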

Now we have to create our LanguageSelector.js:

// ./src/components/LanguageSelector

import * as React from "react";
import {useContext} from "react";
import {useLocalization} from "gatsby-theme-i18n";
import {Link} from "gatsby";
import {LocationContext} from "../context/LocationContext";
import {LocaleContext} from "../context/LocaleContext";

export const LanguageSelector = () => {
  const {config} = useLocalization();
  const locale = useContext(LocaleContext);
  const {pathname} = useContext(LocationContext);

  const removeLocalePath = /(\/e(s|n)|)(\/*|)/;
  const pathnameWithoutLocale = pathname.replace(removeLocalePath, "");

  return (
    <div>
      {config.map(({code, localName}) => {
        return (
          code !== locale && (
            <Link key={code} to={`/${code}/${pathnameWithoutLocale}`}>
              {localName}
            </Link>
          )
        );
      })}
    </div>
  );
};

Let’s break down what is happening:

  1. Get our i18n config through the useLocalization hook.
  2. Get the current locale through context.
  3. Get the page’s current pathname through context, which is the part that comes after the domain (like /en/recipes/pizza).
  4. We remove the locale part of the pathname using a regex pattern (leaving just recipes/pizza).
  5. We want to render a link for each available locale except the current one, so we will check if the locale is the same as the page before rendering a common Gatsby Link to the desired locale.

Now inside our gatsby-ssr.js and gatsby-browser.js files, we can add our LanguageSelector:

// ./gatsby-ssr.js & ./gatsby-browser.js

import * as React from "react";
import {LocationProvider} from "./src/context/LocationContext";
import {LocaleProvider} from "./src/context/LocaleContext";
import {LanguageSelector} from "./src/components/LanguageSelector";

export const wrapPageElement = ({element, props}) => {
  const {location} = props;
  const {locale} = element.props.pageContext;

  return (
    <LocationProvider location={location}>
      <LocaleProvider locale={locale}>
        <LanguageSelector />
        {element}
      </LocaleProvider>
    </LocationProvider>
  );
};

Redirecting Users

The last detail to address is that now the non-i18n routes like http://localhost:8000/ or http://localhost:8000/recipes/pizza are empty. To solve this, we can redirect the user to their desired locale using Gatsby’s redirect in gatsby-node.js.

// ./gatsby-node.js

exports.createPages = async ({actions}) => {
  const {createRedirect} = actions;

  createRedirect({
    fromPath: `/*`,
    toPath: `/en/*`,
    isPermanent: true,
  });

  createRedirect({
    fromPath: `/*`,
    toPath: `/es/*`,
    isPermanent: true,
    conditions: {
      language: [`es`],
    },
  });
};

Note: Redirects only work in production! Not in the local development server.

With this, each page that doesn’t start with the English or Spanish locale will be redirected to a localized route. The wildcard * at the end of each route says it will redirect them to the same path, e.g., it will redirect /recipes/mac-and-cheese/ to /en/recipes/mac-and-cheese/. Also, it will check for the specified language in the request’s origin and redirect to the locale if available; else, it will default to English.

react-intl

react-intl is an internationalization library for any React app that can be used with Gatsby without any extra configuration. It provides a component to handle translations and many more to format numbers, dates, times, etc. Like the following:

  • FormattedNumber,
  • FormattedDate,
  • FormattedTime.

It works by adding a provider called IntlProvider to pass down the current locale to all the react-intl components. Among others, the provider takes three main attributes:

  • messages
    An object with all your translations.
  • locale
    The current page’s locale.
  • defaultLocale
    The default page’s locale.

So, for example:

  <IntlProvider messages={{}} locale="es" defaultLocale="en" >
      <FormattedNumber value={15000} />
      <br />
      <FormattedDate value={Date.now()} />
      <br />
      <FormattedTime value={Date.now()} />
      <br />
  </IntlProvider>,

Will format the given values to Spanish and render:

15.000

23/1/2023

19:40

But if the locale attribute in IntlProvider was en, it would format the values to English and render:

15,000

1/23/2023

7:42 PM

Pretty cool and simple!

Using react-intl With Gatsby

To showcase how the react-intl works with Gatsby, we will continue from our prior example using gatsby-theme-i18n.

We first will need to install the react-intl package:

npm i react-intl

Secondly, we have to write our translations, and in this case, we just have to translate the title and subtitle on the index.js page. To do so, we will create a file called messages.js in the ./i18n/ directory:

// ./i18n/messages.js

export const messages = {
  en: {
    index_page_title: "Welcome to my English cooking blog!",
    index_page_subtitle: "Written by Juan Diego Rodríguez",
  },
  es: {
    index_page_title: "¡Bienvenidos a mi blog de cocina en español!",
    index_page_subtitle: "Escrito por Juan Diego Rodríguez",
  },
};

Next, we have to set up the IntlProvider in the gatsby-ssr.js and gatsby-browser.js files:

// ./gatsby-ssr.js & ./gatsby-browser.js

import * as React from "react";
import {LocationProvider} from "./src/context/LocationContext";
import {LocaleProvider} from "./src/context/LocaleContext";
import {IntlProvider} from "react-intl";
import {LanguageSelector} from "./src/components/LanguageSelector";
import {messages} from "./i18n/messages";

export const wrapPageElement = ({element, props}) => {
  const {location} = props;
  const {locale} = element.props.pageContext;

  return (
    <LocationProvider location={location}>
      <LocaleProvider locale={locale}>
        <IntlProvider messages={messages[locale]} locale={locale} defaultLocale="en">
          <LanguageSelector />
          {element}
        </IntlProvider>
      </LocaleProvider>
    </LocationProvider>
  );
};

Then we use the FormattedMessage component with an id attribute holding the desired translation key name:

// ./src/pages/index.js

// ...
import {FormattedMessage} from "react-intl";

const IndexPage = ({data}) => {
  const recipes = data.allMarkdownRemark.nodes;

  return (
    <main>
      <h1>
        <FormattedMessage id="index_page_title" />
      </h1>
      <h2>
        <FormattedMessage id="index_page_subtitle" />
      </h2>
      {recipes.map(({frontmatter}) => {
        return <RecipePreview key={frontmatter.slug} data={frontmatter} />;
      })}
    </main>
  );
};

// ...

And as simple as that, our translations will be applied depending on the current user’s locale. However, i18n isn’t only about translating all the text into other languages but also about adapting the way numbers, dates, currency, and so on are formatted to the user’s region. In our example, we can format the date on each recipe page according to the current locale using the FormattedDate component.

// ./src/pages/recipes/{markdownRemark.frontmatter__slug}.js

//...
import {FormattedDate} from "react-intl";

const RecipePage = ({data}) => {
  const {html, frontmatter} = data.markdownRemark;
  const {title, cover_image, date} = frontmatter;
  const cover_image_data = getImage(cover_image.image.childImageSharp.gatsbyImageData);

  return (
    <main>
      <h1>{title}</h1>
      <FormattedDate value={date} year="numeric" month="long" day="2-digit" />
      <GatsbyImage image={cover_image_data} alt={cover_image.alt} />
      <p dangerouslySetInnerHTML={{__html: html}}></p>
    </main>
  );
};

//...

As you can see, we feed the component the raw date and specify how we want to display it. Then the component will automatically format it to the correct locale. And with the year, month, and day attributes, we can further customize how to display our date. In our example, the date 19-01-2023 will be formatted the following way:

English: January 19, 2023

Spanish: 19 de enero de 2023

If we want to add a localized string around the date, we can use react-intl arguments. Arguments are a way to add dynamic data inside our react-intl messages. They work by adding curly braces {} inside a message.

The arguments follow this pattern { key, type, format }, in which

  • key is the data to be formatted;
  • type specifies if the key is going to be a number, date, time, and so on;
  • format further specifies the format, e.g., if a date is going to be written like 10/05/2023 or October 5, 2023.

In our case, we will name our key postedOn, and it will be a date type in a long format:

// ./i18n/messages.js

export const messages = {
  en: {
    // ...
    recipe_post_date: "Written on {postedOn, date, long}",
  },
  es: {
    // ...
    recipe_post_date: "Escrito el {postedOn, date, long}",
  },
};

// ./src/pages/recipes/{markdownRemark.frontmatter__slug}.js

//...
import {FormattedMessage} from "react-intl";

const RecipePage = ({data}) => {
  const {html, frontmatter} = data.markdownRemark;
  const {title, cover_image, date} = frontmatter;
  const cover_image_data = getImage(cover_image.image.childImageSharp.gatsbyImageData);

  return (
    <main>
      <h1>{title}</h1>
      <FormattedMessage id="recipe_post_date" values={{postedOn: new Date(date)}} />
      <GatsbyImage image={cover_image_data} alt={cover_image.alt} />
      <p dangerouslySetInnerHTML={{__html: html}}></p>
    </main>
  );
};

//...

Note: For the date to work, we will need to create a new Date object with our date as its only argument.

Localizing The Page’s Title

The last thing you may have noticed is that the index page’s title isn’t localized. In the recipe pages’ case, this isn’t a problem since they query the already localized title for each post, but the index page doesn’t. Solving this can be tricky for two reasons:

  1. You can’t use the Gatsby Head API directly with react-intl since the IntlProvider doesn’t exist for components created inside the Head API.
  2. You can’t use the FormattedMessage component inside the title tag since it only allows a simple string value, not a component.

However, there is a workaround for both problems:

  1. We can use react-helmet (which we installed with gatsby-theme-i18n) inside the page component where the IntlProvider is available.
  2. We can use the react-intl imperative API to get the messages as strings instead of the FormattedMessage component. In this case, the imperative API exposes a useIntl hook which returns an intl object, and the intl.messages property holds all our messages.

So the index component would end up like this:

// ./src/pages/index.js

// ...
import {FormattedMessage, useIntl} from "react-intl";
import {Helmet} from "react-helmet";

const IndexPage = ({data}) => {
  const intl = useIntl();

  const recipes = data.allMarkdownRemark.nodes;

  return (
    <main>
      <Helmet>
        <title>{intl.messages.index_page_title}</title>
      </Helmet>
      <h1>
        <FormattedMessage id="index_page_title" />
      </h1>
      <h2>
        <FormattedMessage id="index_page_subtitle" />
      </h2>
      {recipes.map(({frontmatter}) => {
        return <RecipePreview key={frontmatter.slug} data={frontmatter} />;
      })}
    </main>
  );
};

// ...

react-i18next

react-i18next is a well-established library for adding i18n to React sites, and it brings the same features as react-intl plus more hooks and utilities. However, a crucial difference is that setting up react-i18next requires creating a wrapper plugin in gatsby-node.js, while react-intl can be used as soon as it’s installed, so I believe react-intl is the better option with Gatsby. That said, there are already plugins that make setting up react-i18next faster, like gatsby-plugin-react-i18next and gatsby-theme-i18n-react-i18next.
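
For reference, a minimal gatsby-config.js sketch for gatsby-plugin-react-i18next might look something like the following. The locales folder path, language codes, and option values here are assumptions, so check the plugin’s documentation before relying on them:

// ./gatsby-config.js (illustrative sketch)

module.exports = {
  plugins: [
    // Source the JSON translation files so the plugin can find them.
    {
      resolve: `gatsby-source-filesystem`,
      options: {
        name: `locale`,
        path: `${__dirname}/locales`,
      },
    },
    {
      resolve: `gatsby-plugin-react-i18next`,
      options: {
        localeJsonSourceName: `locale`, // must match the filesystem source name above
        languages: [`en`, `es`],
        defaultLanguage: `en`,
        siteUrl: `http://localhost:8000`,
      },
    },
  ],
};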

Conclusion

The current state of Gatsby, and especially of its plugins, is precarious, and its popularity declines each year, so it’s important to know how to handle it and which plugins to use if you want to work with Gatsby. Despite all that, I still believe Gatsby is a powerful tool, and it’s still worth starting a new project with npm init gatsby.

I hope you found this guide useful and leave with a better grasp of i18n in Gatsby and with less of a headache. In the next article, we will explore an in-depth solution to i18n by creating your own i18n plugin!

]]>
hello@smashingmagazine.com (Juan Diego Rodríguez)
<![CDATA[Design Under Constraints: Challenges, Opportunities, And Practical Strategies]]> https://smashingmagazine.com/2023/06/design-constraints-challenges-opportunities-practical-strategies/ https://smashingmagazine.com/2023/06/design-constraints-challenges-opportunities-practical-strategies/ Fri, 09 Jun 2023 10:00:00 GMT “If you don’t want to work within constraints, become an artist.” That is what one of my design lecturers told me when I was at university back when the web wasn’t even a thing.

That has turned out to be one of the most useful pieces of advice I ever received in my career and has led me to embrace and even enjoy working within constraints, which probably explains why I tend to specialize in highly regulated sectors with enormous amounts of stakeholders and legacy.

So, if you find working within constraints challenging, this is the post for you. In it, I hope to change your attitude towards constraints and provide practical ways of dealing with even the most frustrating barriers.

But let’s begin by looking at the kind of constraints you could find yourself facing.

Constraints On Every Side

The constraints we face come in all shapes and sizes, from technical constraints due to legacy technology or backwards compatibility to legal constraints relating to compliance requirements or accessibility.

Then there can be inadequate availability of images, video, and text or simply a lack of access to stakeholders.

However, the biggest two, without a doubt, are a lack of time and a lack of resources (either money or people). In fact, it is rare to encounter a project where nobody is in a hurry and you have enough resources to do the job properly!

It is easy to let all of these obstacles demoralize you, but I would encourage you to embrace, rather than resist, these constraints.

Why You Should Embrace Your Constraints

Constraints are not a set of necessary evils we have to endure. Instead, they are the core of what shapes the work we do.

  • Constraints provide a clear set of guidelines and limitations, which can help focus the design process and prevent scope creep.
  • Constraints help to build trust with clients or stakeholders, as they can see that the designer is able to work within their limitations and still deliver a high-quality product.
  • But most importantly of all, constraints can lead to more creative and innovative solutions, as designers are forced to think creatively within the given limitations.

I have done some of my best work over the years precisely because of the constraints placed upon me, not despite them.

Also, some constraints are a good idea. Ensuring a site is accessible just makes sense, as does limiting the time and money an organization is willing to invest.

Not that you should blindly accept every constraint placed upon you.

Know When To Push Back Against Constraints

Unsurprisingly, I would encourage you to challenge constraints that are based on incorrect assumptions or outdated information. However, you won’t come across those all that frequently.

More common are constraints that make sense from “a certain point of view.” However, these kinds of constraints are not always right within the context of the project and its long-term objectives.

For example, attempting to deliver a project within a strict budget and on an aggressive schedule may reduce the cost to the organization. But it will substantially increase the risk of the project failing, and so ultimately, the money and time that were spent will be wasted.

Another common example is compliance constraints. These constraints exist to protect the organization from possible risk, but many larger organizations become so risk-averse that they undermine their competitiveness in the market. They swap one type of risk for another.

The key in these situations is to demonstrate the cost of any constraint placed upon you.

Demonstrating The Cost Of An Unhealthy Constraint

Often, those who impose constraints upon you do not see the problems these constraints create. This is usually because they are only thinking in terms of their own area of responsibility. For example, a compliance officer is only going to be thinking about compliance and not the broader user experience. Equally, the IT department is going to be more focused on security and privacy than conversion or usability.

Ultimately the decision of whether to enforce a constraint or not comes down to balancing multiple factors. Therefore, what you need to do is

Demonstrate the cost associated with a constraint so that senior management (who take a more holistic view) has all of the facts to make a final decision.

You can demonstrate the cost in one of three ways. You can either focus on the damage that a constraint causes, the cost of not taking an action the constraint prevents, or the lost opportunities imposed by the constraint.

Let’s look at each to help you see more clearly how this can work.

Highlight The Hidden Damage Of A Constraint

I once worked for a consumer electronics company. One of their biggest sellers was a kettle that included a water filter which prevented limescale build-up. (I know, I work on the most exciting projects!)

The company insisted that when somebody added the kettle to their shopping cart, we should automatically add a set of water filters as well.

This is a well-known dark pattern that damages the user experience, but I also knew that it was increasing the average order value, a key metric the e-commerce team tracked.

To combat this constraint, I knew I had to demonstrate that it was causing damage that the e-commerce team and leadership were unaware of. So, I took the following steps:

  • I gathered evidence on social media of users complaining about this issue.
  • I contacted the customer support team to get some metrics about the number of complaints.
  • I contacted the returns team to find out how many people returned the filters.
  • I looked on review sites to see the number of negative reviews relating to filters.

Sure enough, I found substantial evidence that this was a major issue among consumers. But I didn’t stop there. I wanted to associate a financial cost with the decision, so I made some estimates:

  • I made my best guess at the cost of combating the negative reviews, referencing various sources I found online.
  • I researched the average cost of dealing with a complaint and combined it with the data from the customer services team to guess the overall cost of dealing with filter complaints.
  • I used a similar approach to work out an approximate cost of processing returned filters.

Now, let me be clear, these were nothing more than guesses on my part. My figures were not accurate, and people in the company were quick to challenge them. But associating a dollar value with the problem got their attention!

I agreed that my figures were probably wildly off and suggested we do some proper research to find out the real cost.

You don’t need hard data to demonstrate there is a problem. An educated guess is good enough to start a discussion.

Of course, not all constraints are actively causing damage. Some are merely preventing some better action from being taken. In these cases, you need a different approach.

Focus On The Cost Of Inaction

Over time, an organization establishes processes and procedures that have been proven to work for them. The bigger the organization, the more standard operating procedures they have, and the more constraints you encounter.

Well-established companies become so afraid of losing their position that they become extremely risk-averse, and so place considerable constraints on any project.

People succeed in organizations like this by doing what has been done before. This can be problematic for those of us who work in digital because most of what we are trying to do is new.

To combat this bias towards the status quo, we need to demonstrate the cost of inaction. Put another way, we need to show management that if they do not do things differently, it will threaten the market position the organization has established.

In most cases, the best approach is to focus on the competition. Do a bit of research and show that the competition is less risk-averse and gaining market share as a result. Keep mentioning how they are doing things differently and how that threatens your organization’s market position.

Another tactic is to demonstrate how customer expectations have changed and that if the company does not act, they will begin to lose market share.

This is particularly easy to do because users’ expectations regarding digital have skyrocketed in recent years.

“The last best experience that anyone has anywhere becomes the minimum expectation for the experiences they want everywhere.”
— Bridget van Kralingen, Senior Vice President of IBM Global Markets

Put another way, users are comparing your organization’s subpar digital experience to the very best of what they are interacting with online, even when that comparison is not fair.

A bit of user research goes a long way in this regard. For example, consider running a system usability scale survey to compare your digital platforms to this industry benchmark. Alternatively, run a survey asking how important the digital experience is to customers.

While fear of losing market share is a big motivator to well-established businesses, younger, hungrier businesses tend to be more motivated by lost opportunities.

Demonstrate Lost Opportunities

Your management, stakeholders, and colleagues often do not realize what they are missing out on because of the constraints they place upon you. It, therefore, falls to you to demonstrate those opportunities.

Sometimes, you can make this case with analytics. For example, recently, I was working with a client who insisted on having a pricing page on their website, despite the fact the page showed no pricing! Instead, the page had a request pricing form.

They wanted to keep the page because they were afraid to lose the handful of leads that came via the page. However, I was able to convince them otherwise by pointing out that the page was actively alienating the majority of users who visited it, effectively losing them leads.

I did this by demonstrating the page had a higher bounce rate than any other page on the site, was the most common exit page, and had the lowest dwell time.

But analytics is not my favorite approach for demonstrating lost opportunities. Instead, I typically turn to prototyping.

Prototyping is a great way of demonstrating exactly what an organization will miss out on if they insist on unreasonable constraints, presuming, that is, that you create a prototype that is free from those constraints.

I use this approach all the time. Imagine, for example, that you have been told that a particular technology stack imposes a set of restrictive constraints on how an interface is designed. By prototyping what the interface could be if you were free from those constraints, you can make a powerful case for changing the technology stack.

Having a prototype gives you something to test against. You can use usability testing to provide hard evidence of how much it improves the user experience, findability, and even conversion.

Even more significantly, a prototype will excite internal stakeholders. If your prototype is compelling enough, they will want that solution, and that changes the conversation.

Instead of you having to justify why the IT stack needs to be changed, now the IT team has to justify why their IT stack cannot accommodate your solution. Stakeholders and management will want to know why they cannot have what they have fallen in love with.

Of course, people will not always fall in love with your prototype, and ultimately, many of your attempts to overcome constraints will fail despite your best efforts, and you need to accept that.

Conceding Defeat With Grace

Let’s be clear. It is your job to demonstrate to management or clients that a constraint placed upon you is unhealthy. They cannot be expected to know instinctively. They do not have your perspective on the project and so cannot see what you see.

This means that if they fail to remove the constraint you consider unhealthy, it is your failure to demonstrate the problem, not their fault.

Sure, you might consider them shortsighted or naive. But ultimately, you failed to make your case.

Also, it is important to note that you don’t always have the whole picture. A decision may be bad from a user experience perspective, for example, but it may be the right thing for the business. There will always be other factors at play that you are unaware of.

So when you fail to make your case, accept that with grace and do your best to work within the constraints given to you.

Ultimately, your working relationship with management, colleagues, and clients is more important than your professional pride and getting your way.

]]>
hello@smashingmagazine.com (Paul Boag)
<![CDATA[Testing Sites And Apps With Blind Users: A Cheat Sheet]]> https://smashingmagazine.com/2023/06/testing-sites-apps-blind-users-cheat-sheet/ https://smashingmagazine.com/2023/06/testing-sites-apps-blind-users-cheat-sheet/ Wed, 07 Jun 2023 10:00:00 GMT This article focuses on the users of screen readers — special software that converts the source code of a site or app into speech. Usually, these are people with low vision and blindness but not only. They’ll help you discover most accessibility issues. Of course, the topic is too vast for a single article, but this might help to get started.

Part 1. What Is Accessibility Testing?

1.1. Testing vs. Audit

There are many ways of evaluating the accessibility of a digital product, but let’s start with distinguishing two major approaches.

Auditing is an element-by-element comparison of a site or app against a list of accessibility requirements, be it a universal standard (WCAG) or a country-specific law (like ADA in the U.S. or AODA in Ontario, Canada). There are two ways to do an audit:

  1. Automated audit
    Checking accessibility by means of web apps, plugins for design and coding software, or browser extensions (for example, axe DevTools, ARC Toolkit, WAVE, Stark, and others). These tools generate a report with issues and recommendations.
  2. Expert audit
    Evaluation of web accessibility by a professional who knows the requirements. This person can employ assistive technology and have a disability, but this is anyway an expert with advanced knowledge, not a “common user.” As a result, you get a report too, but it’s more contextual and sensible.

Testing, unlike auditing, cannot be done by one person. It involves users of assistive technologies and comprises a set of one-on-one sessions facilitated by a designer, UX researcher, or another professional.

Today we’ll focus on testing as an undervalued yet powerful method.

1.2. Usability vs. Accessibility Testing

You might have already heard about usability testing or even tried it. No wonder it’s the top research method among designers. So how is it different from its accessibility counterpart?

Common features:

  • Script
    In both cases, a facilitator prepares a full written script with an introduction, questions, and tasks based on a realistic scenario (for example, buying a ticket or ordering a taxi). By the way, here are handy testing script templates.
  • Insights gathering
    Despite accessibility testing’s main focus, it also reveals lots of usability issues, simply said, whether a site or app is easy to use. In both cases, a facilitator should ask follow-up questions to get an insight into people’s way of thinking, pain points, and needs.
  • Format
    Both testing types can be organized online or offline. Usually, one session takes from 30 minutes to 1 hour.

Key differences:

  • Participants selection
    People for usability testing are recruited mainly by demographic characteristics: job title, gender, country, professional experience, etc. When you test accessibility, you take into account the senses and assistive technologies involved in using a product.
  • What you can test
    In usability testing, you can test a live product, an interactive prototype (made in Figma, Protopie, Framer, etc.), or even a static mockup. Accessibility testing, in most cases, requires a live product; prototyping tools cannot deliver source code compatible with assistive technology. Figma attempted to make prototypes accessible, but it’s still far from perfect.
  • Giving hints
    When participants get stuck in the flow, you help them find the way out. But when you involve people with disabilities, you have to understand how their assistive gear works. Just to give you an example, a phrase like “Click on the red cross icon in the corner” will sound silly to a blind user.

1.3. Why Opt For Testing?

Now that you know the difference between an audit and testing and the distinction between usability and accessibility testing, let’s clarify why testing is so powerful. There are two reasons:

  1. Get valuable insights.
    The idea of testing is to learn how you can improve the product. While you won’t check all interface elements and edge cases, such sessions show if the whole flow works and if people can reach the goal. Unlike even the most comprehensive audits, testing is much closer to reality and based on the usage of real assistive technology by a person with a disability.
  2. Build empathy through storytelling.
    A good story is more compelling than bare numbers. Besides, it can serve as a helpful addition to such popular pro-accessibility arguments as legal risks, winning new customers, or brand impact. Even 1–2 thorough sessions can give you enough material for a vivid story to excite the team about accessibility. An audit report alone may not be as thrilling to read.

Testing gives you more realistic insights into common scenarios. Laws and standards aren’t perfect, and formal compliance might not cover all the user challenges. Sometimes people take not the “designed” path to the goal but the one that seems safer or more intuitive, and testing reveals it.

Of course, auditing is still a powerful method; however, its combination with testing will show much more accurate results. Now, let’s talk about accessibility testing in detail.

Part 2. Recruiting Users

There are many types of disabilities and, consequently, various assistive technologies that help people browse the web. Without a deep dive into theory, let’s just recap the variety of disabilities:

  • Depending on the senses involved or the affected area of life: visual (blindness, color deficiency, low vision), physical (cerebral palsy, amputation, arthritis), cognitive (dyslexia, Down syndrome, autism), auditory (deafness, hearing loss), and so on.
  • By severity: permanent (for example, an amputated leg or some innate condition), temporary (a broken arm or, let’s say, blurred vision right after using eye drops), and situational (for instance, a noisy room or carrying a child).

Note: You can find more information on various types of disabilities on the Microsoft Inclusive Design hub.

For the sake of simplicity, we’ll focus on the case applicable to most digital products: when a site or app mostly relies on vision. In this case, visual assistive technologies offer users an alternative way to work with content online. The most common technologies are:

  • Screen readers: software that converts text into speech and has numerous handy shortcuts to navigate efficiently. (We’ll talk about it in detail in the next chapters.)
  • Refreshable Braille displays: devices able to show a line of tactile Braille text. Round-tipped pins are raised through holes in a surface and refresh as a user moves their cursor on the screen. Such displays are vital for deafblind people.
  • Virtual assistants (Amazon Alexa, Apple Siri, Google Assistant, and others): an excellent example of universal design that serves the needs of both people with disabilities and non-disabled people. Assistants interpret human speech and respond via synthesized voices.
  • High-contrast displays or special modes: for people with low vision. Some users combine a high-contrast mode with a screen reader.

2.1. Who To Involve

Debates around an optimal number of testing participants are never-ending. But we are talking here about a particular case — organizing accessibility testing for the first time, hence the recommendation is the following:

  • Invite 3–6 users with blindness and low vision who either browse the web by means of screen readers or use a special mode (for example, extra zoom or increased contrast).
  • If your product has rich data visualization (charts, graphs, dashboards, or maps), involve several people with color blindness.

In any case, it’s better to conduct even one or two high-quality sessions than a dozen of poorly prepared ones.

2.2. Where To Find People

It is not as hard to find people for testing as it seems at first glance. If you are working on a mass product for thousands of users, participants won’t need any special knowledge apart from proficiency with their assistive technology. Here are three sources we recommend checking:

  • Specialized platforms for recruiting users according to your parameters (for example, Access Works or UserTesting). This method is the fastest but not the cheapest one because platforms take their commission on top of user compensation.
  • Social media communities of people with disabilities. Try searching by the keywords like “people with disabilities,” “PWD,” “support group,” “visually impaired,” “partially sighted,” or “blind people.” Ask the admin’s permission to post your research announcement, and it won’t be rejected.
  • Social enterprises and non-profits that work in the area of inclusion, employment, and support for people with disabilities (for example, Inclusive IT in Ukraine or The Federation of the Blind and Partially Sighted in Germany). Drop them an email with your request.

We noticed that the last two points might sound like getting participants for free, but not everyone has an opportunity to volunteer.

When we organized accessibility testing sessions last year, three people agreed to take part pro bono because it was a university course, and we didn’t make any profit. Otherwise, be ready to compensate participants for their time (in my experience, around €15–30). It can be an Amazon gift card or a coupon for something useful in a particular country (only ensure it’s accessible).

Digital product companies that test accessibility regularly hire people with disabilities so that they have access to in-progress software and can check it iteratively before the official launch.

Part 3. Preparing For The Session

Now that you’ve recruited participants, it’s time to discuss things to prepare before the sessions. And the first question is:

3.1. Online Or Offline?

There are basically two ways to conduct testing sessions: remotely or face-to-face. While we usually prefer the first one, both techniques have pros and cons, so let’s talk about them.

Benefits of online:

  • Native environment.
    Participants can use familiar home equipment, like a desktop computer or laptop, with nicely tuned assistive technology (plugins, modes, settings).
  • Cost and time efficiency.
    No need to reimburse expenses for traveling to your office. It might be quite costly if a participant arrives with an accompanying person or needs special accessible transport.
  • Easier recruitment.
    It’s more likely you’ll find a participant that meets your criteria around the world instead of searching in your city (and again, zero travel expenses).

Benefits of offline:

  • Testing products in development.
    If you have a product that isn’t public yet, participants won’t be able to easily install it or open it in a browser. So, you’ll have to invite participants to your office, but they should probably bring the portable version of their assistive technology (for example, on a USB drive).
  • Testing mobile apps.
    If a person brings a personal phone, you’ll see not only the interaction with your product but also how the device is set up and what gestures and shortcuts a person uses.
  • Helping inexperienced users.
    Using assistive technology is a skill, and you may involve someone who is not yet proficient with it. So, the offline setting is more convenient when participants get stuck and you help them find the way out.

As you can see, online testing has more universal advantages, whereas the offline format rather suits niche cases.

3.2. Communication Tools

Once you decide to test online, a logical question is what tool to choose for the session. Basically, there are two options:

Specialized testing tools (for instance, UserTesting, Lookback, UserZoom, Hotjar, Useberry):

  • Apart from basic conferencing functionality, they support advanced note-taking, automatic transcription, click heatmaps, dashboards with testing results, and other features.
  • They are quite costly. Besides, trial versions may be too limited for even a single real session.
  • Participants may get stuck with an unfamiliar tool that they’ve never used before.

Popular video conferencing tools (for example, Google Meet, Zoom, Microsoft Teams, Skype, Webex):

  • Support all the minimally required functionality, such as video calls, screen-sharing, and call recording.
  • They are usually free.
  • There is a high chance that participants know how to use them. (Note: even in this case, people may still experience trouble launching screen-sharing).

Since we are talking about your first accessibility testing, it’s much safer and easier to utilize an old good video conferencing tool, namely the one that your participants have experience with. For example, when we organized educational testing sessions for the Ukrainian Catholic University, we used Skype, and at the HTW University in Berlin, we chose Zoom.

Regardless of the tool choice, learn in advance how screen-sharing works in it. You’ll likely need to explain it to some of the participants using suitable (non-visual) language. As a result, the intro to accessibility testing sessions may take longer compared to usability testing.

3.3. Tasks

As we figured out before, accessibility testing requires a working piece of software (let’s say, an alpha or beta version); it’s harder to build, but it opens vast research opportunities. Instead of asking a participant to imagine something, you can actually observe them ordering a pizza, booking a ticket, or filling in a web form.

Recommendations for accessibility testing tasks aren’t much different from the ones in usability testing. Tasks should be real-life and formulated in a way people naturally think. Instead of referring to an interface (what button a person is supposed to click), you should describe a situation that could happen in reality.

Start a session with a mini-interview to learn about participants’ relevant experiences. For example, if you are going to test an air travel service, ask people if they travel frequently and what their desired destinations are. Based on these details, customize the tasks — booking a ticket to the place of the participant’s choice, not a generic location suggested by you.

Examples of realistic, broad tasks:

  • Testing a consumer product: bicycle online store.
    You want to buy a gift card for your colleague George who enjoys bikepacking. Choose the card value, customize other preferences, and select how George will receive the gift. (This task implies that you learned about a real George who likes cycling during a mini-interview.)
  • Testing a professional product: customer support tool.
    Your manager asked you to take a look at several critical issues that haven’t been answered for a week. Find those tickets and find out how to react to them. (This task implies that you invited a participant who worked as a customer support agent or in a similar role.)

Examples of leading UI-based tasks:

  • Consumer product
    “Open the main menu and find the ‘Other’ category. Choose a €50 gift card. In the ‘For whom’ input field enter ‘John Doe’… Select ‘Visa/Mastercard’ as a paying method…”
  • Professional product
    “Navigate to the dashboard. Choose the ‘Last week’ option in the ‘Status’ filter and look at the list of tickets. Apply the filter ‘Sort by date’ and tell me what the top-most item is…”

A testing session is 50% preparation and 50% human conversation. It’s not enough to give even a well-formulated task and silently wait.

The initial task reveals which of the possible ways to accomplish it a participant finds most intuitive. When a person gets stuck, you can give hints, but they shouldn’t sound like “click XYZ button”; instead, let them explore further. Something like the following:

— No worries. So, the search doesn’t give the expected result. What else can you do here?
— Hmm, I don’t know. Maybe filtering it somehow…
— OK, please try that.

3.4. Wording

Your communication style impacts participants’ way of thinking and the level of bias. Even a huge article won’t cover all the nitty-gritty, but here are several frequent mistakes.

Beware of the following:

  • Leading tasks: “Go to the ‘Dashboard’ section and find the frequency chart” or “Scroll to the bottom to see advanced options.”
    Such hints totally ruin the session, and you will never know how a person would act in reality.
  • Selling language: “Check our purchase in one click” or “Try the ‘Smart filtering’ feature.”
    It makes people feel as if they have to praise your product, not share what they really think.
  • Humorous tasks: “Create a profile for Johnny Cash” or, for example, “Request Christmas tree delivery to Lapland.”
    Jokes distract participants and decrease session realism.
  • IT terminology: “On the dashboard, find toggle switch” or “Go to the block with dropdowns and radio buttons.”
    It’s bad for two reasons: you may confuse people with words they don’t understand; it can be a sign that you give leading tasks and excessive UI hints.

Here is recommended further reading by Nielsen Norman Group:

Part 4. Session Facilitation

As agreed before, your first accessibility testing session will probably involve a blind person or a person with low vision who uses a screen reader to browse the web. So, let’s cover the two main aspects you have to know before starting a session.

4.1. Screen Readers

A screen reader is an assistive software that transforms visual information (text and images) into speech. When a visually impaired person navigates through a site or app using a keyboard or touchscreen, the software “reads” the text and other elements out loud.

Screen readers rely on the source code but interpret it in a special way. They skip the code responsible for visual effects (like colors or fonts) and take into account the meaningful parts, such as heading tags, text descriptions for pictures, and labels of interactive elements (whether it’s a button, input field, or checkbox). The better the code is written, the easier it will be for users to comprehend the content.

Now that you know how screen readers function, it’s time to experience them firsthand. Depending on the operating system, you’ll have a standard embedded screen reader already available on your device:

  • VoiceOver: Mac and iOS;
  • Narrator: Windows;
  • TalkBack: Android.

During one of our training courses, we learned from blind users that the screen reader on iPhone is more comfortable and flexible than the Android one. Interestingly, people don’t like standard desktop screen readers either on Mac or on Windows and usually install one of the advanced third-party readers, for instance:

  • JAWS (Job Access With Speech): Windows, paid, the most popular screen reader worldwide;
  • NVDA (Non-Visual Desktop Access): Windows, free of charge.

4.2. Navigation

Visually impaired people usually navigate apps and sites using a keyboard or touchscreen. And while sighted people scan a page and jump from one part to another, screen reader users can keep only one element in focus at a time, be it a paragraph of text or, let’s say, an input field.

Participants of your accessibility testing will likely run into an unpassable obstacle at some point in the session, and you’ll give them hints on how to find the way out and proceed with the next task. In this case, you’ll need a special non-visual language that makes sense.

Not helpful hints:

  • “Click the cross icon in the upper right corner.”
  • “Scroll to the bottom of the modal window and find the button there.”
  • “Look at the table in the center of the page.”

Helpful hints:

  • “Please, navigate to the next/previous item.”
  • “Go to the second element in the list.”
  • “Select the last heading/link/button.”

Note: UI hints above are suggested for cases when a user is completely stuck in the flow and cannot proceed, for example, when an element is not navigable via a keyboard or, let’s say, an interactive element doesn’t have a proper label or name.

Summary

Once all the testing sessions have been completed, you can analyze the collected feedback, determine priorities, and develop an action plan. This process could be the subject of a separate guideline, but let’s cover the three key principles right away:

  • Catching information
    Testing produces tons of data, so you should be prepared to capture it; otherwise, it will be lost or obscured by your imperfect human memory. Don’t rely on a recording. Make notes in the process or ask an assistant to do that. Notes are easier to analyze and make it easier to spot repeating observations across sessions. Besides, they ensure you’ll have data if the recording fails.
  • Raw data vs. insights
    Not everything you observe in testing sessions should be perceived as a call to action. Raw data shows what happened, while insights explain reasons, motivations, and ways of thinking. For example, you see that people use search instead of filters, but the insight may be that typing a search request needs less effort than going through the filter menu.
  • Criticality and impact
    Not all observations are significant. If five users struggle to proceed because the shopping cart isn’t keyboard-navigable, it’s a major barrier both for them and the business. But if one out of five participants didn’t like the button name, it isn’t critical. Take into account the following:
    • How many participants encountered a problem;
    • How much a problem impacts reaching the goal: booking a ticket, ordering pizza, or sending a document.

Once the information has been collected and processed, it is essential to share it with the team: designers, engineers, product managers, quality assurance folks, and so on. The more interactive that sharing is, the better. Let people participate in the discussion, ask questions, and see what it means for their area of responsibility.

As you gain more experience in conducting testing sessions, invite team members to watch the live stream (for instance, via Google Meet) or broadcast the session to a meeting room with observers, but make sure they stay silent and don’t intrude.

Further Reading

]]>
hello@smashingmagazine.com (Slava Shestopalov & Eugene Shykiriavyi)
<![CDATA[Exploring Universal And Cognitive-Friendly UX Design Through Pivot Tables And Grids]]> https://smashingmagazine.com/2023/06/universal-cognitive-friendly-ux-design-tables-grids/ https://smashingmagazine.com/2023/06/universal-cognitive-friendly-ux-design-tables-grids/ Tue, 06 Jun 2023 16:30:00 GMT Tables are one of the most popular ways to visualize data. Presenting data in tables is so ubiquitous — and core to the web itself — that I doubt many of you reading this have any trouble with the basics of the <table> element in HTML. But building a good complex table isn’t an easy task.

In fact, I’d even go so far as to say that tables are an integral part of our daily life.

That’s why we need to start thinking about making tables more inclusive. The web is supposed to be designed for everybody. That includes those with impairments that may prevent access to the information in the tables we make and rely on assistive technology to “read” them.

For the last several months, I’ve been working on this scientific project around inclusive design for people with cognitive disorders for my university degree. I’ve mostly focused on developing guidelines to help educational platforms adapt to such users.

I also work for a company that has developed a JavaScript library for creating pivot tables used for business analysis and data visualizations. At one point in my research, I found that tables are a popular type of data representation that can be a lifesaver and a troublemaker at the same time, not only for people with learning and cognitive problems but for everyone else as well. Remember, we are all temporarily “abled” and prone to lose abilities like eyesight and hearing over time.

Plus, a well-executed inclusive table design is a pathway to improving everyone’s productivity and overall experience, regardless of impairment.

What We Mean By Cognitive Disorders

Cognitive disorders are defined as any kind of disorder that significantly impairs an individual’s conscious intellectual activity, such as thinking, reasoning, or remembering.

ADHD is one example that prevents a person from remaining focused or paying attention. There’s also dyslexia, which makes it tough to recognize and comprehend written words. Dyscalculia is specific to working with numbers and arithmetic.

For those without these conditions, it is difficult to understand what exactly can go wrong with the perception of written information. But based on descriptions from people with the relevant conditions, simulators were created that imitate what people with dyslexia see.

Currently, you can even install a special browser extension to estimate how difficult your site will be to perceive for people with this condition. It is much more difficult to understand the condition of people with ADHD, but videos with ADHD simulations do exist, which can also help you evaluate how hard it is for such people to perceive information.

These are all things that can make it difficult for people to use tables on the web. Tables are capable of containing lots of information that requires a high level of cognitive work.

  • The first stage toward helping users with such deviations is to understand their condition and feel its details on themselves — in other words, practicing empathy.
  • The second stage is to systematize the details and identify specific usability problems to solve.

Please indulge me as we dive a bit into some psychological theory that is important to understand when designing web pages.

Cognitive Load

Cognitive load relates to the amount of information that working memory can hold at one time. Our memory has a limited capacity, and instructional methods should avoid overloading it with unnecessary activities and information that competes with what the individual needs to complete their task.

UX professionals often say complex tasks that require the use of external resources may result in an increased cognitive load. But the amount of the load can be affected by any additional information, unusual design, or wrong type of data visualization. When a person is accustomed to a particular representation of certain types of data — like preferred date format or where form input labels are positioned — even a seemingly minor change increases the processing time of our brain.

Here’s an example: If a particular student is from a region where content is presented in a right-to-left direction and the software they are provided by their university only supports a left-to-right direction, the amount of mental work it takes to comprehend the information will be greater compared to other students.

If you still want another example, Anne Gibson explains this exceptionally well in a blog post that uses ducks to illustrate the idea.

Cognitive Biases

I also want to call special attention to cognitive biases, which are systematic errors in thinking that become patterns of deviation from rationality in judgment. When people are processing and interpreting information around them, it often can influence the decisions a person makes without even noticing.

For example, the peak-end rule says that people judge an experience by its “peak” and last interactions. It’s easy to prove. Try to reflect on a game you used to play as a kid, whether it’s from an arcade, a computer console, or something you played online. What do you remember about it? Probably the level that was hardest for you and the ending. That’s the “peak” of your experience and the last, most “fresh” one, and they create your overall opinion of the game. For more examples, there is a fantastic resource that outlines 106 different types of cognitive biases and how they affect UX.

Signal-to-noise Ratio

Last but not least, I’d like to touch on the concept of a signal-to-noise ratio briefly. It is similar to the engineering term but relates to the concept that most of the information we encounter is noise that has nothing to do with a user’s task.

  • Relevant and necessary information is a signal.
  • The ratio is the proportion of relevant information to irrelevant information.

A designer’s goal is to achieve a high signal-to-noise ratio because it increases the efficiency of how information is transmitted. The information applied to this ratio can be anything: text, illustrations, cards, tables, and more.

The main idea about cognitive disabilities I want you to take away is that they make individuals very sensitive to the way the information is presented. A font that’s too small or too bright will make content unperceivable. Adding gratuitous sound or animation may result in awful distractions (or worse) instead of nice enhancements.

I’ll repeat it:

A good user experience will prevent cognitive overload for everyone. It’s just that we have to remember that many out there are more sensitive to such noises and loads.

Focusing on individuals with specific considerations only gives you a more detailed view of what you need to solve for everybody to live a simpler life.

Considering Cognitive Disorders In UX Design

Now that we have defined the main problems that can arise in a design, I can sum up our goals for effective UX:

  • Reduce the cognitive load.
  • Maximize the signal-to-noise ratio.
  • Use correct cognitive biases to boost the user experience.

“Design” is a loaded term meaning lots of different things, from colors and fonts to animations and sounds and everything in between. All of that impacts the way an individual understands the information that is presented to them. This does not mean all design elements should be excluded when designing table elements. A good table design is invisible. The design should serve content, not the other way around.

With the help of lots of academic, professional, and personal research, I’ve developed a set of recommendations that I believe will result in cognitive-friendly and easy-to-perceive table designs.

Color Palettes And Usage

We should start by talking about the color because if the colors used in a table are improperly implemented, subsequent decisions do not really matter.

Many people consider colors to have their meanings, which differ from culture to culture. That’s certainly true in a sociological sense, but as far as UX is concerned, the outcome is the same — colors carry information and emotions and are often unnecessary to mean something in a design.

Rule 1: Aim For A Minimalist Color Palette

When you see a generous use of color in a table, it isn’t there to make the table more functional but to make the design stand out. I won’t say that using fewer colors guarantees a more functional table, but more color tends to pull individuals’ attention away from the right things.

Accordingly, bright colors and accents should highlight information that has established meaning. This isn’t to say that interesting color schemes and advanced color palettes are off-limits. This means using colors wisely. They are a means to an end rather than a splash of paint for attention.

Adam Wathan & Steve Schoger offer a perfect example of color usage in a design study of customized Slack themes. Consider the two following interfaces. It may not seem like it at first, but the second UI actually has a more extensive color palette than the first.

The difference is that the second interface applies shades of the core color defined in the palette and that brighter and more vibrant shades are only used to highlight the important stuff.

You can explore this phenomenon by yourself and test your perception of the colors in a design by changing the look of your messenger. For example, Telegram has some interface customization options, and while playing with them, I noticed I read and navigate between my chats more easily in the “Night Accent” mode than in the plain “System” mode.

Of course, both designs were designed for people with different preferences and characteristics, but such a personal experiment led me to the following thought. Even though the second option uses fewer colors, the uniformity of information is a bit confusing. From this, I concluded that too few colors and too minimal a design is also a bad choice. It is necessary to find a balance between the color palette and its usage.

The best option is to pick from one to three primary colors and then play with their shades, tints, and tones. To combine the colors wisely, you can use complementary, split complementary, or analogous approaches.

That said, I suggest using a “shading” monochromatic approach for tables. It means defining a base color in a palette, then expanding it with different shades in dark and light directions. In other words:

  1. Choose a primary color.
  2. Define an evenly darker and lighter shade of that primary color.

This produces two more colors to which you can apply the previous technique to create colors that are a perfect compromise between the shades on either side. Repeat this process until you reach the number of colors you need (generally, 7–9 shades will do).
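
As a quick sketch of that process, the JavaScript below builds a monochromatic scale by mixing a base color with white and black. The base color, mixing ratios, and step count are illustrative assumptions you would tune for your own palette:

// shades.js (illustrative sketch)

// Mix two RGB colors; `weight` is the share of `colorA` (0–1).
function mix(colorA, colorB, weight) {
  return colorA.map((channel, i) =>
    Math.round(channel * weight + colorB[i] * (1 - weight))
  );
}

// Build a monochromatic scale around a base color:
// lighter steps mix toward white, darker steps mix toward black.
function buildScale(baseRgb, steps = 4) {
  const white = [255, 255, 255];
  const black = [0, 0, 0];
  const lighter = [];
  const darker = [];

  for (let i = 1; i <= steps; i++) {
    const ratio = i / (steps + 1);
    lighter.unshift(mix(white, baseRgb, ratio)); // lightest shade ends up first
    darker.push(mix(black, baseRgb, ratio)); // darkest shade ends up last
  }

  return [...lighter, baseRgb, ...darker]; // e.g., 9 shades when steps = 4
}

console.log(buildScale([41, 98, 255])); // an assumed blue base color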

Rule 2: Embrace The Power Of Whitespace

I find that it’s good to offer a fair amount of “breathing room” around elements rather than trying to crowd everything in as close as possible. For example, finding a balance of space between the table rows and columns enhances the legibility of the contents as it helps distinguish the UI from the information.

I’ll qualify this by noting that “breathing room” often depends on the type of data that’s being presented, as well as the size of the device on which it’s being viewed. As such, it sometimes makes sense to enhance a table’s functionality by allowing the user to adjust the height and width of rows and columns for the most optimal experience.

If you are worried about using too few or too many colors, apply the 60/30/10 rule. It’s a basic pattern for any distribution selection. People use this strategy when budgeting assets like content and media, and it’s applicable to design. The rule says the color usage should be distributed as follows:

  • 60% for neutral colors,
  • 30% for primary colors,
  • 10% for secondary colors (e.g., highlights, CTAs, and alerts).

Rule 3: Avoid Grays

Talking about neutral colors, in color theory, gray represents neutrality and balance. Its color meaning likely comes from being the shade between white and black, and it is often perceived as the absence of color. You cannot really overdo it; its light shades are not oppressive, so gray is just “okay.”

However, gray does carry some negative connotations, particularly when it comes to depression and loss. Its absence of color makes it dull. For this reason, designers often resort to it to de-emphasize an element or certain bits of data.

But maintaining such a philosophy of gray will only work in black-and-white designs, such as on the Apple website. Though, as I mentioned before, it actually works really well there since gray is a tone of black or a shade of white.

The problem, however, comes up when other colors are added to the color palette, which leads to a change in a color’s roles and functions. In the case of gray, putting it next to brighter colors makes the design pale and dull.

Having no color of its own, gray seems to eat away the brightness of neighboring elements. Instead of maintaining balance, gray makes the design cloudy and unclear. After all, against the background of already illuminated elements, gray makes the elements not just less significant but unnecessary for our perception.

That does not mean you should totally give up gray. But highlighting some information inherently de-emphasizes other information, negating the need for gray in the first place.

The easy way out is to replace gray with lighter shades of a palette’s base color on a table cell’s background. The effect is the same, but the overall appearance will pop more without adding more noise or cognitive load.

Rule 4: Know What’s Worthy Of Highlighting

Designers are always looking for a way to make their work stand out. I get the temptation because bold and bright colors are definitely exciting and interesting.

Blogs are a good example of this problem: their variety is wide and growing, and a lot of platforms prioritize exclusive design over inclusive design.

For example, Medium uses only black and shades of it for a color palette, which significantly facilitates even simple tasks like reading titles. Hackernoon, although looking interesting and drawing attention, requires more concentration and does not allow you to “breathe” as freely as on Medium.

In analytical software, that only leads to a table design that emphasizes a designer’s needs ahead of the user’s needs.

Don’t get me wrong — a palette that focuses on shades rather than a large array of exciting colors can still be exciting and interesting. That brings us to the question of which grid elements actually benefit from color. Here are my criteria for deciding what those are and which colors add the most benefit in a given situation.

Active cells: If the user clicks on a specific table cell or selects a group of cells, we can add focus to it to indicate the user’s place in the data. The color needs to call attention to the selection without becoming a distraction, perhaps by changing the border color to the palette’s base color and using a light shade of it for the background so as to maintain WCAG-compliant contrast with the text color.

Tip! It’s also good to highlight the row and column that a focused cell belongs to, as this information is a common thing to check when deciphering the cell’s meaning. You can highlight the entire row and column it belongs to or, even better, just the first cell of the row and column.

Error messaging: Error messages definitely benefit from color because, in general, errors contain critical feedback for the user to take corrective action.

A good example might be an injected alert that informs the user that the table’s functionality is disabled until an invalid data point is fixed. Reds, oranges, and yellows are commonly used in these situations but bear in mind that overly emphasizing an error can lead to panic and stress. (Speaking of error messaging, Vitaly Friedman has an extensive piece on designing effective error messages, including the pitfalls of relying too heavily on color.)

Outstanding data: I’m referring to any data in the table that is an outlier. For example, in a table that compares data points over time, we might want to highlight the high and low points for the sake of comparison. I suggest avoiding reds and greens, as they are commonly used to indicate success and failure. Perhaps styling the text color with a darker shade of a palette’s base color is all you need to call enough attention to these points without the user losing track of them.

The key takeaway:

Data-heavy tables are already overwhelming, and we don’t want any additional noise. Try to remove all unnecessary colors that add to a user’s cognitive load.

Tip! Remember the main goal when designing a table: reliability, not beauty. Always check your final decisions, ideally with a variety of target users. I really recommend using contrast checkers to spot mistakes quickly and correct them efficiently.

Typographical Considerations

The fonts we use to represent tabular data are another aspect of a table’s look and feel that we need to address when it comes to implementing an inclusive design. We want the data to be as legible and scannable as possible, and I’ve found that the best advice boils down to the typography of the content — especially for numerics — as well as how it is aligned.

Rule 1: The Best Font Is A “Simple” Font

The trick with fonts is the same as with colors: simplicity. The most effective font is one that takes less brainpower to interpret rather than one that tries to stand out.

No, you don’t need to ditch your Google Fonts or any other font library you already use, but choose a font from it that meets these recommendations:

  1. Sans-serif fonts (e.g., Helvetica, Arial, and Verdana) are more effective because they tend to take up less space in a dense area — perfect for promoting more “breathing room” in a crowded table of data.
  2. A large x-height is always easier to read. The x-height is the height of the body of a lowercase letter, excluding ascenders and descenders. In other words, it is the height of the lowercase “x” in the font.
  3. Monospace fonts make it easier to compare cells because the width of each character is consistent, resulting in evenly-spaced lines and cells.
  4. Regular font weights are preferable to bolder weights because boldfacing text is another form of highlighting or emphasizing content, which can lead to confusion.
  5. A stable, open counter. The counter is the enclosed or partially enclosed space in letters like “o” and “b.” Fonts with distorted counters render poorly at small sizes and are hard to read.

Fonts that fulfill these criteria are more legible and versatile than others and should help whittle down the number of candidates when selecting a font for your table design.

Rule 2: Number Formatting Matters

When choosing a font, designers often focus on legible letters and forget about numbers. Needless to say, numbers are often what we’re displaying in tables. They deserve first-class consideration when it comes to choosing an effective font for a good table experience.

As I mentioned earlier, monospace fonts are an effective option when numbers are a table’s primary content. Each character takes up the same width, producing consistent spacing that helps align values between rows and columns. In my experience, finding a proportional font that doesn’t produce a narrow “1” is difficult.

If you compare the two fonts in the figure above, it’s pretty clear that data is easier to read and compare when the content is aligned and the characters use the same amount of space. There’s less distance for the eye to travel between data points and less of a difference in appearance to consider whether one value is greater than the other.

If you are dealing with fractions, you will want to consider a font that supports that format or go with a variable font that supports font-variant-numeric features for more control over the spacing.
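
As a small sketch of what that might look like in code, a React-style table cell could request evenly spaced digits and fraction glyphs through an inline style. The component name is made up, and the keywords only take effect if the chosen font actually exposes those OpenType features:

import React from "react";

// Sketch: ask the font for tabular (evenly spaced) digits and fraction glyphs.
function NumericCell({ children }) {
    return (
        <td style={{ fontVariantNumeric: "tabular-nums diagonal-fractions" }}>
            {children}
        </td>
    );
}

export default NumericCell;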

Rule 3: There Are Only Two Table Alignments: Left And Right

Technically, there are four alignments: left, right, center, and justify. We know that because the CSS text-align property supports all four of them.

My personal advice is to avoid using center alignment, except in less-common situations where unambiguous data is presented with consistently-sized icons. But that’s a significant and rare exception to the rule, and it is best to use caution and good judgment if you have to go there.

Justified content alters the spacing between words and characters to achieve a consistent line length, but that’s another one to avoid, as the goal is less about line lengths than it is about maintaining a consistent amount of space between characters for a quick scan. That is what monospaced fonts are effective for.

Data should instead be aligned to the left or the right, and which one to choose depends on the user’s language direction.

Then again, at school, we’re taught to compare numbers in a right-to-left direction by looking first at single units, then tens, followed by hundreds, then thousands, and so forth. Accordingly, the right alignment could be a better choice that’s universally easier to read regardless of a person’s language preference. You may notice that spreadsheet apps like Excel, Sheets, and Notion align numeric values to the right by default.

There are exceptions to that rule, of course, because not all numbers are measurements. There are qualitative numbers that probably make more sense with left alignment since that is often the context in which they are used. They aren’t used for comparison and are perceived as a piece of text information written in numbers. Examples include:

  • Dates (e.g., 12/28/2050),
  • Zip/Postal code (e.g., 90815),
  • Phone number (e.g., 555-544-4349).

Table headings should be aligned to the same edge as the data presented in the column. I know there could be disagreement here, as the default UA styling for modern browsers centers table headings.

The screenshots above are examples of bad and good headers. When looking at the first screenshot, your initial focus is likely drawn to the column headers, which is good! That allows you to understand what the table is about quickly. But after that initial focus, the bold text is distracting and tricks your brain into thinking the header is the most important content.

The header in the second screenshot also uses bold text. However, notice how changing the color from black to white emphasizes the headers at the same time. That negates the impact bolding has, preventing potential cognitive load.

At this point, I should include a reminder to avoid gray when de-emphasizing table elements. For example, notice the numbers in the far left column and very top row. They get lost against the background color of the cells and even further obscured by the intense background color of surrounding cells. There’s no need to de-emphasize what is already de-emphasized.

I also suggest using short labels to prevent them from competing with the data. For example, instead of a heading that reads “Grand Total of Annual Revenue,” try something like “Total Revenue” or “Grand Total” instead.

Table Layout Considerations

There once was a time when tables were used to create webpage layouts because, again, it was a simple and understandable way to present the information in the absence of standardized CSS layout features. That’s not the case today, thankfully, but that period taught us a lot about best practices when working with table design that we can use today.

Rule 1: Fewer Borders = More White Space

Borders are commonly used to distinguish one element from another. In tables, specifically, they might be used to form outlines around rows and columns. That distinction is great but faces the same challenge that we’ve covered with using color: too many of them can steal focus from the data, making the design busy and cluttered. With the proper design and text alignment, however, borders can become unnecessary.

Borders help us navigate the table and delimit individual records. At the same time, having many of them in a grid becomes a problem in large tables with lots of rows and columns. To prevent the cells from feeling too densely packed, try adding more space between them with padding. As I have mentioned before, negative space is not an enemy but a design saver.

That said, the law of diminishing returns applies to how much space there should be, particularly when considering a table’s width. For example, a table might not need to flex to the full width of its parent container by default. It depends on the content, of course. Avoiding large spacing between columns spares a reader’s eyes from traveling long distances when scanning data and helps prevent mistakes.

I know that many front-enders struggle with column widths. Should they be even? Should they only be as wide as the content that’s in them? It’s a juggling act that, in my mind, is not worth the effort. Some cells will always be either too wide or too narrow when table cells contain data points that result in varying line lengths. Embrace that unevenness, allowing columns to take up a reasonable amount of space they need to present the data and scale down to as little as they need without being so narrow that words and numbers start breaking lines.

Lines should be kept to a minimum. Add them only if adjusting the alignment, joining cells, or increasing the spacing is not enough to indicate the reading direction — and keep them as light as possible.

Allow multi-line wrapping when you really need it, such as when working with longer data points that need just enough room around them to indicate the alignment direction. But if you catch yourself reaching for multi-line wrapping in a grid, first analyze whether there is a more practical way to visualize the data.

Rule 2: Stylish Rows, Stylish Columns

When deciding how to style a table’s rows, it’s important to understand the purpose of the table you are developing. Reducing visual noise helps present a clear picture of the data in smaller datasets, but it is not enough on its own for large ones.

It’s easy for a user to lose their place when scrolling through a table that contains hundreds or thousands of rows. This is where borders can help a great deal, as well as zebra striping, for a visual cue that helps anchor a user’s eyes enough to hold focus on a spot while scanning.

Speaking of zebra striping, it’s often used as a stylistic treatment rather than a functional enhancement. Being mindful of which colors are used for the striping and how they interact with other colors and shades used for highlighting information will go a long way toward maintaining a good user experience that avoids overwhelming color combinations. I often use a slightly darker shade of the table’s default background color on alternating rows (or columns) when establishing stripes. If that’s white, then I will go with the lightest shade of my palette’s base color. The same goes for any borders — they should be present, but barely visible.

Typically, row heights gravitate between 40px and 56px, with a minimum padding of 16px on both the left and right edges of each column.
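
Pulling those numbers together with the striping advice above, a hypothetical React row component might look something like the sketch below. The exact pixel values and colors are placeholders rather than prescriptions:

import React from "react";

// Sketch: zebra striping with a slightly darker shade of the base background,
// a row height in the 40-56px range, and 16px of horizontal padding per cell.
function DataRow({ cells, index }) {
    const base = "#ffffff";
    const stripe = "#eef2f6"; // assumed lightest shade of the palette's base color

    return (
        <tr style={{ backgroundColor: index % 2 === 1 ? stripe : base, height: "48px" }}>
            {cells.map((cell, column) => (
                <td key={column} style={{ padding: "0 16px" }}>
                    {cell}
                </td>
            ))}
        </tr>
    );
}

export default DataRow;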

Feature Enhancements

Tables are often thought of as static containers for holding data, but we’ve all interacted with tables that do lots of other things, like filtering and reordering.

Whatever features are added to a table, it’s important to let users customize the table themselves based on their preferences. Then the user experience you create can become even better by conforming to the user’s comfort level. As with everything else, there is a line. Smaller datasets may not need the same enhancements for filtering data that large datasets do, for example, because they may wind up causing more confusion than convenience and raise the threshold for understanding the data.

In addition to the ability to customize a table’s elements, such as colors, fonts, conditional formatting, value formatting, and cell sizing, there are a few questions you can ask to help determine the enhancements a table might need for a better experience.

Could A User Lose Context When Scrolling?

We’ve already discussed how a table with hundreds of rows or columns can lead to a lot of user scrolling and cognitive errors. Striping is one way to help users remain focused on a particular spot, but what if there’s so much scrolling that the table’s headers are no longer available?

If that’s a possibility, and the headers are important for establishing the context of the presented data, then you might consider sticky positioning on the headers so they are always available for reference. Chris Coyier has a nice demo that implements sticky headers and a sticky first column.
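
The linked demo is the better reference, but as a bare-bones sketch of the idea, sticky positioning can be applied to the header cells roughly like this (the component and its props are invented for illustration):

import React from "react";

// Sketch: keep the header row visible while the table body scrolls underneath.
// A sticky first column works the same way, with `left: 0` on each row's first cell.
function StickyHeader({ columns }) {
    return (
        <thead>
            <tr>
                {columns.map((label) => (
                    <th
                        key={label}
                        style={{ position: "sticky", top: 0, backgroundColor: "#ffffff" }}
                    >
                        {label}
                    </th>
                ))}
            </tr>
        </thead>
    );
}

export default StickyHeader;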

Who Can Have Problems Using My Design? (Accessibility Support)

Of all the points, this is the most difficult to implement, but at the same time, in our context, it is the most important. People with disabilities and impairments experience a much stronger impact on their work process due to their condition. Therefore, supporting an additional — and optional — accessibility mode is necessary. Each element must be adapted for screen readers, be navigable via keyboard, and use the most semantic markup possible. This will help people who use assistive technology without a loss in performance.

Conclusion

Thanks for letting me share my best practices for presenting tabular data on the web. It’s amazing how something as seemingly simple as a table element can quickly grow in scope when we start considering user needs and enhancements to include as many of those needs as possible.

We discussed a great number of things that get in the way of an inclusive table design, including our own cognitive biases and design choices. At the same time, we covered strategies for tackling those obstacles from a wide range of considerations, from design choices all the way to determining possible features for enhancing a user’s experience when interacting with the table and the data it contains.

There can be a lot of headwork that goes into a table implementation, but not everything in this article has to be considered for every situation. A lot of the advice I’ve shared — like so many other things on the web — simply depends on the specific case. That’s why we spent a good amount of time defining the goals for an effective table experience:

  • Reduce the cognitive load.
  • Maximize the signal-to-noise ratio.
  • Use cognitive biases correctly to boost the user experience.

But if you only take one thing away from this, I’d say it is this: in data analytics, data > everything else. Keeping that idea in mind throughout the development process prevents spoiling your design with frivolous flourishes and features that work against our goals.

Further Reading on Smashing Magazine

]]>
hello@smashingmagazine.com (Yuliia Nikitina)
<![CDATA[Primitive Objects In JavaScript: When To Use Them (Part 2)]]> https://smashingmagazine.com/2023/06/primitive-objects-javascript-part-2/ https://smashingmagazine.com/2023/06/primitive-objects-javascript-part-2/ Mon, 05 Jun 2023 10:00:00 GMT Writing programs in JavaScript is approachable at the beginning. The language is forgiving, and you get accustomed to its affordances. With time and experience working on complex projects, you start to appreciate things like control and precision in the development flow.

Another thing you might start to appreciate is predictability, but that’s way less of a guarantee in JavaScript. While primitive values are predictable enough, objects aren’t. When you get an object as an input, you need to check for everything:

  • Is it an object?
  • Does it have that property you’re looking for?
  • When a property holds undefined, is that its value, or is the property itself missing?

It’s understandable if this level of uncertainty leaves you slightly paranoid in the sense that you start to question all of your choices. Subsequently, your code becomes defensive. You think more about whether you’ve handled all the faulty cases or not (chances are you have not). And in the end, your program is mostly a collection of checks rather than bringing real value to the project.

By making objects primitive, many of the potential failure points are moved to a single place — the one where objects are initialized. If you can make sure that your objects are initialized with a certain set of properties and those properties hold certain values, you don’t have to check for things like the existence of properties anywhere else in your program. You could guarantee that undefined is a value if you need to.

Let’s look at one of the ways we can make primitive objects. It’s not the only way or even the most interesting one. Rather, its purpose is to demonstrate that working with read-only objects doesn’t have to be cumbersome or difficult.

Note: I also recommend checking out the first part of the series, where I covered some aspects of JavaScript that help bring objects closer to primitive values, which in turn allows us to benefit from common language features that aren’t usually associated with objects, like comparisons and arithmetic operators.

Making Primitive Objects In Bulk

The simplest, most primitive (pun intended) way to create a primitive object is the following:

const my_object = Object.freeze({});

This single line results in an object that can represent anything. For instance, you could implement a tabbed interface using an empty object for each tab.

import React, { useState } from "react";

const summary_tab = Object.freeze({});
const details_tab = Object.freeze({});

function TabbedContainer({ summary_children, details_children }) {
    const [ active, setActive ] = useState(summary_tab);

    return (
        <div className="tabbed-container">
            <div className="tabs">
                <label
                    className={active === summary_tab ? "active" : ""}
                    onClick={() => {
                        setActive(summary_tab);
                    }}
                >
                    Summary
                </label>
                <label
                    className={active === details_tab ? "active": ""}
                    onClick={() => {
                        setActive(details_tab);
                    }}
                >
                    Details
                </label>
            </div>
            <div className="tabbed-content">
                {active === summary_tab && summary_children}
                {active === details_tab && details_children}
            </div>
        </div>
    );
}

export default TabbedContainer;

If you’re like me, that tabs element just screams to be reworked. Looking closely, you’ll notice that the tab elements are similar and need two things: an object reference and a label string. Let’s include the label property in the tab objects and move the objects themselves into an array. And since we’re not planning to change the tabs in any way, let’s also make that array read-only while we’re at it.

const tab_kinds = Object.freeze([
    Object.freeze({ label: "Summary" }),
    Object.freeze({ label: "Details" })
]);

That does what we need, but it is verbose. The approach we’ll look at now is often used to hide repeating operations to reduce the code to just the data. That way, it is more apparent when the data is incorrect. What we also want is to freeze objects (including the array) by default rather than it being something we have to remember to type out. For the same reason, the fact that we have to specify a property name every time leaves room for errors, like typos.

To easily and consistently initialize arrays of primitive objects, I use a populate function. I don’t actually have a single function that does the job. I usually create one every time based on what I need at the moment. In the particular case of this article, this is one of the simpler ones. Here’s how we’ll do it:

function populate(...names) {
    return function(...elements) {
        return Object.freeze(
            elements.map(function (values) {
                return Object.freeze(names.reduce(
                    function (result, name, index) {
                        result[name] = values[index];
                        return result;
                    },
                    Object.create(null)
                ));
            })
        );
    };
}

If that one feels dense, here’s one that’s more readable:

function populate(...names) {
    return function(...elements) {
        const objects = [];

        elements.forEach(function (values) {
            const object = Object.create(null);

            names.forEach(function (name, index) {
                object[name] = values[index];
            });

            objects.push(Object.freeze(object));
        });

        return Object.freeze(objects);
    };
}

With that kind of function at hand, we can create the same array of tabbed objects like so:

const tab_kinds = populate(
    "label"
)(
    [ "Summary" ],
    [ "Details" ]
);

Each array in the second call represents the values of the resulting objects. Now let’s say we want to add more properties. We’d need to add a new name to the first call and a value to each array in the second call.

const tab_kinds = populate(
    "label",
    "color",
    "icon"
)(                                          
    [ "Summary", colors.midnight_pink, "💡" ],
    [ "Details", colors.navi_white, "🔬" ]
);

Given some whitespace, you could make it look like a table. That way, it’s much easier to spot an error in huge definitions.

You may have noticed that populate returns another function. There are a couple of reasons to keep it as two function calls. First, I like how two contiguous calls create an empty line that separates keys from values. Second, I like being able to create these sorts of generators for similar objects. For example, say we need to create those label objects for different components and want to store them in different arrays.
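
For example, the populate_label module imported in the next snippet could be as small as this (the file layout here is my assumption based on the import that follows):

// populate_label.js: a reusable generator for objects that only carry a label.
import populate from "./populate";

export default populate("label");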

Let’s get back to the example and see what we gained with the populate function:

import React, { useState } from "react";
import populate_label from "./populate_label";

const tabs = populate_label(
    [ "Summary" ],
    [ "Details" ]
);

const [ summary_tab, details_tab ] = tabs;

function TabbedContainer({ summary_children, details_children }) {
    const [ active, setActive ] = useState(summary_tab);

    return (
        <div className="tabbed-container">
            <div className="tabs">
                {tabs.map((tab) => (
                    <label
                        key={tab.label}
                        className={tab === active ? "active" : ""}
                        onClick={() => {
                            setActive(tab);
                        }}
                    >
                        {tab.label}
                    </label>
                ))}
            </div>
            <div className="tabbed-content">
                {summary_tab === active && summary_children}
                {details_tab === active && details_children}
            </div>
        </div>
    );
}

export default TabbedContainer;

Using primitive objects makes writing UI logic straightforward.

Using functions like populate is less cumbersome for creating these objects and seeing what the data looks like.

Check That Radio

One of the alternatives to the approach above that I’ve encountered is to retain the active state — whether the tab is selected or not — as a property of the tab objects:

const tabs = [
    {
        label: "Summary",
        selected: true
    },
    {
        label: "Details",
        selected: false
    },
];

This way, we replace tab === active with tab.selected. That might seem like an improvement, but look at how we would have to change the selected tab:

function select_tab(tab, tabs) {
    tabs.forEach((tab) => tab.selected = false);
    tab.selected = true;
}

Because this is logic for a radio button, only a single element can be selected at a time. So, before setting an element to be selected, we first need to make sure that all the other elements are unselected. Yes, it’s silly to do it like that for an array with only two elements, but the real world is full of longer lists than this example.

With a primitive object, we need a single variable that represents the selected state. I suggest setting the variable to one of the elements to make it the currently-selected element, or setting it to undefined if your implementation allows for no selection.
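
A quick sketch of what that single-selection logic looks like with the primitive tab objects from earlier (the variable names mirror the example above):

let selected = summary_tab; // or undefined, if "nothing selected" is allowed

// Select: point the variable at a different primitive object.
selected = details_tab;

// Check whether a given tab is the selected one.
const is_selected = (tab) => tab === selected;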

With multi-choice elements like checkboxes, the approach is almost the same. We replace the selection variable with an array. Each time an element is selected, we push it to that array, or in the case of Redux, we create a new array with that element present. To unselect it, we either splice it or filter out the element.

let selected = []; // Nothing is selected.

// Select.
selected = selected.concat([ to_be_selected ]);

// Unselect.
selected = selected.filter((element) => element !== to_be_unselected);

// Check if an element is selected.
selected.includes(element);

Again, this is straightforward and concise. You don’t need to remember if the property is called selected or active; you use the object itself to determine that. When your program becomes more complex, those lines would be the least likely to be refactored.

In the end, it is not a list element’s job to decide whether it is selected or not. It shouldn’t hold this information in its state. For example, what if it’s selected in one list and not selected in another at the same time?

Alternative To Strings

The last thing I’d like to touch on is an example of string usage I often encounter.

Text is a good trade-off for interoperability. You define something as a string and instantly get a representation of a context. It’s like getting an instant energy rush from eating sugar. As with sugar, the best case is that you get nothing in the long term. That said, it is unfulfilling, and you inevitably get hungry again.

The problem with strings is that they are for humans. It’s natural for us to distinguish things by giving them a name. But a program doesn’t understand the meaning of those names.

Most code editors and integrated development environments (IDEs) don’t understand strings. In other words, your tools won’t tell you whether or not the string is correct.

Your program only knows whether two strings are equal or not. And even then, telling whether strings are equal or unequal doesn’t necessarily provide an insight into whether or not any of those strings contain a typo.

Objects provide more ways to see that something is wrong before you run your program. Because you cannot have literals for primitive objects, you would have to get a reference from somewhere. For example, if it’s a variable and you make a typo, you get a reference error. There are tools that could catch that sort of thing before the file is saved.

If you were to get your objects from an array or another object, then JavaScript won’t give you an error when the property or an index does not exist. What you get is undefined, and that’s something you could check for. You have a single thing to check. With strings, you have surprises you might want to avoid, like when they’re empty.

Another use of strings I try to avoid is checking if we get the object we want. Usually, it’s done by storing a string in a property named id. Like, let’s say we have a variable. In order to check if it holds the object we want, we might need to check if a string in the id property matches the one we expect it to. To do that, we would first check if the variable holds an object. If the variable does hold an object, but the object lacks the id property, then we get undefined, and we’re fine. However, if we have one of the bottom values (null or undefined) in that variable, then we are unable to ask for the property directly. Instead, we have to do something to either make sure that only objects arrive at this point or to do both checks in place.

const myID = "Oh, it's so unique";

function magnification(value) {
    if (value && typeof value === "object" && value.id === myID) {
        // do magic
    }
}

Here’s how we can do the same with primitive objects:

import data from "./the file where data is stored";

function magnification(value) {
    if (value === data.myObject) {
        // do magic
    }
}

The benefit of strings is that they are a single thing that could be used for internal identification and are immediately recognizable in logs. They sure are easy to use right out of the box, but they are not your friend as the complexity of a project increases.

I find there’s little benefit in relying on strings for anything other than output to the user. The interoperability that primitive objects lack compared to strings can be added gradually and without the need to change how you handle basic operations, like comparisons.

Wrapping Up

Working directly with objects frees us from the pitfalls that come with other methods. Our code becomes simpler because we write only what the program needs to do. By organizing our code with primitive objects, we are less affected by the dynamic nature of JavaScript and some of its baggage. Primitive objects give us more guarantees and a greater degree of predictability.

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Kirill Myshkin)
<![CDATA[How To Enable Collaboration In A Multiparty Setting]]> https://smashingmagazine.com/2023/06/enable-collaboration-multiparty-setting/ https://smashingmagazine.com/2023/06/enable-collaboration-multiparty-setting/ Fri, 02 Jun 2023 16:00:00 GMT As Artificial Intelligence becomes more widespread and pervasive, the transition to a data-driven age poses a conundrum for many: Will AI replace me at my job? Can it become smarter than humans? Who is making the important decisions, and who is accountable?

AI is becoming more and more complex, and tools like ChatGPT, Siri, and Alexa are already a part of everyday life to an extent where even experts struggle to grasp and explain the functionality in a tangible way. How can we expect the average human to trust such a system? Trust matters not only in decision-making processes but also in order for societies to be successful. Ask yourself this question: Who would you trust with a big personal or financial decision?

Today’s banking counseling sessions are associated with various challenges: Besides preparation and follow-up, the consultant is also busy with many different tasks during the conversation. The cognitive load is high, and tasks are either done on paper or with a personal computer, which is why the consultant can’t engage sufficiently with the client. Clients are mostly novices who are not familiar with the subject matter. The consequent state of passivity or uncertainty often stems from a phenomenon known as information asymmetry, which occurs when the consultant has more or better information than the client.

In this article, we propose a new approach based on co-creation and collaboration in advisory services. An approach that enables the consultant to simply focus on the customers’ needs by leveraging the assistance of a digital agent. We explore the opportunities and limitations of integrating a digital agent into an advisory meeting in order to allow all parties to engage actively in the conversation.

Rethinking Human-Machine Environments In Advisory Services

Starting from the counseling session described above, we tackled the issues of information asymmetry, trust building, and cognitive overload within the context of a research project.

Understanding the linguistic landscape of Switzerland with its various Swiss-German dialects, the digital agent “Mo” supports consultants and clients in banking consultations by taking over time-consuming tasks, providing support during the consultation, and extracting information. By means of an interactive table, the consultation becomes a multimodal environment in which the agent acts as a third interaction partner.

The setup enables a collaborative exchange between interlocutors, as information is equally visible and accessible to all parties (shared information). Content can be placed anywhere on the table through natural, haptic interactions. Whether the agent records information in the background, actively participates in the composition of a stock portfolio, or warns against risky transactions, Mo “sits” at the table throughout the entire consultation.

To promote active participation from all parties during the counseling session, we have pinpointed crucial elements that facilitate collaboration in a multi-party setting:

  • Shared Device
    All information is made equally visible and interactable for all parties.
  • Collaborative Digital Agent
    By using human modes of communication, social cues, and the support of local dialects, the agent becomes accessible and accepted.
  • Comprehensible User Interfaces
    Multimodal communication helps to convey information in social interactions. Through the use of different output channels, we can convey information in different complexities.
  • Speech Patterns for Voice User Interfaces
    Direct orders to an AI appear unnatural in a multi-party setting. The use of different speech and turn-taking patterns allows the agent to integrate naturally into the conversation.

In the next sections, we will take a closer look at how collaborative experiences can be designed based on those key factors.

“Hello Mo”: Designing Collaborative Voice User Interfaces

Imagine yourself sitting at the table with your bank advisor in a classic banking advisory meeting. The consultant tries to explain to you a ton of banking-specific stuff, all while using a computer or tablet to display stock price developments or to take notes on your desired transactions. In this setup, it is hard for consultants to keep up a decent conversation while retrieving and entering data into a system. This is where voice-based interactions save the day.

When using voice as an input method during a conversation, users do not have to change context (e.g., take out a tablet, or operate a screen with a mouse or keyboard) in order to enter or retrieve data. This helps the consultant to perform a task more efficiently while being able to foster a personal relationship with the client. However, the true strength of voice interactions lies in their ability to handle complex information entry. For example, purchasing stocks requires an input of multiple parameters, such as the title or the number of shares. Whereas in a GUI all of these input variables have to be tediously entered by hand, VUIs offer the option of entering everything in one sentence.

Nonetheless, VUIs are still uncharted territory for many users and are accordingly viewed with a huge amount of skepticism. Thus, it is important to consider how we can create voice interactions that are accessible and intuitive. To achieve this goal, it is essential to grasp the fundamental principles of voice interaction, such as the following speech patterns.

Command and Control

This pattern is widely used by popular voice assistants such as Siri, Alexa, and Google Assistant. As the name implies, the assistants are addressed with a direct command — often preceded by a signal “wake word.” For example,

“Hey, Google” → Command: “Turn on the Bedroom Light”

Conversational

The Conversational Pattern, in which the agent understands intents directly from the context of the conversation, is less common in production systems. Nevertheless, we can find examples in science fiction, such as HAL (2001: A Space Odyssey) or J.A.R.V.I.S. (Iron Man 3). The agent can directly extract intent from natural speech without the need for a direct command to be uttered. In addition, the agent may speak up on its own initiative.

As the Command and Control approach is widely used in voice applications, users are more familiar with this pattern. However, utilizing the Conversational Pattern can be advantageous, as it enables users to interact with the agent effortlessly, eliminating the requirement for them to be familiar with predefined commands or keywords, which they may formulate incorrectly.

In our case of a multi-party setting, users perceived the Conversational Pattern in the context of transaction detection as surprising and unpredictable. For the most part, this is due to the limitations of the intent recognition system. For example, during portfolio customization, stock titles are discussed actively. Not every utterance of a stock title corresponds to a transaction, as the consultant and client are debating possibilities before execution. It is fairly difficult or nearly impossible for the agent to distinguish between option and intent. In this case, command structures offer more reliability and control at the expense of the naturalness of the conversation since the Command and Control Pattern results in unnatural interruption and pauses in the conversation flow. To get the best of both worlds (natural interactions and predictable behavior), we introduce a completely new speech pattern:

Conversational Confirmation

Typically, transaction intents are formulated according to the following structure:

Interlocutor 1: We then buy 20 shares of Smashing Media Stocks (intent).
Interlocutor 2: Yes, let’s do that (confirmation).
Interlocutor 1: All right then, let’s buy Smashing Media Stocks (reconfirmation).

In the current implementation of the Conversational Pattern, the transaction would be executed after the first utterance, which was often perceived to be irritating. In the Conversational Confirmation pattern, the system waits for both parties to confirm and executes the transaction only after the third utterance. By adhering to the natural rules of human conversation, this approach meets the users’ expectations.
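
To make the turn-taking explicit, here is a loose sketch of how such a confirmation gate might be modeled. This is not how “Mo” is implemented; the function and the shape of the utterance object are invented for illustration, but it shows the idea of executing only after intent, confirmation, and reconfirmation have all been heard:

// Sketch: a transaction is executed only after three matching utterances.
function createConfirmationGate(executeTransaction) {
    let pendingIntent = null;
    let confirmed = false;

    return function onUtterance({ intent, isConfirmation }) {
        if (intent && pendingIntent === null) {
            pendingIntent = intent;            // 1st utterance: transaction intent
        } else if (pendingIntent !== null && isConfirmation && !confirmed) {
            confirmed = true;                  // 2nd utterance: confirmation
        } else if (confirmed && intent === pendingIntent) {
            executeTransaction(pendingIntent); // 3rd utterance: reconfirmation
            pendingIntent = null;
            confirmed = false;
        }
    };
}

const onUtterance = createConfirmationGate((intent) => console.log("Executing:", intent));
onUtterance({ intent: "buy 20 Smashing Media shares" });
onUtterance({ isConfirmation: true });
onUtterance({ intent: "buy 20 Smashing Media shares" }); // only now is the order placed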

Conclusion

  1. Regarding the users’ mental model of digital agents, the Command and Control Pattern provides users with more control and security.
  2. The Command and Control Pattern is suitable as a fallback in case the agent does not understand an intent.
  3. The Conversational Pattern is suitable when information has to be obtained passively from the conversation (e.g., logging).
  4. For collaborative counseling sessions, the Conversational Confirmation Pattern could greatly enhance the counseling experience and lead to a more natural conversation in a multi-party setting.

Sharing Is Caring: The Concept Of The Shared Device

In a world where personal devices such as PCs, mobile phones, and tablets are prevalent, we have grown accustomed to interacting with technical devices in “single-player mode.” The use of private devices undoubtedly has its advantages in certain situations (as in not having to share the million cute cats we google during work with our boss). But when it comes to collaborative tasks — sharing is caring.

Put yourself back into the previously described scenario. At some point, the consultant is trying to show stock price trends on the computer or tablet screen. However, regardless of how the screen is positioned, at least one of the participants has a limited view. Because the computer is a personal device of the consultant, the client is excluded from actively engaging with it — leading to the problem of unequal distribution of information.

By integrating an interactive tabletop projection into the consultation meeting, we aimed to overcome the limitations of “personal devices,” improving trust, transparency, and decision empowerment. It is essential to understand that human communication relies on various channels, i.e., modalities (voice, sight, body language, and so on), which help individuals to express and comprehend complex information more effectively. The interactive table as an output system facilitates this aspect of human communication in the digital-physical realm. In a shared device, we use the physical space as an interaction modality. The content can be intuitively moved and placed in the interaction space using haptic elements and is no longer bound to a screen. These haptic tokens are equally accessible to all users, encouraging especially novice users to interact and collaborate on a regular tabletop surface.

The interactive tabletop projection also makes information more comprehensible for users. For example, during the consultation, the agent updates the portfolio visualization in real time. The impact of a transaction on the overall portfolio can be directly grasped and pulled closer by the client and advisor and used as a basis for discussion.

The result is a transparent approach to information, which increases the understanding of bank-specific and system-specific processes, consequently improving trust in the advisory service and leading to more interaction between customer and advisor.

Apart from the spatial modality, the proposed mixed reality system provides other input and output channels, each with its unique characteristics and strengths. If you are interested in this topic, this article on Smashing provides a great comparison of VUIs and GUIs and when to use which.

Conclusion

The proposed mixed reality system fosters collaboration since:

  1. Information is equally accessible to all parties (reducing information asymmetry, fostering shared understanding, and building trust).
  2. One user interface can be operated collectively by several interaction partners (engagement).
  3. Multisensory human communication can be transferred to the digital space (ease of use).
  4. Information can be better comprehended due to multimodal output (ease of use).

Next Stop: Collaborative AI (Or How To Make A Robot Likable)

For consultation services, we need an intelligent agent to reduce the consultant’s cognitive load. Can we design an agent that is trustworthy, even likable, and accepted as a third collaboration partner?

Empathy For Machines

Whether it’s machines or humans, empathy is crucial for interactions, and social cues are the salt and pepper to achieve this. Social cues are verbal or nonverbal signals that guide conversations and other social interactions by influencing our perceptions of and reactions toward others. Examples of social cues include eye contact, facial expressions, tone of voice, and body language. These impressions are important communicative tools because they provide social and contextual information and facilitate social understanding. In order for the agent to appear approachable, likable, and trustworthy, we have attempted to incorporate social elements while designing the agent. As described above, social cues in human communication are transported through different channels. Transferring to the digital context once again requires the use of multimodality.

The visual manifestation of the agent makes it possible to express character-defining elements, such as facial expressions and body language, in digital space, analogous to the human body, and to highlight important context information, such as the current system status.

In terms of voice interactions, social cues play an important role in system feedback. For example, a common human communication practice is to confirm an action by stating a short “mhm” or “ok.” Applying this practice to the agent’s behavior, we tried to create a more transparent and natural feeling VUI.

When designing voice interactions, it’s important to note that how the agent is perceived is heavily influenced by the speech pattern utilized. Once the agent is addressed with a direct command, it is assigned a subordinate role (servant) and is no longer perceived as an equal interaction partner. When it recognizes the intent of the conversation independently, the agent is perceived as more intelligent and trustworthy.

Mo: Ambassador Of System Transparency

Despite great progress in Swiss German speech recognition, transaction misrecognition still occurs. While dealing with an imperfect system, we have tried to take advantage of it by leveraging the agent to make system-specific processes more understandable and transparent. We implemented the well-known usability heuristic: the more comprehensible system-specific processes are, the better the understanding of a system and the more likely users feel empowered to interact with it (and the more they trust and accept the agent).

A core activity of every banking consultation meeting is the portfolio elaboration phase, where the consultant, client, and agent try to find the best investment solutions. In the process of adjusting the portfolio, transactions get added and removed with the helping hand of the agent. If “Mo” is not fully confident of a transaction, “Mo” checks in and asks whether the recognized transaction has been understood correctly.

The agent’s voice output follows the usual conventions of a conversation: as soon as an interlocutor is unsure regarding the content of a conversation, he or she speaks up, politely apologizes, and asks if the understood content corresponds to the intent of the conversation. In case the transaction was misunderstood, the system offers the possibility to correct the error by adjusting the transaction using touch and a scrolling token (Microsoft Dial). We deliberately chose these alternative input methods over repeating the intent with voice input to avoid repetitive errors and minimize frustration. By giving the user the opportunity to take action and be in control of an actual error situation, the overall acceptance of the system and the agent is strengthened, creating fertile ground for collaboration.

Conclusion:

  • Social cues provide opportunities to design the agent to be more approachable, likable, and trustworthy. They are an important tool for transporting context information and enabling system feedback.
  • Making the agent part of explaining system processes helps improve the overall acceptance and trust in both the agent and the system (Explainable AI).

Towards The Future

Irrespective of the specific consulting field, whether it’s legal, healthcare, insurance, or banking, two key factors significantly impact the quality of counseling. The first factor involves the advisor’s ability to devote undivided attention to the client, ensuring their needs are fully addressed. The second factor pertains to structuring the counseling session in a manner that facilitates equal access to information for all participants, presenting it in a way that even inexperienced individuals can understand. By enhancing customer experience through promoting self-determined and well-informed decision-making, businesses can boost customer retention and foster loyalty.

Introducing a shared device in counseling sessions offers the potential to address the problem of information asymmetry and promote collaboration and a shared understanding among participants. Does this mean that every consultation session depends on the proposed mixed reality setup? For physical consultations, the interactive tabletop projection (or an equivalent interaction space where all participants have equal access to information) does enable a democratic approach to information — personal devices just won’t do the job.

In the context of digital (remote) consultations, collaboration and transparency remain crucial, but the interaction space undergoes significant changes, thereby altering the requirements. Regardless of the specific interaction space, careful consideration must be given to conveying information in an understandable manner. Utilizing different modalities can enhance the comprehensibility of user interfaces, even in traditional mobile or desktop UIs.

To alleviate the cognitive load on consultants, we require a system capable of managing time-consuming tasks in the background. However, it is important to acknowledge that digital agents and voice interactions remain unfamiliar territory for many users, and there are instances where voice processing falls short of users’ high expectations. Nevertheless, speech processing will certainly see great improvements in the next few years, and we need to start thinking today about what tomorrow’s interactions with voice assistants might look like.

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Hannah Kühne & Madlaina Kalunder)
<![CDATA[iA Presenter: A Case Study On Product Pricing Considerations]]> https://smashingmagazine.com/2023/06/ia-presenter-case-study-product-pricing-considerations/ https://smashingmagazine.com/2023/06/ia-presenter-case-study-product-pricing-considerations/ Fri, 02 Jun 2023 08:00:00 GMT This article is sponsored by iA

So, you’ve created a thing. That thing could be anything, say a product the world never knew it needed or maybe a stellar SaaS app that makes everyone way more productive. You had a brilliant idea and took the initiative to make it happen. It’s time to put it on the market!

But wait… how much money are you going to charge for this thing? That’s often a way more difficult question to answer than it might seem. I mean, slap a price on the tin, and that’s it, right?

The truth is that pricing a product or service is one of the more challenging aspects of product development. Pricing is an inexact science, and chances are you will not get it right the first time. But where do you even begin?

That’s where the team at Information Architects — commonly known as iA — found itself when tasked with pricing a new product called iA Presenter. iA already had a hit product on its hands, the popular iA Writer app, with its claim to fame being a minimal, distraction-free writing interface. iA Writer is already a mature offering, having been available for many years and having undergone several significant iterations since its initial release. How does a new offering like iA Presenter fit into the picture?

Let’s use iA Presenter to study the considerations that go into product pricing. Its status as a brand-new product that sits alongside an existing product with an established history makes iA Presenter an interesting case study on pricing. Plus, the iA team was generous enough to share a bunch of the research and work that went into their pricing for iA Presenter.

Finding Pricing Parallels

The first step to pricing might be looking at what others are doing. Chances are that you are not the only player in the market, and you can certainly learn by observing what others are doing. I know that’s what I did when getting into the pricing of a SaaS-based app. There were plenty of competitors in that particular market, and mapping them out in a spreadsheet was a nice way to compare the similarities and differences — not only in the prices themselves but the pricing models as well. Some were one-time purchases, but many were recurring subscriptions. Some offered free trials, while others relied on a generous return policy. Some required a credit card upfront, and others allowed you to jump right into the app. You get the idea. There’s more to pricing than meets the eye.

The key is to find parallels between what others are doing and what aligns with what you’re doing. If everyone else is selling subscriptions, then maybe that’s clear enough for you to do the same. Or perhaps it’s more of an opportunity to differentiate your product, offering a pricing model that might appeal to an overlooked segment of the market.

The purpose of finding parallels is to prevent sticker shock: you want to avoid setting a price that is a far outlier from what the rest of the market has already established.

iA says it extremely well in a blog post that’s incredibly transparent with their findings:

“As you can see, the pricing ranges from $5 to $25 per user. There are outliers on the upper scale. Some of them offer a free model for individuals or low-usage cases. As you already know, they can do that because they have venture capital or run on an ad-based model (Google). Google and PowerPoint come as part of a suite.”
—iA, “Presenter Pricing (I)”

Ah! There’s always a story lurking in the details. Outliers can exist, and they might actually be on the low end of the spectrum. Competing on price alone always feels like a risky call; just ask any company that’s had to play along with Walmart’s aggressive tactics to be a low-price leader.

Identifying Opportunities

Perhaps the most important lesson from my own pricing research is that finding parallels in the market will also provide a clearer picture of what value your product provides. Does your product do something that the others don’t? Is it so much easier to use than the rest that the user experience is where the value comes from?

Add those things to the spreadsheet! The spreadsheet becomes more of a matrix than a competitor list. You can use it to surface what’s unique about your product and lean into it when determining the overall value your product offers compared to everyone else.

Again, the iA team throws a bit of a curveball based on its recent experience:

“Whether a price is low, high, or right depends on what [customers] compare it to. Customers will compare apples and oranges”.
—iA, “Presenter Pricing (I)”

Did you catch that last point? You may need to find pricing parallels with products that are tangentially related to your market because you can’t control what you might be compared to. My own pricing journey was on a hosted calendar, and while it has way less in common with something like Google Calendar, customers would inevitably compare our offering to it because Google Calendar is such a common point of reference when talking about anything related to online calendars.

Starting The Conversation

The topic of pricing usually comes up during product development but could certainly come much sooner. The closer the finish line for development gets, the more the reality sets in that there’s work to do to get the product to market, and pricing is one step that simply cannot be skipped — how else will customers compensate you for the pleasure of getting their hands on the product?

You could start spewing numbers until one resonates with you, but that’s rather subjective. Will your customers see the same value in the product that you do? It’s worth checking, and sometimes it works to directly ask your customers — whether it’s existing customers or a target audience you’ve identified.

That’s what iA did when they published the question “How Much Would You Charge for iA Presenter?” in the aforementioned blog post from November 2022. The post provides oodles of context for readers to get an idea of what the iA team was already considering and what they’ve learned from an initial round of research on different pricing models.

What I like about this approach is the transparency, sure, but also how it leads to two other things:

  • Setting expectations
    iA had already introduced iA Presenter in another post that precedes the call for pricing opinions. But in bringing pricing to the forefront, the team is giving existing and potential customers a heads-up of what’s to come. So, even if they settled on a high price point that is an outlier in the market, at least everyone is already familiar with the thinking behind it.
  • Data
    Posing the question means they had opened the door for customers to weigh in. That’s the sort of feedback that can be designed as a survey, with the data helping inform pricing experiments and identify insightful patterns.
Parsing Information

Have you ever had to design a survey? Good gosh, that can be a frustrating experience. The challenge is to get useful feedback that leads to insights that allow you to make better decisions. But the process is all too easy to mess up, from choosing the wrong type of form input for a particular question or, worse, injecting your own biases into how things are worded. Surveys can be as much a balancing act as product pricing!

That’s why I find iA’s approach so interesting. They had the idea to ship not one version of the survey but three. This is what they shared with us:

“We divided our newsletter’s subscribers into different groups of roughly 5000 people each and sent them different versions of the form. The first group received the Version 0 of the form, and each time we updated this one, we sent it to a different group.

In retrospect, it’s clear why, but we didn’t expect the form design to affect the price suggestions so much. A lot has been written about A/B testing, form design, and questionnaire design. But here we were right in the middle of a form/questionnaire experiment and saw how directly the design affected the results. It was amazing to see all of this happening in real-time.”

It was a genius move, even if it wasn’t obvious at first. Sending three versions to different segments of the audience does a few things:

  • It considers different scenarios.
    Rather than asking its audience what pricing model they prefer, iA assumed a pricing model and put it in front of users. This way, they get a reaction to the various pricing scenarios they are considering and gain a response that is just as useful as directly asking.
  • It challenges assumptions.
    The iA team put a lot of legwork into researching pricing models and evaluating their pros and cons. That certainly helped the team form some opinions about which strategies might be the most effective to implement. But even all the research in the world doesn’t guarantee a particular outcome. Evaluating responses from a clearly defined target audience using three versions of the form allowed iA to put its assumptions to the test. Is a subscription-based model really the best way to go? Now they know!
  • It reveals customer biases.
    Anything you ask will have a degree of bias in it, so why not embrace that fact and let the customers show you their biases in the process? One version of the iA Presenter survey was based on a subscription pricing model, and the team found that some users hate subscriptions so much that they refused to fill out this form and were quite vocal about it.

I love the way iA sums up the patterns they found in the survey results and how those results were influenced by differentiating the surveys:

“We offered a form that required you to fill out monthly and yearly subscriptions plus ownership. […] We offered a second version that didn’t require you to fill out all fields. What happened there raised brows. The price suggestions changed. They got lower. We continued changing the form, and every time, the result changed.”

And with that, iA had unlocked what they needed to determine a price for iA Presenter. From a follow-up blog post that reports their findings:

“All data combined, you decided that iA Presenter should charge the industry standard of 5.- for a single license. Multiplying 5.- times twelve for a year and times three to make it worthwhile would make iA Presenter make a 150.- app.”
—iA, “Presenter Pricing (II)”

Aligning Data With Strategy

Great! iA was able to determine a specific price point with some level of scientific certainty. It would be easy enough to slap that on a price tag and start selling, but that doesn’t do justice to the full picture the data provides. Specifically, iA learned that the price point they determined would not align with all of the audience segments they surveyed.

Here’s more of what they were willing to share with us about their audience’s feelings on pricing:

  • The collective audience suggested charging the industry standard of $5 for a single license.
  • Some think that the $50 price for the existing iA Writer app is high. $100 is not that much in Switzerland, but in some countries, $100 can be a big chunk of a monthly salary. That means local pricing adjustments ought to be considered.
  • Suggestions for business subscriptions varied between $10 and $20 per month per license.
  • Students want a free tier of access.

iA is lucky enough to have an internal source of useful data, thanks to the long sales history it has with iA Writer. They found that new customers tend to prefer a subscription model, while existing (or “convinced”) customers show a preference for a single purchase.

So, it’s more like they were looking at different pricing tiers instead of a flat rate. Their audience is all over the map as far as pricing expectations go, and a pricing model that offers choices based on the type of customer (e.g., business vs. student) and where people are located geographically is likely to cast a wider net and attract more customers than a single price point would. So, even if verified students are able to get the product for free, that should be offset by the price points for single-license customers and businesses.

Wrapping Up

What we’ve looked at are several important considerations that go into product pricing. The work it takes to determine a price goes way past subjective guesses. Pricing is one of the “Four Ps of Marketing” that influence a product’s market position and how customers perceive it.

Setting a price is a statement of the product’s quality and the value it adds to the market.

That’s the sort of thing you can’t leave to chance.

That said, it’s clear that determining a product price is far from an exact science. The challenge is to elicit the right information that leads to insights that are more reflective of and aligned with the expectations of the target audience. Will they pay the price you want?

There are many other considerations that go into pricing, to be sure. You might discover that the price the market is willing to pay is unsustainable and does not cover enough of the costs that went into product development or the ongoing costs of maintenance, developing new features, marketing, support, salaries, and so on. You don’t want to enter a race to the bottom, after all.

iA Presenter makes for a great case study on product pricing. The fact that it’s the type of software that those of us in the web design and development community often work on makes it an extremely relevant example. Plus, iA put a great deal of effort into its research and was generous enough to share it with us, which gives us a nice, recent snapshot of a real-world situation.

And, hey, now that you know everything that went into setting prices for iA Presenter, you should check it out. Do you think they made the right choice? Will the multi-tier pricing strategy work next to market competitors who are more mature and are able to practically give away their stuff for free, like Google Slides? We’ll find out soon, as iA Presenter is officially out of beta and was released to the public on June 1st. You can follow along with their ongoing journey of shipping a new product on their blog or by signing up for their newsletter.

]]>
hello@smashingmagazine.com (Geoff Graham)
<![CDATA[Advanced Form Control Styling With Selectmenu And Anchoring API]]> https://smashingmagazine.com/2023/06/advanced-form-control-styling-selectmenu-anchoring-api/ https://smashingmagazine.com/2023/06/advanced-form-control-styling-selectmenu-anchoring-api/ Thu, 01 Jun 2023 10:00:00 GMT There is a new experimental element, <selectmenu>, that will make styling this type of form control a whole lot better. You’re going to walk through an early implementation of this new experimental element by creating a pattern that you would never have thought possible with CSS alone — a radial selection menu.]]> No doubt you’ve had to style a <select> menu before. And when you have, you’ve probably had to reach deep into your CSS arsenal of tricks or rely on JavaScript to get anywhere near the level of customization you want. It’s a long-running headache in the front-end world.

Well, thanks to the efforts of the Open UI community, we have a new <selectmenu> element to look forward to, and its purpose is to provide CSS styling affordances to selection menus in ways we’ve never had before.

We’re going to demonstrate an initial implementation of <selectmenu> in this article. But we’ll throw in a couple of twists while we’re at it. What we’re making is a radial select menu, something we could never have done with CSS alone. And since we’re working with experimental tech, we’re going to toss in more experimental features along the way, including images, the HTML Popover API, and the CSS Anchor Positioning API. The result is going to wind up like this:

Before we get to the radial part, it helps to know the anatomy of the <selectmenu> element. One way to use it is to lean on the default structure the browser provides, which is made up of the following parts:

  • <selectmenu>: This is the selector itself. It holds the button and listbox of menu options.
  • button: This part toggles the visibility of the listbox between its open and closed states.
  • selected-value: This displays the value of the menu option that is currently selected. So, if you have a listbox with three options and the second option is selected, the second option is what matches the part.
  • marker: Dropdown menus usually have some sort of downward-facing arrow icon to indicate that the menu can be expanded. This is that part of the menu.
  • listbox: This is the wrapper that contains the options and any <optgroup> elements that group certain options together inside the listbox.
  • <optgroup>: We already let the cat out of the bag on this one, but this part groups options together. It includes a label for the group.
  • <option>: A value that the user is able to select in the menu. There can be just one, but it’s much more common to see a <select> — and, by extension, a <selectmenu> — with multiple options.

The other way is to slot the content ourselves in HTML. This can be a nice approach since it allows us to customize the markup any way we like. In other words, we can replace any of the parts we want, and the browser will use our markup instead of the implicit structure. In fact, this is the approach we’ll use in the radial menu we’re making.

The way to replace parts in the HTML is to use slots. The element’s default markup for each part lives in a separate tree in the Shadow DOM, and the markup we slot in replaces those default contents with what we specify.

Here’s an abbreviated example in HTML. Notice how the <button> and listbox are both contained in slots that represent the HTML we want to use for those parts.

<selectmenu class="my-custom-select">
  <div slot="button">
    <span behavior="selected-value" slot="selected-value"></span>
    <button behavior="button"></button>
  </div>
  <div slot="listbox">
    <div popover="auto" behavior="listbox">
       <option value="one">one</option>
       <option value="two">two</option>
    </div>
  </div>
</selectmenu>

By using the slot and behavior attributes, we can tell the browser how the element should behave and how it should interact with keyboard navigation. If managed carefully, this also means we get good accessibility out of the box because the browser knows how to behave based on what we define.

Ready? OK, let’s start by setting up our markup for our radial <selectmenu>.

The Radial Selectmenu Markup

We will start by creating our own markup for this basic example. We will use pretty much the same approach as used in the explainer of the Selectmenu element because I think it demonstrates the vast flexibility we have to style this element using similar markup.

<selectmenu class="selectmenu">
  <button class="selected-button" slot="button" behavior="button">
    <span behavior="selected-value" class="selected-value"></span>
  </button>
  <div slot="listbox">
    <div popover behavior="listbox">
      <option value="one">one</option>
      <option value="two">two</option>
      <option value="three">three</option>
      <option value="four">four</option>
      <option value="five">five</option>
      <option value="six">six</option>
    </div>
  </div>
</selectmenu>

You might notice from the markup that we’ve added the selected-value behavior in the button. This is perfectly fine, as our button will always show the selected value by doing this.

And, just like the example in the explainer, we are using the Popover API inside of our listbox slot. When we look at what we have in Chrome Canary, we can see that it already works fine. Take note that even keyboard navigation already seems to be handled for us!

We can place the options along the circle when the popover is open by applying the following transform to each of them:

[popover]:popover-open option {
  /* Half the size of the circle */
  --half-circle: calc(var(--circle-size) / -2);

  /* Straighten things up and space them out */
  transform:
      rotate(var(--deg))
      translate(var(--half-circle))
      rotate(var(--negative-deg));
}

Now, when the popover-open state is triggered, we rotate each option by a certain number of degrees, translate it outward by half the circle size, and then rotate it back by the same number of degrees in the opposite direction so that its label stays upright. The order of the transforms is important!

I said we would rotate the options “by a certain number of degrees” because we have to do it for each individual option. This is totally possible in vanilla CSS (and that’s how we’re going to do it), but it could also be done with a Sass loop or even with JavaScript if we needed it.
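
If you’re curious what the JavaScript route could look like, here’s a rough sketch of one possible approach. It’s not part of the demo; the selector and the math are simply based on the markup and custom properties we’ve been using so far:

// A rough alternative: compute each option's angle in JavaScript
// instead of writing one CSS rule per option.
const options = document.querySelectorAll(".selectmenu option");
const step = 360 / options.length;

options.forEach((option, index) => {
  // The first option gets one step, the second two steps, and so on.
  option.style.setProperty("--deg", `${step * (index + 1)}deg`);
});

For now, though, we’ll stick with the CSS-only route.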

Let’s add this to our popover style rules:

[popover] {
  --rotation-divide: calc(180deg / 2);

  /* etc. */
}

This will be our default rotation, and it’s a special case for when we only have one option. We’ll use 360deg for the rest in a moment.

For now, we can select the first option and set the --rotation-divide variable on it:

option:nth-child(1) {
  --deg: var(--rotation-divide);
}

Great! Why you would use a select when there is only one option, I don’t know, but nevertheless, it’s handled gracefully:

Styling the other options takes a bit of work because we have to:

  • Divide the circle by the number of available options and
  • Multiply that result for each option.

I’m so glad we have the calc() function in CSS to help us do this. Otherwise, it would be some pretty heavy lifting.

[popover]:has(option:nth-child(2)) {
  --rotation-divide: calc(360deg / 2);
}

[popover]:has(option:nth-child(3)) {
  --rotation-divide: calc(360deg / 3);
}

[popover]:has(option:nth-child(4)) {
  --rotation-divide: calc(360deg / 4);
}

[popover]:has(option:nth-child(5)) {
  --rotation-divide: calc(360deg / 5);
}

[popover]:has(option:nth-child(6)) {
  --rotation-divide: calc(360deg / 6);
}

option:nth-child(1) {
  --deg: var(--rotation-divide);
}

option:nth-child(2) {
  --deg: calc(var(--rotation-divide) * 2);
}

option:nth-child(3) {
  --deg: calc(var(--rotation-divide) * 3);
}

option:nth-child(4) {
  --deg: calc(var(--rotation-divide) * 4);
}

option:nth-child(5) {
  --deg: calc(var(--rotation-divide) * 5);
}

option:nth-child(6) {
  --deg: calc(var(--rotation-divide) * 6);
}

/* that’s enough options for you! */
option:nth-child(1n + 7) {
  display: none;
}

Here’s a live demo of what this produces. Remember, Chrome Canary is the only browser that currently supports this, as long as the experimental features flag is enabled.

See the Pen Radial selectmenu with Anchoring API - Open UI [forked] by @utilitybend.

Do We Need All Those :has() Pseudo-Classes?

Yeah, I think so, as long as we’re using plain CSS. And that’s been my goal all along. That said, JavaScript could be useful here.

For example, we could add an ID to the element with the popover attribute and count the children it contains:

// Count the options inside the listbox and expose the total to CSS
const popoverListbox = document.getElementById('popoverlistbox');
const optionAmount = popoverListbox.childElementCount;
popoverListbox.style.setProperty('--children', optionAmount);

That way, we can replace all the :has() instances with more concise styles:

option {
  --rotation-divide: calc(360deg / var(--children));
  --negative-deg: calc(var(--deg) / -1);
}

For this demo, however, you might still want to limit the --children custom property to a maximum of 6. I’ve found that’s the sweet spot before the circle gets too crowded and needs additional tweaks.
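
One way to enforce that limit is to clamp the number before setting the custom property. This is just a suggestion layered on top of the earlier snippet, not something the API requires:

// Cap the count so the circle never gets too crowded
const popoverListbox = document.getElementById('popoverlistbox');
const optionAmount = Math.min(popoverListbox.childElementCount, 6);
popoverListbox.style.setProperty('--children', optionAmount);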

See the Pen Radial selectmenu Open UI with JS children count [forked] by @utilitybend.

Let’s Animate This Thing

There are a few more CSS features coming up that will make animating popovers a lot easier. But they’re not ready for us yet, even for this example.

We can get around that with a little trick. But please keep in mind that what we’re about to do will not be the best practice when we get the new animating features. I wanted to give you the information anyway because I think it’s a nice enhancement for what we’re making.

First, let’s add the following to our popover selector:

[popover] {
  display: block;
  position: absolute;
  /* etc. */
}

This makes it so our popover is always displayed as a block element and ready to go wherever it is placed, and it gives us a stacking context to work with.

We will lose the benefit of our top layer popover and will have to play around with a z-index to get the effect we want. Juggling z-index values — especially with a large number of items — is never fun. It gets messy fast. That’s one of the ways popovers were designed to help us.

But let’s go ahead and give our button a z-index:

.selected-button {
  z-index: 2;
  /* etc. */
}

Now we can reveal the options with an animation by using the :not() pseudo-class to reset the transform whenever the popover is in its closed state:

[popover]:not(:popover-open) {
  z-index: -1;
}

[popover]:not(:popover-open) option {
  transform: rotate(var(--deg)) translate(0) rotate(var(--negative-deg));
}
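
One small detail the finished demo depends on that isn’t shown above: for the options to glide into place rather than snap, they need a transition on their transform. Something along these lines will do, though the exact timing values here are my own guess rather than anything pulled from the demo:

option {
  /* Assumed timing values; tweak to taste */
  transition: transform 0.3s ease-in-out;
}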

And there you have it! An animated radial <selectmenu>:

See the Pen Radial selectmenu with Anchoring API and animation [forked] by @utilitybend.

Let’s Add Some Images While We’re At It

There was quite a bit of discussion about this in the Open UI community, and the selected value does not take the option’s innerHTML by default because, for one thing, that could result in duplicated IDs. But I sure do love a good old role-playing game, so I decided to use the <selectmenu> as a potion selector.

This is completely based on everything we just covered, only adding images to demonstrate that it is possible:

See the Pen Open-UI - Select a potion (Chrome Canary) [forked] by @utilitybend.

With a sprinkle of JavaScript (for this totally optional enhancement), we can grab the innerHTML of the selected option and pass it to our .selected-value element:

const selectMenus = document.querySelectorAll("selectmenu");
selectMenus.forEach((menu) => {
  const selectedvalue = menu.querySelector(".selected-value");
  selectedvalue.innerHTML = menu.selectedOption.innerHTML;
  menu.addEventListener("change", () => {
    selectedvalue.innerHTML = menu.selectedOption.innerHTML;
  });
});
Conclusion

I don’t know about you, but all of this gets me super excited for the future. Everything we looked at, from the Selectmenu element to the CSS Anchor Position API, is still a work in progress. Still, we can already see the great number of possibilities they will open up for us as designers and developers.

The fact that all of this is coming by way of built-in browser features is what’s most exciting because it gives us a standard way to approach things like customized <select> menus, popovers, and anchoring to the extent that it could eliminate the need for frameworks or libraries that we use today for the same things. We win because we get more control, and users win because they get lighter page loads.

If you’d like to do a bit of research on Selectmenu or even get involved with the Open UI community, you’re more than welcome, as we need more developers to create demos and share their struggles to help make these features better if — and when — they ship.

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Brecht De Ruyte)
<![CDATA[Create Your Own Path In June (2023 Wallpapers Edition)]]> https://smashingmagazine.com/2023/05/desktop-wallpaper-calendars-june-2023/ https://smashingmagazine.com/2023/05/desktop-wallpaper-calendars-june-2023/ Wed, 31 May 2023 08:00:00 GMT There’s an artist in everyone. Some bring their ideas to life with digital tools, others capture the perfect moment with a camera or love to grab pen and paper to create little doodles or pieces of lettering. And even if you think you’re far from being an artist, well, it might just be hidden deep inside of you. So why not explore it?

For more than twelve years already, our monthly wallpapers series has been the perfect opportunity to do just that: to break out of your daily routine and get fully immersed in a creative little project. This month was no exception, of course.

In this collection, you’ll find beautiful, unique, and inspiring wallpapers designed by creative folks who took on the challenge this month. All of them are available in versions with and without a calendar for June 2023 and can be downloaded for free. As a little bonus goodie, we also compiled a selection of timeless June wallpapers from our archives at the end of this post. Maybe you’ll spot one of your almost-forgotten favorites in there, too? A big thank-you to everyone who shared their designs with us this month! Happy June!

  • You can click on every image to see a larger preview,
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express their emotions and experiences through their works. This is also why the themes of the wallpapers weren’t influenced by us in any way but rather designed from scratch by the artists themselves.
  • Submit a wallpaper!
    Did you know that you could get featured in our next wallpapers post, too? We are always looking for creative talent.

World Environment Day

“An annual event celebrated on June 5th to raise awareness and promote action for the protection of the environment. It serves as a global platform for individuals, communities, and governments to come together and address pressing environmental issues. So I decided to design this wallpaper and to promote awareness among us. Hope you like it.” — Designed by Hrishikesh Shome from India.

Back In My Days

Designed by Ricardo Gimenes from Sweden.

Boundless Joy

“Boundless Joy is a magical realm where children and dogs find pure delight. It’s a place where laughter echoes through sunlit meadows and imaginations take flight. In this enchanting world, youthful spirits soar as kids and their furry companions chase dreams, playfully bound together. With every step, Boundless Joy sparks smiles, ignites friendships, and creates memories that last a lifetime.” — Designed by Kasturi Palmal from India.

Cuban Bartender

“Summer arrives and with it the long days and nights that allow us to enjoy the weather. We are heading to Cuba and from the Malecón we observe the city waiting for the new day.” — Designed by Veronica Valenzuela from Spain.

Blue Butterfly

“Captured with Sony A7II and FE 90mm F2.8 Macro lens. Macro photography is my favorite.” — Designed by Viktor Hanacek from Czechia.

Holding Out For Me

“Effectively captures the essence of a girl observing the view outside through a window. It conveys the image of someone attentively observing or gazing at what’s happening outside, suggesting a sense of curiosity or contemplation.” — Designed by Bhabna Basak from India.

Pre-Wash Instructions

Designed by Ricardo Gimenes from Sweden.

Summer Palms

“Looks like Bahamas, but these are from San Francisco! Yep, photographers’ secrets!” — Designed by Viktor Hanacek from Czechia.

Raise A Glass To World Milk Day

“World Milk Day is a reminder to appreciate the nourishing qualities of milk and the impact it has on our well-being. Whether enjoyed on its own, added to a smoothie, or used to create mouthwatering recipes, milk is a versatile and wholesome ingredient that deserves to be celebrated.” — Designed by PopArt Studio from Serbia.

Oldies But Goodies

So many wonderful wallpaper designs have seen the light of day since we first embarked on this monthly journey. Below you’ll find a selection of favorites from past June editions. Please note that these wallpapers don’t come with a calendar.

Create Your Own Path

“Nice weather has arrived! Clean the dust off your bike and explore your hometown from a different angle! Invite a friend or loved one and share the joy of cycling. Whether you decide to go for a city ride or a ride in nature, the time spent on a bicycle will make you feel free and happy. So don’t wait, take your bike and call your loved one because happiness is greater only when it is shared. Happy World Bike Day!” — Designed by PopArt Studio from Serbia.

Summer Coziness

“I’ve waited for this summer more than I waited for any other summer since I was a kid. I dream of watermelon, strawberries, and lots of colors.” — Designed by Kate Jameson from the United States.

Old Kyiv

“This picture is dedicated to Kiev (Kyiv), the capital of Ukraine. It is loosely based on a 13th century map — this is what the center of Kyiv looked like ca. 900 years ago! The original map also included the city wall — however, I decided not to wrap the buildings into the wall, since in my dream world, a city would not need walls.” — Designed by Vlad Gerasimov from Georgia.

Travel Time

“June is our favorite time of the year because the keenly anticipated sunny weather inspires us to travel. Stuck at the airport, waiting for our flight but still excited about wayfaring, we often start dreaming about the new places we are going to visit. Where will you travel to this summer? Wherever you go, we wish you a pleasant journey!” — Designed by PopArt Studio from Serbia.

Strawberry Fields

Designed by Nathalie Ouederni from France.

Oh, The Places You Will Go!

“In celebration of high school and college graduates ready to make their way in the world!” — Designed by Bri Loesch from the United States.

Expand Your Horizons

“It’s summer! Go out, explore, expand your horizons!” — Designed by Dorvan Davoudi from Canada.

Summer Surf

“Summer vibes…” — Designed by Antun Hirsman from Croatia.

Summertime

Designed by Ricardo Gimenes from Sweden.

Deep Dive

“Summer rains, sunny days, and a whole month to enjoy. Dive deep inside your passions and let them guide you.” — Designed by Ana Masnikosa from Belgrade, Serbia.

Join The Wave

“The month of warmth and nice weather is finally here. We found inspiration in the World Oceans Day which occurs on June 8th and celebrates the wave of change worldwide. Join the wave and dive in!” — Designed by PopArt Studio from Serbia.

Melting Away

Designed by Ricardo Gimenes from Sweden.

Bauhaus

“I created a screenprint of one of the most famous buildings from the Bauhaus architect Mies van der Rohe for you. So, enjoy the Barcelona Pavillon for your June wallpaper.” — Designed by Anne Korfmacher from Germany.

World Environment Day

“On June 5th, we celebrate World Environment Day — a moment to pause and reflect on how we impact Earth’s health. A few activities represented in this visual include conserving energy and water, shopping and growing local, planting flowers and trees, and building a sustainable infrastructure.” — Designed by Mad Fish Digital from Portland, OR.

Pineapple Summer Pop

“I love creating fun and feminine illustrations and designs. I was inspired by juicy tropical pineapples to celebrate the start of summer.” — Designed by Brooke Glaser from Honolulu, Hawaii.

Window Of Opportunity

“‘Look deep into nature and then you will understand everything better,’ A.E.” — Designed by Antun Hiršman from Croatia.

Midsummer Night’s Dream

“The summer solstice in the northern hemisphere is nigh. Every June 21 we celebrate the longest day of the year and, very often, end up dancing like pagans. Being landlocked, we here in Serbia can only dream about tidal waves and having fun at the beach. What will your Midsummer Night’s Dream be?” — Designed by PopArt Studio from Serbia.

Papa Merman

“Dream away for a little while to a land where June never ends. Imagine the ocean, feel the joy of a happy and carefree life with a scent of shrimps and a sound of waves all year round. Welcome to the world of Papa Merman!” — Designed by GraphicMama from Bulgaria.

Gravity

Designed by Elise Vanoorbeek (Doud Design) from Belgium.

Solstice Sunset

“June 21 marks the longest day of the year for the Northern Hemisphere — and sunsets like these will be getting earlier and earlier after that!” — Designed by James Mitchell from the United Kingdom.

Yoga Is A Light, Which Once Lit, Will Never Dim

“You cannot always control what goes on outside… you can always control what goes on inside… Breathe free, live and let your body feel the vibrations and positiveness that you possess inside you. Yoga can rejuvenate and refresh you and ensure that you are on the journey from self to the self. Happy International Yoga Day!” — Designed by Acodez IT Solutions from India.

Summer Things

“Summer is coming so I made this simple pattern with all my favorite summer things.” — Designed by Maria Keller from Mexico.

Night Night!

“The time we spend with our dads is precious so I picked an activity my dad enjoys a lot, reading.” — Designed by Maria Keller from Mexico.

Evolution

“We’ve all grown to know the month of June through different life stages. From toddlers to adults with children, we’ve enjoyed the weather with rides on our bikes. As we evolve, so do our wheels!” — Designed by Jason Keist from the United States.

Handmade Pony Gone Wild

“This piece was inspired by the My Little Pony cartoon series. Because those ponies irritated me so much as a kid, I always wanted to create a bad ass pony.” — Designed by Zaheed Manuel from South Africa.

Getting Better Everyday

“Inspired by the eternal forward motion to get better and excel.” — Designed by Zachary Johnson-Medland from the United States.

Comfort Reading

Designed by Bobby Voicu from Portugal.

Happy Squatch

“I just wanted to capture the atmosphere of late spring/early summer in a fun, quirky way that may be reflective of an adventurous person during this time of year.” — Designed by Nick Arcarese from the United States.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[Designing A Better Design Handoff File In Figma]]> https://smashingmagazine.com/2023/05/designing-better-design-handoff-file-figma/ https://smashingmagazine.com/2023/05/designing-better-design-handoff-file-figma/ Fri, 26 May 2023 15:30:00 GMT Creating an effective handoff process from design to development is a critical step in any product development cycle. However, as any designer knows, it can be a nerve-wracking experience to send your carefully crafted design off to the dev team. It’s like waiting for a cake to bake — you can’t help but wonder how it will evolve in the oven and how it will taste when you take it out of the oven.

The relationship between designers and developers has always been a little rocky. Despite tools like Figma’s Inspect feature (which allows developers to inspect designs and potentially convert them to code in a more streamlined way), there are still many barriers between the two roles. Often, design details are hidden within even more detailed parts, making it difficult for developers to accurately interpret the designer’s intentions.

For instance, when designing an image, a designer might import an image, adjust its style, and call it done. More sophisticated designers might also wrap the image in a frame or auto layout so it better matches how developers will later convert it to code. But even then, many details could still be missing. The main problem here is that designers typically create their designs within a finite workspace (a frame with a specific width). In reality, however, the design elements will need to adapt to a variety of different environments, such as varying device sizes, window widths, screen resolutions, and other factors that can influence how the design is displayed. Therefore, developers will always come back with the following questions:

  • What should be the minimum/maximum width/height of the image?
  • What is its content style?
  • What effects need to be added?

In reality, these are the details that need to be addressed.

Designers, let’s face the truth: there’s no perfect handoff.

Every developer works, thinks, and writes code differently, which means there is no such thing as the ideal handoff document. Instead, our focus should be on creating a non-perfect but still effective and usable handoff process.

In this article, we will explore how to create a design handoff document that attempts to strike the right balance between providing developers with the information they need while still allowing them the flexibility to bring the design to life in their own way.

How Can The Handoff Files Be Improved?

1. Talk To Developers More Often

Design is often marked as complete once the design handoff file is created and the developers start transforming it into code. However, in reality, the design is only complete when the user finds the experience pleasant.

Therefore, crafting the design handoff file and having the developer help bring your design to the user is essentially another case study on top of the one you have already worked on. To make it perfect, just as you would talk to users, you also need to communicate with engineers — to better understand their needs, how they read your file, and perhaps even teach them a few key things about using Figma (if Figma is your primary design tool).

Here are a few tips you can teach your developers to make their lives easier when working with Figma:

Show Developers The Superpower Of The Inspect Panel

Figma’s Inspect feature allows developers to see the precise design style that you’ve used, which can greatly simplify the development process. Additionally, if you have a design library in place, Inspect will display the name of each component and style that you’ve used. This can be incredibly helpful for developers, especially if they’re working with a style guide, as they can use the component or style directly to match your design with ease.

In addition to encouraging developers to take advantage of the Inspect panel, it’s sometimes helpful to review your own design in a read-only view. This allows you to see precisely what the developers will see in the Inspect panel and ensures that components are named accurately, colors are properly linked to the design system, and other vital details are structured correctly.

Share With Developers The Way To Export Images/Icons

Handling image assets, including icons, illustrations, and images, is also an essential part of the handoff process, as the wrong format might result in a poor presentation in the production environment.

Be sure to align with your developers on how they would like you to handle the icons and images. They may prefer that you export all of the images and icons in a single ZIP file and share it with them, or they may prefer to export the images and icons on their own. If it’s the latter, it’s important to explain in detail the correct way to export the images and icons so that they can handle the export process themselves!

Encourage Them To Use Figma’s Commenting Feature

It’s common for developers to have questions about the design during the handoff process. To make it easier for everyone involved, consider teaching them to leave comments directly in Figma instead of sending you a message. This way, the comments are visible to everyone and provide context for the issue at hand. Additional features, such as comment reactions and the “mark as resolved” button, further enable better interaction between team members and help everyone keep track of whether an issue has been addressed or not.

Leverage Cursor Chat

If you and the developers are both working within the same Figma file, you can also make use of the cursor chat feature to clarify any questions or issues that arise. This can be a fun and useful way to collaborate and ensure that everyone is on the same page.

Use Figma Audio Chat

If you need to discuss a complex issue in more detail, consider using Figma’s audio chat feature. This can be a quick and efficient way to clarify any questions or concerns arising during the development process.

It’s important to keep in mind that effective collaboration relies on good communication. Therefore, it’s crucial to talk to your developers regularly and understand their approach to reading and interpreting your designs, especially when you first start working with them. This sets the foundation for a productive and successful partnership.

2. Documenting Design Decisions For You And Developers

We have to be honest: building our design portfolios often takes a lot of time because we don’t document every design decision along the way, and so we have to piece the case studies together later, trying our best to track down the design files and everything else we need.

I find it useful to document my decisions in Figma, and not just the designs: if appropriate, I also include competitor analysis, problem statements, and user journeys, and I leave links to these pages within the handoff file as well. The developers might not read it all, but I often hear from the developers on my team that they like it because they can dig into what the designers were thinking while working on the design and pick up product-building tips from us along the way.

3. Don’t Just Leave The Design There. Add The Details

When it comes to design, details matter — just leaving the design “as is” won’t cut it. Adding details not only helps developers better understand the design, but it can also make their lives easier. Here are some tips for adding those crucial design details to your handoff.

Number The Frame/Flow If Possible

I really like the Figma handoff template that Luis Ouriach (Medium, Twitter) created. The numbering and title pattern makes it easy for developers to understand which screen belongs to which flow immediately. However, it can be complicated to update the design later as the numbering and title need to be manually updated.

Note: While there are plugins available (like, for example, Renamed), which can help with renaming multiple frames and layers all at once, this workflow can still be inconvenient when dealing with more complicated naming patterns. For instance, updating “1. Welcome → 2. Onboarding → 3. Homepage” into “1. Welcome → 2. Onboarding → 3. Sign up → 4. Homepage” can become quite a hassle. Therefore, one alternative approach is to break down the screens into different tickets or user journeys and assign a number that matches each ticket/user journey.

Name The Layers If Possible

We talked about numbering/naming the frames, but naming the layers is equally important! Imagine trying to navigate a Figma file cluttered with layers and labels like “Frame 3123,” “Rectangle 8,” and “Circle 35.” This can be confusing and time-consuming for both designers and developers, as they need to sift through numerous unnamed layers to identify the correct one.

Well-named layers facilitate better collaboration, as team members can quickly locate and comprehend the purpose of each element. This also helps ensure consistency and accuracy when translating designs into code.

If you search around in Figma, you will find a number of plugins that can help you with naming the layers in a more systematic way.
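
If you’re wondering what those plugins are doing behind the scenes, here’s a minimal sketch written against the Figma Plugin API. The numbering pattern is purely an example I made up; a real plugin would let you configure it:

// Minimal Figma plugin sketch: prefix each selected layer's name with its
// position so the layer list reads "01 - Header", "02 - Hero", and so on.
const selection = figma.currentPage.selection;

selection.forEach((node, index) => {
  const position = String(index + 1).padStart(2, "0");
  node.name = `${position} - ${node.name}`;
});

figma.closePlugin(`Renamed ${selection.length} layers`);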

Add The Details For Interaction: Make Use Of Figma’s Section Feature

This might seem trivial, but I consider it important. Design details shouldn’t be something like “This design does X, and if you press that, it will do Y.” Instead, it’s crucial to include details like the hover state, initial state, max width/height, and the outcome of different use cases.

For this reason, I appreciate the new section feature that Figma has released. It allows me to have a big design at the top so that developers can see all of the design at once and then look at the section details for all the design and interaction details.

Make Use Of The Interactive Prototype And FigJam Features To Show The User Flow

Additionally, try to share with the developers how the design screens connect to one another. You can use the interactive prototype feature within Figma to connect the screens and make them move so that developers can understand the logic. Alternatively, you can use FigJam to connect the screens, allowing developers to see how everything is connected at a glance.

4. The Secret Weapon Is Adding Loom Video

Loom video is a lifesaver for us. You only need to record it once, and then you can share it with anyone interested in the details of your design. Therefore, I highly recommend making use of Loom! For every design handoff file, I always record a video to walk through the design. For more complicated designs, I will record a separate video specifically describing the details so that I don’t need to waste other people’s time if they’re not interested.

To attach the Loom video, I use the Loom plugin and place it right beside the handoff file. Developers can play it as many times as needed without even disturbing you, asking you more questions, and so on.

→ Download the Loom Embed Figma plugin

5. The Biggest Fear: Version Control

In an ideal world, the design would be completely finalized before developers start coding. But in reality, design is always subject to adjustments, even after development has already begun. That’s why version control is such an important topic.

Although Figma has a branching feature for enterprise customers to create new designs in a separate branch, I find it helpful to keep a few extra things in your design file.

Have A Single Source Of Truth

Always ensure that the developer handoff file you share with your team is the single source of truth for the latest design. If you make any changes, update the file directly, and keep the original as a duplicate for reference. This will prevent confusion and avoid pointing developers to different pages in Figma.

If you have access to the branching feature in Figma, it can be highly beneficial to utilize it to further streamline your workflow. When I need to update a handoff file that I have already shared with the developers, my typical process is to create a new branch in Figma first. Then I update the developer handoff file in that branch, send it to the relevant stakeholders for review, and finally merge it back into the original developer handoff file once everything is confirmed. This ensures that the link to the developer handoff file remains unchanged for the developers.

Changelogs/Future Plan

Include a changelog in the handoff file to help developers understand the latest changes made to the design.

Similarly to changelogs, if you already know of future plans to adjust the design, write them down somewhere in Figma so that the developers can understand what changes are to be expected.

6. Make Use Of Plugins

There are also a number of plugins to help you with creating your handoff:

  • EightShapes Specs
    EightShapes Specs creates specs for your design automatically with just one click.
    → Download the EightShapes Spec Figma plugin
  • Autoflow
    Autoflow allows you to connect the screens visually without using FigJam.
    → Download the Autoflow Figma plugin
  • Style Organizer
    Style Organizer allows you to make sure all of your styles are linked to your components/styles so that developers never have to deal with raw hex codes.
    → Download the Style Organizer Figma plugin
7. The Ultimate Goal Is To Have A Design System

If you want to take things a step or two further, consider pushing your team to adopt a design system. This will enable the designs created in Figma to be more closely aligned with what developers expect in the code. You can match token names and name your layers/frames to align with how developers name their containers in the code, keeping your design and your design system consistent with each other.

Here are some of the benefits of using a design system:

  • Consistency
    A design system ensures a unified visual language across different platforms, resulting in a more consistent user experience.
  • Efficiency
    With a design system in place, designers and developers can reuse components and patterns, reducing the time spent on creating and updating individual elements.
  • Collaboration
    A design system facilitates better communication between designers and developers by establishing a shared language and understanding of components and their usage.

Note: If you would like to dig deeper into the topic of design systems, I recommend reading some of the Smashing Magazine articles on this topic.

Conclusion: Keep Improving The Non-perfect

Ultimately, as I mentioned at the beginning, there’s no one-size-fits-all approach to developer handoff, as it depends on various factors such as product design and the engineers we work with. However, what we can do is work closely with our engineers, communicate with them regularly, and collaborate to find solutions that make everyone’s lives easier. Just like our designs, the key to successful developer handoff is prioritizing good communication and collaboration.

Further Reading

  • “Design Handoffs,” Interaction Design Foundation
    Design handoff is the process of handing over a finished design for implementation. It involves transferring a designer’s intent, knowledge, and specifications for a design and can include visual elements, user flows, interaction, animation, copy, responsive breakpoints, accessibility, and data validations.
  • “A Comprehensive Guide to Executing The Perfect Design-to-Development Handoff,” Phase Mag
  • “Design Handoff 101: How to handoff designs to developers,” Zeplin Blog
    Before we had tools like Figma, design handoff was a file-sharing nightmare for designers. When UI designs were ready for developers to start building, nothing could begin until designers manually added redlines to their latest local design file, saved it as a locked Sketch or Photoshop file or a PDF, and made sure developers were working on the correct file after every update. But those design tools completely changed the way teams collaborate around UI design — including the way design handoff happens.
  • “How to communicate design to developers (checklist),” Nick Babich
  • “A Front-End Developer’s Ode To Specifications,” Dmitriy Fabrikant, Smashing Magazine
    In the physical world, no one builds anything without detailed blueprints because people’s lives are on the line. In the digital world, the stakes just aren’t as high. It’s called “software” for a reason: when it hits you in the face, it doesn’t hurt as much. But, while the users’ lives might not be on the line, design blueprints (also called design specifications or specs) could mean the difference between a correctly implemented design that improves the user experience and satisfies customers and a confusing and inconsistent design that corrupts the user experience and displeases customers. (Editor’s Note: Before tools like Figma were on the rise, it was even more difficult for designers and developers to communicate and so tools such as Specctr — which this article mentions — were much needed. As of today, this article from 2014 is a bit of a trip into history, but it will also give you a fairly good idea of what design blueprints are and why they are so important in the designer-developer handoff process.)
  • “Everything Developers Need To Know About Figma,” Jurn van Wissen, Smashing Magazine
    Unlike most design software, Figma is free and browser-based, so developers can easily access the full design files making the developer handoff process significantly smoother. This article teaches developers who have nothing but a basic understanding of design tools everything they need to know to work with Figma.
  • “Penpot, An Open-Source Design Platform Made For Designers And Developers Alike,” Mikołaj Dobrucki, Smashing Magazine
    In the ever-evolving design tools landscape, it can be difficult to keep up with the latest and greatest. In this article, we’ll take a closer look at Penpot, the first design and prototyping tool that’s fully open-source and based on open web standards, making it an ideal choice for both designers and developers. (Editor’s Note: Today, it’s not always “There’s only Figma.” There are alternatives, and this article takes a good look at one of them — Penpot.)
  • “The Best Handoff Is No Handoff,” Vitaly Friedman, Smashing Magazine
    Design handoffs are inefficient and painful. They cause frustration, friction, and a lot of back and forth. Can we avoid them altogether? Of course, we can! Let’s see how to do just that.
]]>
hello@smashingmagazine.com (Ben Shih)
<![CDATA[Meet Success At Scale, A New Smashing Book By Addy Osmani]]> https://smashingmagazine.com/2023/05/success-at-scale-pre-release/ https://smashingmagazine.com/2023/05/success-at-scale-pre-release/ Thu, 25 May 2023 20:00:00 GMT Print and eBook shipping in fall 2023. Pre-order the book.]]> Today, we are very happy to announce our new book: Success at Scale, a curated collection of best-practice case studies capturing how production sites of different sizes tackle performance, accessibility, capabilities, and developer experience at scale. Case studies are from industry experts with guidance that stands the test of time.

Join Addy Osmani, your curator, as we dive into a nuanced look at several key topics that will teach you tips and tricks that may help you optimize your own sites. The book will also include short interviews with each contributor on what additional lessons, challenges, and tips they have to share some time after each case study was written.

High-quality hardcover. Curated by Addy Osmani. Cover art by Espen Brunborg. Print and eBook shipping in fall 2023. Pre-order the book.

Contents

Each section of the book is filled with case studies from real-world large-scale web applications and services, interviews with the people involved, and key takeaways to help you achieve the same success.

  • Performance includes examples of measuring, budgeting, optimizing, and monitoring performance, in addition to tips for building a performance culture.
  • Capabilities is about bridging the gap between native capabilities and the modern web. You’ll explore web apps, native apps, and progressive web applications.
  • Accessibility makes web apps viable for diverse users, including people with temporary or permanent disabilities. Most of us will have a disability at some point in our lives, and these case studies show how we can make the web work for all of us.
  • Developer Experience is about building a project environment and culture that encourage support, growth, and problem-solving within teams. Strong teams build great projects!
Who This Book Is For

This book is for professional web developers and teams who want to deliver high-quality web experiences. We explore dimensions like performance, accessibility, capabilities, and developer experience in depth. Success at Scale goes beyond beginner material to cover the pragmatic approaches required to tackle these challenges in the real world.

About the Author

Addy Osmani is an engineering leader working on Google Chrome. He leads Chrome’s Developer Experience organization, helping reduce the friction for developers to build great user experiences.

Technical Details
  • ISBN: 978-3-910835-00-9 (print)
  • Quality hardcover, stitched binding, ribbon page marker.
  • Free worldwide airmail shipping from Germany starting in fall 2023.
  • eBook available for download in fall 2023 as PDF, ePUB, and Amazon Kindle.
  • Pre-order the book.
Community Matters ❤️

Producing a book takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful community. A huge shout-out to Smashing Members for the kind, ongoing support. The eBook is and always will be free for Smashing Members as soon as it’s out. Plus, Members get a friendly discount when purchasing their printed copy. Just sayin’! ;-)

More Smashing Books & Goodies

Promoting best practices and providing you with practical tips to master your daily coding and design challenges has always been (and will be) at the core of everything we do at Smashing.

In the past few years, we were very lucky to have worked together with some talented, caring people from the web community to publish their wealth of experience as printed books that stand the test of time. Heather and Steven are two of these people. Have you checked out their books already?

Understanding Privacy

Everything you need to know to put your users first and make a better web.

Add to cart $44

Touch Design for Mobile Interfaces

Learn how touchscreen devices really work — and how people really use them.

Add to cart $44

Interface Design Checklists

100 practical cards for common interface design challenges.

Add to cart $39

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[The Impact Of Agile Methodologies On Code Quality]]> https://smashingmagazine.com/2023/05/impact-agile-methodologies-code-quality/ https://smashingmagazine.com/2023/05/impact-agile-methodologies-code-quality/ Thu, 25 May 2023 11:00:00 GMT As software development continues to evolve, so too do the methodologies and approaches used to create it. In recent years, Agile methodologies have gained widespread adoption as a modern approach to software development, with a focus on flexibility, collaboration, and delivering working software in short increments. This is a key differentiator when it comes to other development workflows.

One of the key benefits of Agile methodologies is its impact on the quality of the code that ships. Code quality is an essential aspect of software development, as high-quality code is critical to ensure the reliability, maintainability, and scalability of any software, website, or application.

Overview Of Agile Methodologies

Agile methodologies are a set of software development approaches that prioritize flexibility, collaboration, and delivering working software in short increments. Agile methodologies aim to improve the quality of the software by allowing for frequent feedback, continuous improvement, and adaptation to changing requirements.

The Agile Manifesto, created in 2001 by a group of software developers who wanted to find a better way of developing software, outlines the core values and principles of Agile methodologies. These values include prioritizing individuals and interactions over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, and responding to change rather than following a concrete, long-term plan.

Agile methods break down projects into small and manageable units called sprints. Sprints are completed by cross-functional and self-organizing teams in a short period of time, usually two to four weeks. During each sprint, the team works on a specific set of tasks, and at the end of the sprint, they review their work, evaluate customer satisfaction, and identify areas for improvement. Because each sprint is focused on a specific set of tasks, the team can quickly pivot and adjust their approach if they receive new information or feedback from customers or stakeholders. This results in faster turnaround times and a more responsive development process which is essential for creating high-quality software that meets the needs of the end users.

There are several Agile methodologies that teams can choose from to develop software in a more flexible and iterative way.

  • Scrum: This is perhaps the most popular Agile methodology. It involves a small team of developers working together in short sprints to deliver a working product incrementally. Each sprint typically lasts for 2–4 weeks.
  • Kanban: This methodology focuses on continuous delivery and improving workflow efficiency. Work is broken down into smaller pieces and tracked on a visual board, and team members pull work items as they are ready to work on them. If you’ve used a Trello board before, then you know exactly how this works. Other apps, like Notion, offer similar features.
  • Extreme Programming (XP): XP is a methodology that emphasizes software quality and customer satisfaction. It involves practices such as pair programming, test-driven development, and continuous integration.
  • Lean Development: This methodology aims to reduce waste and increase efficiency in the development process. It involves continuous improvement and a focus on delivering value to the customer.
  • Crystal: This methodology is designed for small teams working on projects with a high degree of uncertainty. It involves frequent communication, regular feedback loops, and an emphasis on collaboration.
How Agile Methodologies Can Impact Code Quality

Code quality is one of the most essential aspects of any development process, as it directly impacts the success of the product. Agile methodologies are designed to prioritize a customer-centric approach by breaking features down into smaller, manageable pieces of functionality. This allows for more frequent releases of working code that can be tested and reviewed, helping teams deliver high-quality software that meets customer needs. Here are some practical ways in which Agile methodologies help improve code quality during development:

  • Prioritizing simplicity and efficiency.
    Agile methodologies prioritize simplicity and efficiency in software development. This means that developers are encouraged to write code that is not only functional but also easy to understand, test, debug, maintain, and modify. The goal is to create a codebase that is clean and simple, which can help reduce the potential for bugs and errors.
  • Encouraging modularization.
    The Agile process promotes the modularization of code. By breaking code down into smaller, modular components, developers can create code that is more flexible and reusable. This can save time and effort in the long run by reducing the need for repetitive or verbose code, and optimizing the performance of each component reduces overall processing time, resulting in a more efficient codebase. Agile also breaks features down into smaller, more manageable pieces, often referred to as user stories or epics. This approach allows development teams to focus on delivering small, working pieces of functionality that can be tested and validated before being integrated into the larger codebase, while also enabling them to respond quickly to changing requirements or feedback.
  • Improving readability.
    It’s important that code is legible and understood across the team, as it affects not only the developer who wrote the code but also other developers who may need to modify or maintain the code in the future. Agile methodologies help developers focus on writing code that is self-documenting and easy to understand by promoting the use of clear and concise coding practices such as self-descriptive naming conventions and avoiding complex code structures.
  • Test-Driven Development (TDD).
    TDD involves writing tests for the code before writing the code itself, which helps ensure that the code is well-structured and easy to read; a short sketch of this test-first cycle follows this list. The method emphasizes continuous feedback and improvement, as developers regularly receive feedback on their work and have opportunities to make improvements as they go. By receiving feedback early in the development process, developers can address issues and make changes to their code before they become bigger problems.
  • Continuous integration.
    This is a development practice that involves frequently integrating code changes from multiple developers into a single shared codebase. With continuous integration, code is automatically compiled, tested, and validated, which helps to catch issues early on in the development process. This approach ensures that code is always in a releasable state, which ultimately helps to improve code quality and reduce the risk of bugs or errors.
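To make the test-driven development practice above more concrete, here is a minimal sketch of the red-green-refactor cycle in TypeScript. The `formatPrice` function and the Jest-style test are invented for illustration and are not taken from any particular project.

```typescript
// formatPrice.test.ts (written first, the "red" step): the test describes the
// behavior we want before any implementation exists, so it fails initially.
import { formatPrice } from "./formatPrice";

test("formats a number as a USD price with two decimals", () => {
  expect(formatPrice(44)).toBe("$44.00");
  expect(formatPrice(4.5)).toBe("$4.50");
});

// formatPrice.ts (written second, the "green" step): just enough code to make
// the test pass; any later cleanup happens with the test acting as a safety net.
export function formatPrice(amount: number): string {
  return `$${amount.toFixed(2)}`;
}
```

In a continuous integration setup, a test like this runs automatically on every push, so regressions are caught before the change is merged.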

Overall, Agile methodologies can help developers write better code by promoting continuous code feedback and improvement while prioritizing simplicity and efficiency. By following these principles, developers can create code that is more efficient, maintainable, and robust, ultimately resulting in a better end product.

Key Principles Of Agile Development

At its core, Agile methodologies value individuals and their interactions over following strict processes and tools. This means that communication and collaboration between team members are prioritized to ensure everyone is working towards the same goals.

These processes are governed by a set of guiding principles that help the development team to create software that is tailored to the customer’s needs while ensuring high-quality delivery.

  • Customer satisfaction is the top priority.
    The goal of Agile development is to create software that meets the needs of the customer. This means that the customer is involved in every step of the process, from planning to testing.
  • Teamwork is essential.
    Cross-functional teams that work together to complete tasks are a core principle. This means that everyone on the team has a role to play, and everyone works together to achieve the same goal.
  • Flexibility is key.
    Everything about Agile development is designed to be flexible and adaptable. This means that the team can change course if needed, and the development process can be adjusted based on feedback from the customer.
  • Communication is critical.
    Open and honest communication between team members and the customer is encouraged. Everyone should feel empowered to share their ideas, concerns, and feedback.
  • Iterative development.
    Agile development involves breaking the development process down into smaller, more manageable pieces. By working on one sprint at a time, the team can make progress quickly and efficiently.
  • Continuous improvement.
    This means that the team is always looking for ways to improve the development process and make it more effective.
Prioritizing Collaboration And Communication

Effective collaboration and communication are crucial in any team-oriented project, and Agile methodologies place a particular emphasis on these values.

Prioritizing collaboration and communication ensures that everyone involved in the project is working towards the same goals and that any issues or concerns can be addressed quickly and effectively.

When collaboration and communication are prioritized, team members are encouraged to share their expertise and insights, which can lead to more creative and innovative solutions.

In an Agile environment, team members work closely together, and there is often a high level of interdependence between different areas of the project. If one team member is struggling or working in isolation, it can have a ripple effect on the rest of the team and ultimately impact the success of the project. Collaborating with other developers can help identify issues in the code that may not have been noticed otherwise. For example, another developer may notice a potential security vulnerability or identify a bug the original developer missed. Here are some of the key ways to ensure this:

  • Encourage cross-functional teams.
    Bringing together individuals with different skills and expertise can lead to stronger communication between business owners and the technical team that produces the product. I remember a time when I was working on a project with my team, and we divided the work based on each person’s strengths. This approach allowed everyone to contribute their best work to the project.
  • Break down silos.
    Silos refer to a situation where different teams or departments within an organization work in isolation from each other, without much communication or collaboration. Silos can lead to several negative outcomes, such as a lack of transparency, duplication of effort, and a slower development process. Eliminating barriers between individuals and teams would help foster collaboration by allowing individuals to share their skills and expertise.
  • Hold regular check-ins and feedback sessions.
    Scheduling consistent check-ins and feedback sessions can help ensure everyone is aligned on priorities and goals. I’ve found that this approach helps keep everyone motivated and focused on the end goal.
  • Use proper communication channels.
    Utilizing appropriate communication channels can increase the transparency and visibility of the project. In my experience, using tools like instant messaging (like Slack) and video conferencing (like Zoom) has helped facilitate collaboration and information sharing, particularly in a remote team environment.
  • Hold dedicated “Ask Me Anything” (AMA) sessions.
    AMA sessions can help frontline managers understand the rationale behind the approach and become comfortable with empowering their teams and giving up control. I remember a time when my team participated in one of these sessions, and it helped us better understand the benefits of Agile methodology because it put everyone on the same page and made everyone more confident in the overall direction.

Failing to prioritize collaboration and communication can have serious consequences for an Agile project. Miscommunications and misunderstandings can lead to delays, missed deadlines, and even project failure. Team members may become demotivated or disengaged if they feel they are working in isolation or not being heard. In the worst-case scenario, the lack of collaboration and communication can lead to a breakdown in the project team, which can be difficult to recover from.

Refactoring And Code Reviews

Refactoring refers to the process of improving the internal structure of code without changing its external behavior. It is done to enhance code readability, maintainability, and performance. On the other hand, code review is the process of examining code to identify issues or defects that may affect its quality, security, or functionality.

Refactoring

Refactoring is the process of restructuring existing code without changing its external behavior. It should be done frequently in Agile projects, often in the middle of a sprint, to keep the codebase clean and avoid accumulating technical debt. Here are some steps on how to carry out refactoring in Agile (a short before-and-after sketch follows the list):

  • Identify the parts of the codebase that need refactoring.
  • Discuss with the team why refactoring is necessary and the benefits it can bring.
  • Prioritize the refactoring tasks based on their impact on the project.
  • Break down the refactoring tasks into small, manageable chunks.
  • Refactor the code while ensuring that it still passes all the tests.
  • Get feedback from the team and stakeholders on the refactored code.
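To ground these steps, here is a small before-and-after sketch in TypeScript. The discount rules and the names (`getDiscount`, `LOYAL_CUSTOMER_YEARS`, and so on) are invented for the example; the point is that the external behavior stays identical while the structure becomes easier to read, so the existing tests keep passing.

```typescript
// Before: nested conditionals and magic numbers obscure the business rules.
function getDiscount(customerYears: number, orderTotal: number): number {
  if (customerYears > 5) {
    if (orderTotal > 100) {
      return 0.15;
    }
    return 0.1;
  }
  if (orderTotal > 100) {
    return 0.05;
  }
  return 0;
}

// After: same inputs, same outputs, but named constants and guard clauses
// state the rules directly, making the function easier to test and modify.
// (Kept as a separate name here only so both versions can sit in one file;
// in a real refactor the original name would stay the same.)
const LOYAL_CUSTOMER_YEARS = 5;
const LARGE_ORDER_TOTAL = 100;

function getDiscountRefactored(customerYears: number, orderTotal: number): number {
  const isLoyal = customerYears > LOYAL_CUSTOMER_YEARS;
  const isLargeOrder = orderTotal > LARGE_ORDER_TOTAL;

  if (isLoyal && isLargeOrder) return 0.15;
  if (isLoyal) return 0.1;
  if (isLargeOrder) return 0.05;
  return 0;
}
```

Reviewing a change like this alongside its passing tests gives the team confidence that nothing observable has changed.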

Code Review

A code review is the process of systematically reviewing code written by other team members. It aims to improve the code’s quality, find bugs, and ensure the code adheres to the team’s coding standards. Code reviews should happen early and often in Agile projects to ensure that the codebase is always of high quality. Here are some steps on how to carry out a code review in Agile (an annotated example follows the list):

  • Assign a team member to review the code written by another team member.
  • Review the code for readability, maintainability, and adherence to the coding standards.
  • Provide feedback on the code and suggest improvements.
  • Discuss the feedback with the code author and come up with a plan to address the issues.
  • Make sure that the code changes are reviewed again after they are implemented to ensure that they meet the desired quality standards.
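As an illustration of what such feedback can look like, here is a hypothetical TypeScript snippet with the kind of inline comments a reviewer might leave; both the function and the comments are invented for this example.

```typescript
// Hypothetical code under review, with reviewer feedback written as inline comments.
async function loadUsers(page: number) {
  // Review: consider validating `page`; negative or non-integer values produce a bad URL.
  const res = await fetch(`/api/users?page=${page}&limit=25`);
  // Review: 25 is a magic number; extract it into a named constant or make it a parameter.
  // Review: missing error handling; check `res.ok` before parsing so callers get a clear failure.
  const data = await res.json();
  // Review: add an explicit return type so consumers know the shape of the data being returned.
  return data;
}
```

Feedback like this works best when it is specific, actionable, and tied to the team’s agreed coding standards rather than personal preference.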

Overall, refactoring and code review are essential practices in Agile methodologies that help ensure the code is of high quality and meets the customer’s needs. By incorporating these practices into the development process, the team can improve collaboration, reduce technical debt, and deliver high-quality software faster.

Agile Compared To Traditional Workflows

Traditional workflows refer to development methodologies that follow a linear, sequential process, where each phase of development must be completed before moving on to the next phase, with a focus on ensuring that all requirements are clearly defined before development begins. Some examples of traditional workflows include the Waterfall model, the V-model, the Spiral model, and the Rational Unified Process. These methodologies are often referred to as “plan-driven” or “heavyweight” methodologies, as they involve extensive planning and documentation upfront, with less flexibility for changes during the development process.

Take a look at the Waterfall model, for example. This model, also known as the “classic life cycle model,” is based on a series of well-defined phases, with each phase depending on the successful completion of the previous one.

The phases of the Waterfall model typically include requirements gathering, design, implementation, testing, deployment, and maintenance. Once one phase is completed, the next phase begins, and there is no going back to the previous phase. This means that the Waterfall model follows a “top-down” approach, where each phase is dependent on the previous phase’s success. And, true to its name, the process resembles a waterfall.

One of the key characteristics of the Waterfall model is that it is heavily focused on planning and documentation. Before the development team begins coding, the project requirements and design specifications must be fully documented. This documentation is then used to guide the entire development process.

While the Waterfall model has been a popular development process for many years, it has several limitations. For instance, the linear and sequential nature of the model can be inflexible, making it challenging to incorporate changes and feedback throughout the development process. It also puts a lot of emphasis on up-front planning, which can be time-consuming and costly. Plus, we all know that even the best-laid plans don’t always go right.

As a result, many software development teams have shifted towards using Agile methodologies instead of the Waterfall model. Agile methodologies offer greater flexibility and collaboration, enabling teams to adjust their approach as they gather feedback and insights throughout the development process.

Here are some key differences between Agile methodologies and traditional workflows:

  • Flexibility: Agile is flexible and adaptable, while traditional workflows are rigid and structured.
  • Customer involvement: Agile prioritizes customer involvement and feedback throughout the development process, while traditional workflows involve the customer only at the end, when the final product is presented.
  • Team structure: Agile teams are cross-functional and collaborative, while traditional teams are specialized and isolated.
  • Testing: In Agile, testing occurs throughout the development process; in traditional workflows, it occurs at the end of the development cycle.

While traditional workflows may have some advantages, such as providing a clear roadmap and a structured approach, I believe Agile methodologies are better suited for today’s fast-paced, ever-changing software development landscape. Agile methodologies offer the flexibility and adaptability necessary to meet changing requirements and deliver high-quality software products.

Conclusion

In conclusion, adopting Agile methodologies can have a significant positive impact on code quality. By prioritizing collaboration and communication, implementing test-driven development, and regularly conducting code reviews and refactoring, development teams can ensure that the code they produce is high-quality, maintainable, and meets the customer’s needs.

It’s worth noting that Agile methodologies are not without their challenges, such as the potential for scope creep. You can imagine how a flexible process that encourages frequent collaboration and feedback could lead to a project growing more legs than it needs. That said, organizations that have adopted Agile methodologies report higher levels of customer satisfaction, faster time-to-market, and overall improved project success rates. As the industry continues to evolve, it’s likely that we will see more and more organizations embrace Agile methodologies to improve code quality and project outcomes.

]]>
hello@smashingmagazine.com (Sarah Oke Okolo)