What I Wish I Knew About Working In Development Right Out Of School — Smashing Magazine

Victoria Johnson began a career in front-end development upon graduating from college. Now, roughly one year later, she reflects back on the decisions she made to crack into the field and find her first full-time job. There are plenty of things, she says, she would have done differently had she known then what she knows now about what it takes to transition from school to front-end development in the real world. This is her story, and she’s sharing it to provide those who are just starting out with another beginner’s perspective.

My journey in front-end web development started after university. I had no idea what I was going into, but it looked easy enough to get my feet wet at first glance. I dug around Google and read up on tons of blog posts and articles about a career in front-end. I did bootcamps and acquired a fancy laptop. I thought I was good to go and had all I needed.

Then reality started to kick in. It started when I realized how vast of a landscape Front-End Land is. There are countless frameworks, techniques, standards, workflows, and tools — enough to fill a virtual Amazon-sized warehouse. Where does someone so new to the industry even start? My previous research did nothing to prepare me for what I was walking into.

Fast-forward one year, and I feel like I’m beginning to find my footing. By no means do I consider myself a seasoned veteran at the moment, but I have enough road behind me to reflect back on what I’ve learned and what I wish I knew about the realities of working in front-end development when starting out. This article is about that.

The Web Is Big Enough For Specializations

At some point in my journey, I enrolled myself in a number of online courses and bootcamps to help me catch up on everything from data analytics to cybersecurity to software engineering at the same time. These were things I kept seeing pop up in articles. I was so confused; I believed all of these disciplines were interchangeable and part of the same skill set.

But that is just what they are: disciplines.

What I’ve come to realize is that being an “expert” in everything is a lost cause in the ever-growing World Wide Web.

Sure, it’s possible to be generally familiar with a wide spectrum of web-related skills, but it’s hard for me to see how to develop “deep” learning of everything. There will be weak spots in anyone’s skillset.

It would take a lifetime masterclass to get everything down-pat. Thank goodness there are ways to specialize in specific areas of the web, whether it is accessibility, performance, standards, typography, animations, interaction design, or many others that could fill the rest of this article. It’s OK to be one developer with a small cocktail of niche specialties. We need to depend on each other as much as any Node package in a project relies on a number of dependencies.

Burnout And Imposter Syndrome Are Real

My initial plan for starting my career was to master as many skills as possible and start making a living within six months. I figured if I could have a wide set of strong skills, then maybe I could lean on one of them to earn money and continue developing the rest of my skills on my way to becoming a full-stack developer.

I got it wrong. It turned out that I was chasing my tail in circles, trying to be everything to everyone. Just as I’d get an “a-ha!” moment learning one thing, I’d see some other new framework, CSS feature, performance strategy, design system, and so on in my X/Twitter feed that was calling my attention. I never really did get a feeling of accomplishment; it was more a fear of missing out and that I was an imposter disguised as a front-ender.

I continued burning the candle at both ends to absorb everything in my path, thinking I might reach some point at which I could call myself a full-stack developer and earn the right to slow down and coast with my vast array of skills. But I kept struggling to keep up and instead earned many sleepless nights cramming in as much information as I could.

Burnout is something I don’t wish on anyone. I was tired and mentally stressed. I could have done better. I engaged in every Twitter space or virtual event I could to learn a new trick and land a steady job. Imagine that: even with my busy schedule, I still paused everything to sit through hours of online events. I had an undying thirst for knowledge but needed to channel it in the right direction.

We Need Each Other

I had spent so much time and effort consuming information with the intensity of a firehose running at full blast that I completely overlooked what I now know is an essential asset in this industry: a network of colleagues.

I was on my own. Sure, I was sort of engaging with others by reading their tutorials, watching their video series, reading their social posts, and whatnot. But I didn’t really know anyone personally. I became familiar with all the big names you probably know as well, but it’s not like I worked or even interacted with anyone directly.

What I know now is that I needed personal advice every bit as much as more technical information. It often takes the help of someone else to learn how to ride a bike, so why wouldn’t it be the same for writing code?

Having a mentor or two would have helped me maintain balance throughout my technical bike ride, and now I wish I had sought someone out much earlier.

I should have asked for help when I needed it rather than stubbornly pushing forward on my own. I was feeding my burnout more than I was making positive progress.

Start With The Basics, Then Scale Up

My candid advice, drawn from my experience, is to start by learning the front-end fundamentals. HTML and CSS are unlikely to go away. I mean, everything renders as HTML at the end of the day, right? And CSS is used on 97% of all websites.

The truth is that HTML and CSS are big buckets, even if they are usually discounted as “basic” or “easy” compared to traditional programming languages. Writing them well matters for everything. Sure, go ahead and jump straight to JavaScript, and it’s possible to cobble together a modern web app with an architecture of modular components. You’ll still need to know how your work renders and ensure it’s accessible, semantic, performant, cross-browser-supported, and responsive. You may pick those skills up along the way, but why not learn them up-front when they are essential to a good user experience?

So, before you click on yet another link extolling the virtues of another flavor of JavaScript framework, my advice is to start with the essentials:

  • What is a “semantic” HTML element?
  • What is the CSS Box Model, and why does it matter?
  • How does the CSS Cascade influence the way we write styles?
  • How does a screen reader announce elements on a page?
  • What is the difference between inline and block elements?
  • Why do we have logical properties in CSS when we already have physical ones?
  • What does it mean to create a stacking context or remove an element from the document flow?
  • How do certain elements look in one browser versus another?

The list could go on and on. I bet many of you know the answers. I wonder, though, how many you could explain effectively to someone beginning a front-end career. And, remember, things change. New standards are shipped, new tricks are discovered, and certain trends will fade as quickly as they came. While staying up-to-date with front-end development on a macro level is helpful, I’ve learned to integrate specific new technologies and strategies into my work only when I have a use case for them and concentrate more on my own learning journey — establish a solid foundation with the essentials, then progress to real-life projects.

Progress is a process. May as well start with evergreen information and add complexity to your knowledge when you need it instead of drinking from the firehose at all times.

There’s A Time And Place For Everything

I’ll share a personal story. I spent over a month enrolled in a course on React. I even had to apply for it first, so it was something I had to be accepted into — and I was! I was super excited.

I struggled in the class, of course. And, yes, I dropped out of the program after the first month.

I don’t believe struggling with the course or dropping out of it is any indication of my abilities. I believe it has a lot more to do with timing. The honest truth is that I thought learning React before the fundamentals of front-end development was the right thing to do. React seemed to be the number one thing that everyone was blogging about and what every employer was looking for in a new hire. The React course I was accepted into was my ticket to a successful and fulfilling career!

My motive was right, but I was not ready for it. I should have stuck with the basics and scaled up when I was good and ready to move forward. Instead of building up, I took a huge shortcut and wound up paying for it in the end, both in time and money.

That said, there’s probably no harm in dipping your toes in the water even as you learn the basics. There are plenty of events, hackathons, and coding challenges that offer safe places to connect and collaborate with others. Engaging in some of these activities early on may be a great learning opportunity to see how your knowledge supports or extends someone else’s skills. It can help you see where you fit in and what considerations go into real-life projects that require other people.

There was a time and place for me to learn React. The problem is I jumped the gun and channeled my learning energy in the wrong direction.

If I Had To Do It All Over Again…

This is the money question, right? Everyone wants to know exactly where to start, which classes to take, what articles to read, who to follow on socials, where to find jobs, and so on. The problem with highly specific advice like this is that it’s highly personalized as well. In other words, what has worked for me may not exactly be the right recipe for you.

It’s not the most satisfying answer, but the path you take really does depend on what you want to do and where you want to wind up. Aside from gaining a solid grasp on the basics, I wouldn’t say your next step is jumping into React when your passion is web typography. Both are skill sets that can be used together but are separate areas of concern that have different learning paths.

So, what would I do differently if I had the chance to do this all over again?

For starters, I wouldn’t skip over the fundamentals like I did. I would find opportunities to enhance my skills in those areas, like taking FreeCodeCamp’s Responsive Web Design course or recreating designs from the Figma community in CodePen to practice thinking strategically about structuring my code. Then, I might move on to the JavaScript Algorithms and Data Structures course to level up my basic JavaScript skills.

The one thing I know I would do right away, though, is to find a mentor whom I can turn to when I start feeling as though I’m struggling and falling off track.

Or maybe I should have started by learning how to learn in the first place. Figuring out what kind of learner I am and familiarizing myself with learning strategies that help me manage my time and energy would have gone a long way.

Oh, The Places You’ll Go!

Front-end development is full of opinions. The best way to navigate this world is by mastering the basics. I shared my journey, mistakes, and ways of doing things differently if I were to start over. Rather than prescribing you a specific way of going about things or giving you an endless farm of links to all of the available front-end learning resources, I’ll share a few that I personally found helpful.

In the end, I’ve found that I care a lot about contributing to open-source projects, participating in hackathons, having a learning plan, and interacting with mentors who help me along the way, so those are the buckets I’m organizing things into.

Open Source Programs


Developer Roadmaps


Whatever your niche is, wherever your learning takes you, just make sure it’s yours. What works for one person may not be the right path for you, so spend time exploring the space and picking out what excites you most. The web is big, and there is a place for everyone to shine, especially you.

Smashing Editorial
(gg, yk, il)

Weekly News for Designers № 718

What Removing Object Properties Tells Us About JavaScript
Removing properties in JavaScript might not sound thrilling, yet various methods exist to accomplish this task.

Protomaps Open-Source World Map
A new, free, open source map of the world, deployable as a single static file on cloud storage.

Ideas for Image Motion Trail Animations
Some examples for mouse or touch responsive animations where images are shown along the path of the user motion.

2023 Design Collaboration Report
370+ designers across different industries contribute their thoughts on design collaboration

The Grumpy Designer Ponders What It Means To ‘Learn’ AI
What should web designers be learning about AI? Do we need to learn anything at all? The Grumpy Designer has a few unscientific ideas to share with you.

CSS Findings From Photoshop Web Version
Ahmad Shadeed dives into the CSS of the new web version of Photoshop.

The Three Cs
When serving and storing files on the web, there are a number of different things we need to take into consideration: Concatenate, Compress, Cache.

Introduction to Web Sustainability
Learn how you can contribute to building a greener and more sustainable web.

Scroll-Driven State Transfer

Best Photographer Logo Templates in 2023
A collection of templates for creating a stunning logo for photographers, or inspiration for designing your own.

Design Books for Non-Designers
A collection of books to give non-designers a better understanding of design.

Solid.js Creator Outlines Options to Reduce JavaScript Code
Ryan Carniato, the mind behind Solid.js, believes it’s time for a leaner JavaScript. Here’s his guide to trimming the code.

Burn Your Toast
When using a toast component, the key question is: Is the content necessary?

An Actionable And Reliable Usability Questionnaire With Only 7 Items: Inuit — Smashing Magazine

Inuit (short for “Interface Usability Instrument”) is a new questionnaire you can use to assess the usability of your user interface. It has been designed to be more diagnostic than existing usability instruments such as SUS, and for use with machine learning, all while asking fewer questions than other questionnaires. This article explores how and why Inuit was developed and why we can be sure that it actually measures usability, and reliably so.

A lot of contemporary usability evaluation relies on easily measurable and readily available metrics like conversion rates, task success rates, and time on task, even though it’s questionable how well these are suited for reliably capturing a concept as complex as usability in its entirety.

The same holds for user experience. When an instrument is used to measure usability, e.g., in controlled user studies or via live intercepts, it’s often the simple single ease question, which is generally not a bad choice, but has its limits.

Note: For more information on usability evaluation, you can check the article “Current Practice in Measuring Usability: Challenges to Usability Studies and Research” by Kasper Hornbæk and “Growth Marketing Considered Harmful” by Maximilian Speicher.

Ultimately, when you intend to precisely and reliably measure the usability of a digital product, there’s no way around a scientifically well-founded instrument or, in everyday terms, a “questionnaire.” The most famous one is probably SUS, the System Usability Scale, but there are also some others around. Two examples are UMUX, the Usability Measure for User Experience, and SUMI, the Software Usability Measurement Inventory.

To join this party, in this article, we introduce Inuit (the Interface Usability Instrument), a new usability questionnaire. We will share how and why it was developed and how it’s different from the questionnaires mentioned above.

To immediately cut to the chase: With a scale from 1 (“completely disagree”) to 5 (“completely agree”), Inuit looks as follows. The parts in square brackets can be adapted to your specific interface, e.g., products in an online shop, articles on a news website, or results in a search engine.

Q1 I found [the information] I was looking for.
Q2 I could easily understand [the provided information].
Q3 I was confused using [the interface].
Q4 I was distracted by elements of [the interface].
Q5 Typography and layout added to readability.
Q6 There was too much information presented in too little space.
Q7 [My desired information] was easily reachable.

The Inuit metric (a score between 0 and 100, analogous to SUS) can then be calculated as follows:

(Q1 + Q2 + Q5 + Q7 – Q3 – Q4 – Q6 + 11) * 100/28

Why 11 and 28?

We have seven items rated on a scale from 1 to 5, but for some (Q1, Q2, Q5, Q7), 5 is the best rating, and for some (Q3, Q4, Q6), 1 is the best rating. Hence, we need to subtract the latter from 6 when we add up everything: Q1 + Q2 + Q5 + Q7 + (6-Q3) + (6-Q4) + (6-Q6) = Q1 + Q2 + Q5 + Q7 – Q3 – Q4 – Q6 + 18. This gives us an overall score between 7 and 35.
Now, we want to normalize this to a score between 0 and 100. For this, we first subtract 7 for a score between 0 and 28: Q1 + Q2 + Q5 + Q7 – Q3 – Q4 – Q6 + 18 – 7 = Q1 + Q2 + Q5 + Q7 – Q3 – Q4 – Q6 + 11. Finally, for a score between 0 and 100, we need to divide everything by 28 and multiply by 100: (Q1 + Q2 + Q5 + Q7 – Q3 – Q4 – Q6 + 11) * 100/28.
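That arithmetic is easy to get wrong by hand, so here is a minimal sketch of the score calculation in code. The function name and the input validation are ours, not part of the official instrument:

```python
def inuit_score(q1, q2, q3, q4, q5, q6, q7):
    """Inuit score (0-100) from seven ratings on the 1-5 scale.

    Q1, Q2, Q5, and Q7 are positively worded items; Q3, Q4, and Q6
    are negatively worded, which is why they enter with a minus sign.
    """
    for rating in (q1, q2, q3, q4, q5, q6, q7):
        if not 1 <= rating <= 5:
            raise ValueError("each rating must be between 1 and 5")
    return (q1 + q2 + q5 + q7 - q3 - q4 - q6 + 11) * 100 / 28


# Best possible answers (5 on positive items, 1 on negative ones) → 100.
print(inuit_score(5, 5, 1, 1, 5, 1, 5))  # 100.0
# Worst possible answers → 0; all-neutral 3s land exactly at 50.
print(inuit_score(1, 1, 5, 5, 1, 5, 1))  # 0.0
```

Note that the negative items are passed exactly as answered; the reversal is handled by the minus signs and the +11 offset derived above.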

You might have noticed that compared to, e.g., SUS with 10, Inuit consists of only 7 questions. Apart from that, it has two more advantages:

  • Inuit has been designed to provide training data for machine-learning models that can then automatically predict usability from user interactions or web analytics data.
  • Its items (i.e., the questions) are diagnostic, at least to a certain degree. This means you see what’s wrong with your interface simply by looking at the results from the questionnaire. Have a bad rating for readability (Q5)? You should make the text in your interface more readable.

Now, at this point, you can either accept all this and simply get going with Inuit to measure the usability of your digital product (we’d be delighted). Or, if you’re interested in the details, you’re very welcome to keep reading (we’d be even more delighted).

“So, Why Did You Develop Yet Another Usability Questionnaire?”

You probably already guessed that Inuit wasn’t developed just for fun or because there aren’t enough questionnaires around. But to answer this, we have to reach back a bit.

In 2014, Max was a Ph.D. student busy working on his dissertation. The goal of it all was to find a way to determine the usability of an interface automatically from users’ interactions, such as what they do with the mouse cursor and how they scroll, rather than making participants in a user study fill out pages and pages of questions. Additionally, the cherry on top should be to also automatically propose optimizations for the interface (e.g., if user interactions suggest the interface is not readable, make the text larger).

To be able to achieve this, however, it was first necessary to determine if and how well certain interactions (mouse cursor movements, mouse cursor speed, scrolling behavior, and so on) predict the usability — or rather its individual aspects — of an interface. This meant collecting training data through users’ interactions with an interface and their usability assessments of that interface. Then, one could investigate how well (combinations of) tracked interactions predict (aspects of) usability using regression and/or machine-learning models. So far, so good, as far as the theory is concerned.

In practice, one important decision that would have huge implications for the project was how to collect the usability assessments mentioned above when gathering the training data. Since usability is a latent variable, meaning it can’t be observed directly, a proper instrument (i.e., a questionnaire) is necessary to assess it. And the most famous one is undeniably the System Usability Scale (SUS). It should’ve been an obvious choice, shouldn’t it?

A closer look showed that, while SUS would be perfectly well suited to train statistical models to infer usability from interactions, it simply wasn’t the perfect fit. This was the case mainly for two reasons:

  1. First, many questions contained in SUS (“I think that I would like to use this system frequently,” “I found the various functions in this system were well integrated,” and “I felt very confident using the system,” among others) describe the effects of good or bad usability — users feel confident because the system is well usable and so on. But they don’t describe the aspects of usability that cause them, e.g., bad understandability. This makes it difficult to know what should be done to make it better. What exactly should we change to make users feel more confident? The questions are not diagnostic or “actionable” and require further qualitative research to uncover the causes of bad ratings. It’s the same for UMUX and SUMI.
  2. Second, with just 10 items, SUS is already a very small questionnaire. However, the fewer items, the less friction and the more motivated users are to actually answer. So, is ten really the minimum, or would a proper questionnaire with fewer items be possible?

With these considerations in mind, Max went on and ultimately developed Inuit, the instrument presented in the introduction. He ended up with seven items that were better suited for the needs of his Ph.D. project and more actionable than those of SUS.

“How do you know this actually measures usability?”

Inuit was developed in a two-step process. The first step was a review of established guidelines and checklists with more than 250 rules for good usability, which were filtered based on the requirements above and resulted in a first draft for the new usability instrument. This draft was then discussed and refined in expert interviews with nine usability professionals.

The final draft of Inuit, with the seven factors informativeness (Q1), understandability (Q2), confusion (Q3), distraction (Q4), readability (Q5), information density (Q6), and reachability (Q7), was evaluated using a confirmatory factor analysis (CFA).

CFA is a method for assessing construct validity, which means it “is used to test whether measures of a construct are consistent with a researcher’s understanding of the nature of that construct” or “to test whether the data fit a hypothesized measurement model.”
— Wikipedia

Put very simply, by using a CFA, we can check how well a theory matches the practice. In our case, the “construct” or “hypothesized measurement model” (theory) was Inuit, and the data (practice) came from a user study with 81 participants in which four news websites were evaluated using an Inuit questionnaire.

In a CFA, there are various metrics that show how well a construct fits the data. Two well-established ones are CFI, the comparative fit index, and RMSEA, the root mean square error of approximation — both range from 0 to 1.

For CFI, 0.95 or higher is “accepted as an indicator of good fit” (Wikipedia). Inuit’s value was 0.971. For RMSEA, “values less than 0.05 are good, values between 0.05 and 0.08 are acceptable” (Kim et al.). Inuit’s value was 0.063. This means our theory matches the practice, or Inuit’s questions do indeed measure usability.

Case Study #1

Inuit was first put into practice in 2014 at Unister GmbH, which at that time ran travel search engines like fluege.de and reisen.de, and was developing an entirely new semantic search engine. The results page of this search engine, named BlueKiwi, was evaluated in a user study with 81 participants using Inuit.

In this first study, the overall score averaged across all participants was 59.9. Ratings were especially bad for informativeness (Q1), information density (Q6), and reachability (Q7). Based on these results, BlueKiwi’s search results page was redesigned.

Among other things, the number of advertisements was reduced (better reachability), search results were displayed more concisely (better informativeness), and everything was more clearly aligned and separated (better information density). See the figure below for the full list of changes.

Two variants of the search results page, before and after adjustments were made based on the Inuit results
Adjustments made to the search results page based on the Inuit results. (Image source: www.researchgate.net) (Large preview)

After the redesign, we ran another study, in which the overall Inuit score increased to 67.5 (+11%), with improvements in every single one of the seven items.

“Why Wait 9 Years To Write This Article?”

There were various factors at play. One was what’s called the research–practice gap. It’s often difficult for academic work to gain traction outside the academic community. One reason for this is that work that is part of a Ph.D. project is often a little neglected after it has served its purpose — being published in a research paper, included in a thesis, and presented at a Ph.D. defense — which is pretty much exactly what happened to Inuit.

Case Study #2

Another factor, however, was that we wanted to put the instrument into practice in a real-world industry setting over a longer period of time first, and we got the chance to do that only relatively recently.

We ran a longitudinal study over a period of almost two years in which we ran quarterly benchmarks of multiple e-commerce websites using both SUS and Inuit, with a total of 6,368 users. The results of these benchmarks were included in the dashboard of product KPIs and regularly shared with the team of 6 product managers. After roughly two years of conducting and sharing benchmarks, we interviewed the product managers about their use of the data, challenges, wishes, and potential for improvement.

What a high-level analysis showed was that all of the product managers, in one way or another, described Inuit as more intuitive to understand, less abstract, and more actionable compared to SUS when looking at both instruments as a whole.

They found most of Inuit’s items more specific and easier to interpret and, therefore, more relevant from a product manager’s perspective. SUS, in contrast, was described as, e.g., “good for [the] overall score” and the bird’s eye view. Virtually all product managers, however, wished for even more specific insights into where exactly on the website usability problems occur. One suggested building an optimal instrument by combining certain items from both SUS and Inuit.

As part of the analysis, we computed Cronbach’s α for Inuit (based on 3,190 answers) as well as SUS (based on 3,178 answers).

Cronbach’s α is a statistical measure for the internal consistency of an instrument, which can be interpreted as “the extent to which all of the items of a test measure the same latent variable [i.e., usability].”
— Wikipedia

Values of 0.7 or above are generally deemed acceptable. Inuit reached a value of 0.7; SUS a value of 0.8.
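For the curious, Cronbach’s α is straightforward to compute from a respondents-by-items matrix of ratings. The sketch below is our own illustration; the function name and sample data are made up and are not the study data:

```python
def cronbach_alpha(rows):
    """Cronbach's alpha for a list of rows, one per respondent,
    each holding that respondent's rating for every item."""
    k = len(rows[0])   # number of items (7 for Inuit, 10 for SUS)

    def variance(xs):  # sample variance, n - 1 in the denominator
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

    item_variances = [variance([row[i] for row in rows]) for i in range(k)]
    total_variance = variance([sum(row) for row in rows])
    return k / (k - 1) * (1 - sum(item_variances) / total_variance)


# Respondents who rate every item identically yield alpha = 1.0
# (up to floating-point rounding), the maximum internal consistency.
print(cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3]]))
```

This is the standard formula; the 0.7 and 0.8 figures above are the same statistic computed over thousands of real answers.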

To top things off, Inuit and SUS showed a considerable (Pearson’s r = 0.53) and highly significant (p < 0.001) correlation when looking at overall scores aggregated over the different e-commerce websites and tasks the study participants had to complete.

In layman’s terms: when the SUS score goes up, the Inuit score goes up; when the SUS score goes down, the Inuit score goes down. Both questionnaires measure the same thing (with a very rough approximation of Inuit ≈ 0.6 × SUS + 17).
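To make the relationship concrete, here is a small sketch that computes Pearson’s r and the least-squares line from paired scores. The scores below are invented for illustration and placed exactly on the reported line; they are not the study data:

```python
def pearson_and_fit(xs, ys):
    """Pearson's r between two paired score lists, plus the
    least-squares line ys ≈ slope * xs + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / (n - 1)
    var_x = sum((x - mean_x) ** 2 for x in xs) / (n - 1)
    var_y = sum((y - mean_y) ** 2 for y in ys) / (n - 1)
    r = cov / (var_x * var_y) ** 0.5
    slope = cov / var_x
    intercept = mean_y - slope * mean_x
    return r, slope, intercept


# Hypothetical paired scores lying exactly on Inuit = 0.6 * SUS + 17,
# so r, slope, and intercept recover 1.0, 0.6, and 17 respectively.
sus = [50, 60, 70, 80]
inuit = [47, 53, 59, 65]
r, slope, intercept = pearson_and_fit(sus, inuit)
print(round(r, 2), round(slope, 2), round(intercept, 2))
```

Real study data would of course be noisier, which is why the observed r was 0.53 rather than 1.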

Since these first results were so encouraging, we decided to write this general, more practice-oriented overview article about Inuit now. A deeper analysis of our big dataset, however, is yet to be conducted, and our current plan is to report findings in much more detail separately.

“Why Do You Think Inuit Is Better Than SUS?”

We don’t think so (or that it’s better than any scientifically founded usability instrument, for that matter). There are many ways to measure the same latent variable, in this case, usability. Both questionnaires, SUS and Inuit, have proven that they can measure the usability of an interface. Still, they were developed in different contexts and with different goals and requirements in mind.

So, to address the question of when it’s better to use which, as true researchers, we have to say “it depends” (annoying, isn’t it?).

SUS, which has been around since the 1990s, is probably the most popular and well-established usability instrument. It’s been studied and validated over and over, which Inuit, of course, can’t compete with yet and still has a long way to go. If the goal is to compare scores at a high level and even tap into public benchmark numbers for orientation, SUS would be preferable.

However, by design, Inuit has two advantages over SUS:

  1. Inuit has only seven items and is still a “complete” usability instrument.
    30% fewer questions can be a major factor when it comes to motivating users to fill out a questionnaire. Assuming that a big part of remote online studies is done quickly in passing and with short attention spans, designing efficient studies that generate reliable output and minimize effects like participant fatigue can be a major challenge for researchers.
  2. Inuit’s items have been specifically designed to be more actionable for practitioners and lend themselves better to manual analysis and inferring potential interface optimizations.
    As we’ve learned in our second case study, talking to actual product managers revealed that for them, the results of a usability assessment should always be as specific as possible. Comparing the items of both, Inuit points to more concrete areas to improve than SUS, which was perceived as rather vague.

“Where Can I Use Inuit?”

Generally, in any scenario that involves an interface and a task, whether defined by you or by the users themselves. In the studies mentioned and described above, we could demonstrate that Inuit works well in controlled as well as natural-use settings and with news websites, search engines, and e-commerce shops.

Now, of course, we can’t evaluate Inuit with every possible kind of interface, and that is part of the reason for this article. Inuit has been around and publicly available since 2014, and we have no idea if and how it has been used by other researchers. If you have used it, please let us know; we’d be thrilled to hear about your experience and results.

The questions presented at the beginning of the article are relatively focused on finding information because that’s where Inuit is historically coming from and because most of the things users do involve the finding of information of some kind. (Please keep in mind that information doesn’t have to be text. On the contrary, most information is non-textual.) But those questions can be adapted as long as they still reflect the underlying aspects of usability, which are informativeness, understandability, confusion, distraction, readability, information density, and reachability.

Say, for instance, you want to evaluate a module from an e-learning course, e.g., in the form of an annotated video with a subsequent quiz. To accommodate the task at hand, Q1 could be rephrased to “I had all the information necessary to complete the module” and Q7 to “All the information necessary to complete the module was easily reachable.”


There are plenty of usability questionnaires out there, and we have added a new one to the pool — Inuit. Why? Because sometimes you find yourself in a situation where none of the existing questionnaires is the perfect fit. Inuit has been designed to be more diagnostic than existing usability instruments such as SUS, and for use with machine learning, all while asking fewer questions than other questionnaires. So, if any of this seems relevant to your use cases or context of work, why not give it a try?

From a scientific and statistical point of view, a confirmatory factor analysis (CFA) has demonstrated that Inuit’s questions do indeed measure usability. On top of that, it is internally consistent and correlates well with SUS, based on data from a large-scale, longitudinal user study.
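As an aside, “consistent” here refers to internal consistency, which is commonly summarized with Cronbach’s alpha. Purely as an illustration of what that statistic measures (this is not the authors’ actual analysis code), it can be computed in a few lines of JavaScript:

```javascript
// Sample variance of an array of numbers.
function variance(xs) {
  const mean = xs.reduce((a, b) => a + b, 0) / xs.length;
  return xs.reduce((a, x) => a + (x - mean) ** 2, 0) / (xs.length - 1);
}

// Cronbach's alpha for `responses`: one row per participant,
// one column per questionnaire item.
function cronbachAlpha(responses) {
  const k = responses[0].length;
  // Variance of each item across participants.
  const itemVariances = Array.from({ length: k }, (_, i) =>
    variance(responses.map((row) => row[i]))
  );
  // Variance of the participants' total scores.
  const totalVariance = variance(
    responses.map((row) => row.reduce((a, b) => a + b, 0))
  );
  const sumOfItemVariances = itemVariances.reduce((a, b) => a + b, 0);
  // Alpha approaches 1 when items vary together, i.e., measure one construct.
  return (k / (k - 1)) * (1 - sumOfItemVariances / totalVariance);
}
```

Values of roughly 0.7 and above are conventionally read as acceptable internal consistency.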

Note: If you want to dive deeper into the science behind Inuit, e.g., how exactly the items/questions were chosen, you can read the corresponding research paper “Inuit: The Interface Usability Instrument,” which was presented at the 2015 HCI International Conference. If you want to learn more about how Inuit can be used to train machine-learning models, read “Ensuring Web Interface Quality through Usability-Based Split Testing.” And finally, if you want to see how Inuit can be used as the basis for a tool that automatically proposes optimizations for an interface, you can refer to “S.O.S.: Does Your Search Engine Results Page (SERP) Need Help?,” which was presented at the 2015 ACM Conference on Human Factors in Computing Systems.


  • “SUS: A ‘Quick and Dirty’ Usability Scale,” John Brooke (Usability evaluation in industry)
  • “Confirmatory and exploratory factor analysis for validating the phlegm pattern questionnaire for healthy subjects,” Kim, Hyunho, Boncho Ku, Jong Yeol Kim, Young-Jae Park, and Young-Bae Park (Evidence-Based Complementary and Alternative Medicine)
  • SUMI Questionnaire Homepage, Jurek Kirakowski
  • “10 Things to Know about the Single Ease Question (SEQ),” Jeff Sauro (MeasuringU)
  • “Measuring Usability: From the SUS to the UMUX-Lite,” Jeff Sauro (MeasuringU)
  • “Ensuring web interface quality through usability-based split testing,” Speicher, Maximilian, Andreas Both, and Martin Gaedke (International Conference on Web Engineering)
  • “Inuit: the interface usability instrument,” Speicher, Maximilian, Andreas Both, and Martin Gaedke (Design, User Experience, and Usability: Design Discourse)
  • “S.O.S.: Does Your Search Engine Results Page (SERP) Need Help?,” Speicher, Maximilian, Andreas Both, and Martin Gaedke (Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems)
  • “Conversion rate & average order value are not UX metrics,” Maximilian Speicher (UX Collective)
  • “So, How Can We Measure UX?,” Maximilian Speicher (ACM Interactions)
  • “Growth Marketing Considered Harmful,” Maximilian Speicher
  • “Current Practice In Measuring Usability: Challenges to Usability Studies and Research,” Kasper Hornbæk
  • Latent variable, Wikipedia
  • Confirmatory factor analysis, Wikipedia
  • Internal consistency, Wikipedia
Smashing Editorial

8 CSS & JavaScript Snippets for Creating Notification UIs

Website notifications have become commonplace. We see them in eCommerce, membership communities, and social media. They’re hard to escape.

And they’re also important for users. Notifications provide details regarding orders, messages, and other account information.

Thus, it’s interesting to see how notification UIs are evolving. Designers are using their creativity and adding personality. They’re proving that notifications can have both form and function.

With that, let’s take a look at eight unique notification UIs. They use CSS and (in some cases) JavaScript to go beyond the basics.
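Most of these UIs share the same underlying logic: a queue of messages with a cap on how many are visible at once. As a rough, framework-agnostic sketch (the class and method names here are illustrative, not taken from any of the pens below):

```javascript
// A minimal notification queue: at most `maxVisible` messages are shown
// at once; any overflow waits until an earlier notification is dismissed.
class NotificationQueue {
  constructor(maxVisible = 3) {
    this.maxVisible = maxVisible;
    this.visible = []; // notifications currently on screen
    this.pending = []; // notifications waiting for a free slot
  }

  // Queue a message; show it immediately if there is room on screen.
  push(message) {
    if (this.visible.length < this.maxVisible) {
      this.visible.push(message);
    } else {
      this.pending.push(message);
    }
  }

  // Dismiss the oldest visible notification and promote the next pending one.
  dismiss() {
    this.visible.shift();
    if (this.pending.length > 0) {
      this.visible.push(this.pending.shift());
    }
  }
}
```

In a real UI, push and dismiss would also add and remove DOM nodes — which is where the CSS animations in these examples come in — but the queueing logic stays the same.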

Neon Notification System with Hover Effects by CleverYeti

This notification UI is perfect for websites using dark mode. The look is elegant while also being easy to digest. We’ll give bonus points for including some smooth CSS animation effects.

See the Pen Neon notification system by CleverYeti

Vertical Timeline Notifications by Alina N.

Here’s a neat way to organize multiple notifications. This timeline layout makes each item stand out. The spacing between entries keeps things easy to follow. The search field is also a welcome addition.

See the Pen Vertical Timeline – Notifications by Alina N.

Notification Badge Animation by Valery Alikin

Want to add an element of fun? This bright little UI resembles a “Minions” character. And if the colors don’t get your attention, the splatter animation will do the job.

See the Pen Notification Badge Animation by Valery Alikin

Project Notifications by Landon Messmer

Perhaps the “bell” icon is a bit overused. But in this case, it’s more about what’s inside. This notifications panel is beautiful and functional. It features a clean look and some handy features.

See the Pen Project Notifications by Landon Messmer

Error, Success, Warning, and Alert Notifications by Swarup Kumar Kuila

This notification UI brings a simple yet high-tech aesthetic. With bright colors and simple animation, it’s sure to draw attention. It’s powered by CSS, with an assist from the popular Font Awesome icon library.

See the Pen error, success, warning and alert Messages by Swarup Kumar Kuila

Info, Warning, and Alert Site Components by Dom Jay

These notification components are a fit for long-form content. Use them to display important details in an online course. Or as a call-out box within a blog post. Alerts aren’t just for user-specific info, after all.

See the Pen Site Component – Info/Warning/Alert by Dom Jay

Success, Error, Alert Flat Notifications by AbrarK

Click a button and watch as a colorful notification appears. This presentation uses a flat style reminiscent of Facebook and other popular services. It also features some slick animation.

See the Pen Flat Notify by AbrarK

Pop-Up Social Media Notification by Nooray Yemon

Here’s a highly stylized take on a notification UI. It shows that notifications can go beyond utility. They can also be a branding opportunity.

See the Pen Pop up social feed notification by Nooray Yemon

Serve Notice with These Awesome UI Examples

Notification UIs are now a staple of web design. They’re impacting all types of websites. Odds are that you’ll need to consider them in an upcoming project.

We can certainly stick with a generic look. But there’s an opportunity to do much more. CSS and JavaScript enable us to create a unique user experience. The examples above are just the tip of the iceberg.

Want to see even more outstanding notification UIs? Check out our CodePen collection!


The Grumpy Designer Ponders What It Means To ‘Learn’ AI

Artificial intelligence (AI) is just starting to impact society. But that hasn’t stopped people from making bold proclamations.

I’m particularly fond of the Doomsday scenarios. You know, the very darkest of sci-fi fantasies. The possibility of working for a machine sounds modern. I wonder how well it pays?

Then there’s the idea that we must “learn” AI. The term “prompt engineer” has been thrown around. This one sounds perfect for a grumpy designer! It also seems like a job with a short shelf life.

We used to write code to direct a computer. But now we tell it what to do. It’s less work for more money. Who wouldn’t want this job?

But seriously. What should web designers learn about AI? Do we need to learn anything at all? I have a few unscientific ideas. Just take them with a grain of salt. I haven’t run any of this by ChatGPT yet.

AI Prompts Test Your Communication Skills

The idea of a career in prompt engineering sounds strange. But I admit that the job requires some skill. You might have issues if you’re not a gifted communicator.

Generative code seems like an area of relevance. Large language models (LLMs) like ChatGPT and Bard can accept vague instructions. That will likely produce some basic code, which serves as a foundation to build upon.

But complex code needs clear, detailed instructions. Not everyone possesses this ability.

Just think of the clients who have trouble explaining a design feature. Do we expect these people to generate the next great WordPress plugin?

The results may suffer if a prompt is too light or wordy. LLMs will likely improve. But there will still be hurdles for those who struggle with clear communication.

Your primary language may also be a barrier. What if a model isn’t well-versed in your language? That’s going to make prompts more difficult.

Therefore, it’s wise to brush up on your written communication skills. They’ll come in handy.

It's important to clearly communicate your needs to AI tools.

Using APIs To Build AI Apps

Several AI tools either have or will release an application programming interface (API). The concept should be familiar to web designers. The likes of Google and social media providers have similar offerings.

You can tap into the power of ChatGPT to build an application. This is useful if your project needs one-off functionality. Or if you have a killer idea that will make billions. That may be far-fetched. It seems like most people are using it to generate text thus far.

Learning one of these APIs won’t apply to everybody. It’s more likely that we’ll use apps created by others. For instance, a WordPress plugin that adds a chatbot to your site.

Understanding how an API works is still valuable, though. And there’s a market for niche applications. Perhaps the real money will be in helping organizations implement AI.

You can use an API to add artificial intelligence capabilities to your application.
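To make this concrete, here’s a rough sketch of what calling such an API looks like in JavaScript. The endpoint, model name, and response shape follow OpenAI’s publicly documented chat completions API at the time of writing; treat them as assumptions and check the current docs before building on this:

```javascript
// Build the JSON body for a chat-completion request.
function buildChatRequest(prompt, model = "gpt-3.5-turbo") {
  return {
    model,
    messages: [{ role: "user", content: prompt }],
  };
}

// Send the request. Requires an API key from your provider account.
async function askModel(apiKey, prompt) {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildChatRequest(prompt)),
  });
  const data = await response.json();
  // The generated text lives in the first choice's message.
  return data.choices[0].message.content;
}
```

A chatbot plugin like the WordPress example mentioned above is, at its core, this one call wrapped in a user interface.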

When To Use AI Tools

It’s tempting to use AI for just about every task. Just like your mother told you, the machine knows best.

But the reality is that AI isn’t good at everything. It struggles with accuracy. It even hallucinates occasionally.

Learning a tool’s strengths and weaknesses is vital. The facts, figures, and code generated by AI may look great. But how do you know for sure?

Practicing blind faith in these tools is not recommended. If accuracy matters, take the time to double-check the results. You might want to regenerate a response as well. AI tools will often provide multiple answers for the same prompt.

Thus, treat AI like that shady friend you had in high school. Feel free to hang out together. But don’t rely on it for anything too important. It will only get you into trouble.

Experiment with AI tools to determine their strengths and weaknesses.

Learn How To Get What You Want From AI

Like any tool, AI has a learning curve. But there’s no need to study every aspect. It’s more important to learn how to produce the desired results.

Your communication skills are vital. And tools that understand your instructions also play a role. Your success depends on both items.

What if you struggle to write AI prompts? There are a lot of guides popping up that can help.

And you can do more than follow their examples. Pay close attention to the way each prompt is worded. Experiment with similar language in your virtual conversations.

It also helps to be a bit skeptical (grumpy, even). This will help you better understand the limitations of these tools. They don’t “know” everything. And, like the people they supposedly replace, they make mistakes.

The machines aren’t taking over yet. But it’s worth learning how to take advantage of them. You may end up as the more powerful being.


How To Choose Typefaces For Fintech Products: Our Best Practices Guide (Part 1) — Smashing Magazine


Fintech products are systems overloaded with many types of data: numerals, text, spreadsheets, and so on. Working with these products requires a high level of attention and responsibility from the designer, who becomes the intermediary between user and data. Daria shares her approach to digital product typography and reviews the key points to consider when choosing typefaces.

Hi! My name is Daria, and for the last two years, I’ve been working at Devexperts. I have experience creating various products, from crypto wallets to exchange administration platforms. Our target audience is diverse: from professional and non-professional brokers and novice and experienced traders to dealing staff and trade desk operators. All of them use financial tools daily, some with high intensity.

Continuous use of the product under high-tension conditions (which is common in the financial sphere) can cause eyestrain and general weariness. Consequently, users might make poor decisions and become inattentive to details, harming their investment portfolios. My goal, as a designer, is to prevent these drawbacks and make daily operations comfortable.

The designer’s main instrument for working with the product is the UI kit, a collection of reusable components, such as controls, color palettes, effects, and text styles. Typography is an important part of it, as it defines the text style library and conveys the most valuable and powerful instrument — the information — from platform to user.

To explain my approach to digital product typography, I invite you to go over each step, from research to implementation. Before diving deep, let’s cover the basic terms.

Basic Terminology Before We Begin

A typeface (or a font family) is a range of graphically similar fonts with a common visual idea behind them. For example, Helvetica is a typeface.

A font is part of a typeface. It is a collection of symbols that might differ in weight (Semibold/Bold/Black), might be upright or italic (Regular/Cursive/Italic), have various widths (Narrow/Wide), and so on.

A definition of a typeface/font family and a font
The definition of a typeface/font family and a font. (Image credit: Daria Karpenko) (Large preview)

Please keep in mind that these two terms are often interchangeable, even when used by type designers.

Now that we have our basic terms defined, let’s go over our suggestions for the workflow when creating or choosing a font for a fintech product.


Step 1: Detect Data Types And User Flows

If you’re a fintech designer in the trading sphere, you’ll mainly deal with spreadsheets. Usually, they represent fundamental page units with data on markets.

Other standard data media include product cards and forms for order entry, cancellation, adjustment, and so on. Product cards provide users with more detailed information on trading instruments, while forms allow users to interact with markets. These interactions are user flows.

What do all these different kinds of information have in common? They’re all displayed as texts. In this case, the core design values are the following:

  • An appropriate typeface,
  • Well-crafted typography,
  • A clear layout,
  • An accurate representation of information.
Examples of data-overloaded interface, exchange administration platform
Examples of data-overloaded interface, exchange administration platform. (Image credit: Devexperts) (Large preview)

In the picture above, you can see the example of a platform for exchange administrators with all typical types of data. A spreadsheet displays a list of trades and dozens of their parameters. At the back is an example of a trading instrument card (currency pair) with dynamic quotes that update in real time. The chart displays trade history with prices and volumes, and the Market Depth graph at the right shows the current state of the market — the volume of Buy and Sell orders and their prices.

Now that we have defined the most common types of data in fintech products, let’s zoom in and see what specific circumstances we need to consider in our product.

Step 2: Consider Your Requirements

Best design practices accommodate current requirements and consider potential scenarios of product evolution. When working on a fintech product, you should also consider aspects such as language support, special characters, and use conditions. Let’s have a look at each of these areas and highlight the most important points.

Language Support

Forecasting specific typographical cases at the start of a project might be a great investment for future development. For instance, product localization and translation to various languages is one such case. Should your product support one or multiple languages? If multiple, try to understand the average difference in word lengths.

Distribution of word lengths in various languages
Distribution of word lengths in various languages. (Source: www.ravi.io) (Large preview)

For example, German words are generally much longer than English ones. Compared to these two, French and Spanish words are somewhere in between. Consequently, French or German versions need more text space in design than the English one, which will cause layout changes.

Comparison of the same modal window for English and French language settings
Compare the same modal window for English and French language settings. (Image credit: Devexperts) (Large preview)

Many languages also use diacritics (additional typographical signs above and under the letters). These writing systems may require more space between lines to make texts more legible.

Spacing differences depending on the use of diacritics
Spacing differences depending on the use of diacritics. (Image credit: Daria Karpenko) (Large preview)

Logographic or logosyllabic writing systems are quite the opposite: unlike Latin-based languages, they are usually very compact. Few font families support both logographic and Latin-based writing systems, though. So, if you can’t find an appropriate font for your needs, you’ll have to pick two different typefaces that match visually.

Having information on supported writing systems and future localizations will allow you to choose a typeface with all the necessary languages. Contact the product owner or manager to determine which languages may appear in your product. This way, you will develop a design for all corner cases and create a visual system fit for all localizations.

Required Symbols

Another critical aspect to consider from the start is special characters. For example, in the fintech sphere, the designer deals with trading markets and often needs to use a variety of currency symbols, math formulas, and decimals.

USD symbols
USD symbols in our products. (Image credit: Devexperts) (Large preview)

There are plenty of ways to find out what glyphs a typeface includes. You can find this information on type foundries’ websites and typeface marketplaces. Check the type specimen to explore all supported features, glyphs, languages, and characteristics.

If you already have a typeface on your computer, it’s easy to check glyphs in Font Book on macOS or Character Map on Windows.

Below, we have an example from macOS. Select the font and switch to the repertoire preview mode to find an entire spreadsheet.

A spreadsheet with PT Serif Regular glyphs
A spreadsheet with PT Serif Regular glyphs. (Large preview)

You can also make use of a font-managing app. One of its perks is that it lets you organize your font library by characteristics. There are many such apps, so it should be easy to find one that suits your taste and budget. A good option might be Typeface, or FontBase if you’re looking for a free app.

Another challenge with glyphs awaits you in Figma. Unlike Adobe products, it doesn’t offer a glyph panel, which means you can’t see the full contents of a font or pick the symbol you need directly in Figma. However, you can copy the required glyph from Font Book or Character Map and paste it into your design.

Use Conditions

When you start a new project, consider what type of information it represents and the most common use conditions. For example, if it’s an article, it’s mostly text information with extensive accented quotes, or the text consists of short paragraphs broken up by photos. We presume that the user will most likely read it in a calm setting, as that is the most comfortable way to concentrate on a long text, and will need about half an hour to finish it.

Let’s review a drastically different case. We’re designing a product for professional use overloaded with data comprising mostly repetitive figures. Users operate various gadgets to work with the data all day in varying conditions. They can get tired and lack attention but must analyze the data and react swiftly.

When we work with fintech products, we mainly deal with the second scenario. Our average user deals with numbers and important operations: investors analyze markets and trade securities, while trade desk operators manage orders. Even a slight hiccup in how the data is presented may cause a serious mistake in operations. A carefully selected typeface and a precise layout will help you support your users and make their workflow convenient.

Step 3: Discover Typeface Options

Understanding typeface features will help you decide on the most suitable fonts. For this reason, we’ll first break down the categories and characteristics of typefaces.


Typeface classification has been a subject of discussion among typographers for a long time. (For further information, you can read “The trouble with font classifications” and “Talking about type: from Aristotle to Arial.”) For years, the common classification adopted by the Association Typographique Internationale (ATypI) was the Vox-ATypI system, invented by Maximilien Vox in 1954. This system is based on two characteristics of type: visual traits and the historical period of its appearance. It included nine categories that were later expanded to 11 by ATypI.

Vox-ATypI Classification system
Vox-ATypI Classification system. (Image credit: Daria Karpenko) (Large preview)

However, typography has developed dramatically, and this system no longer reflects modern type design, as it simply ignores a big part of contemporary typefaces. Most modern typefaces mix characteristics from different historical periods and fall into several categories at once. The endorsement of the system was withdrawn in 2021, and the association is currently working on a new system that would meet the needs of modern typographers.

The Vox system might be useful for exploring type design history if you want to dive deep into this topic. For the daily use of a product designer, however, classification according to visual characteristics rather than historical periods is more helpful. The division into four abstract meta-groups, based on the Vox system with some additions from modern typeface design, will help you classify typefaces clearly enough.

Classification system by Allan Haley
Classification system by Allan Haley. (Image credit: Daria Karpenko) (Large preview)

Note: If you’d like to dig further, you may want to check other classification systems.

Typeface Proportions

Typefaces fall into two groups, proportional and monospaced, according to the ratio of proportions within the font.

Proportional types have varying widths of letters according to tradition, rules, and the ideas of their designers.

Monospaced fonts, in contrast, comprise characters of equal width. The design of glyphs in a monospaced typeface may vary: wide characters, such as m or w, may get narrow proportions to fit the width, while thin letters such as i or l may have a lot of empty space around them or long serifs to fill it. In any case, the width of each letter, including the space around it, will be equal.

Proportional typefaces are natural and traditionally used for daily needs. Monospaced fonts are usually used in specific cases when it is essential to set text in accurate columns with each symbol below the other, such as tables with numerals, sales checks, programming code, and so on.

Comparison of proportional and monospaced fonts
Comparison of proportional and monospaced fonts. (Image credit: Daria Karpenko) (Large preview)


An essential measurement we constantly work with is the size of a typeface, or its x-height. It’s the height of lowercase letters, measured from the baseline to the mean line, which equals the height of the character x. Cap height refers to the size of uppercase letters. Small caps are characters whose height is between lowercase and caps; they are a great instrument for particular cases.

Ascenders and descenders are the elements that go above the mean line and beyond the baseline. The long stem in h is an ascender, and the falling tail in y is a descender.

A counter is an enclosed space inside a letter. For example, in o’s or q’s, the white inner space is a counter.

Serifs are short strokes at the end of stems that differentiate Serif typefaces from Sans Serifs.

Visualized font parameters
Visualized font parameters. (Image credit: Daria Karpenko) (Large preview)

An aperture describes to what extent symbols are open. A large aperture means that letters such as c or s are open and have a lot of space between their strokes; when these symbols are tight and closed, the aperture is small. This characteristic affects legibility: a large aperture helps distinguish similar letters at small sizes, such as c and o.

Aperture examples
Aperture examples. (Image credit: Daria Karpenko) (Large preview)

Contrast is a difference between stroke thickness in vertical and horizontal stems.

Contrast examples
Contrast examples. (Image credit: Daria Karpenko) (Large preview)

These terms are basic but essential for understanding how the characteristics of a typeface affect its legibility. If you want to dig deeper into this world, check out the Monotype terms dictionary.

Step 4: Define The Typeface’s Purpose

All typefaces may be divided into “display” (or “headline”) and “text” groups, although some typefaces work well in both roles. (For more information, check “Text v. Display” and “Selecting Display Type: Getting Started.”) Both groups have their own character and tone of voice, but they’re meant to be used in different contexts and situations.

Scale of typefaces
Scale of typefaces. (Image credit: Daria Karpenko) (Large preview)

Display fonts are better suited for headings, accents, and other large-size settings. They often have tighter spacing and sophisticated shapes that fall apart and may cause visual noise at small sizes. But it never hurts to consider the circumstances of use: users might need more time to recognize a display font with complicated shapes, even at a large size.

High contrast, along with long ascenders and descenders, is also characteristic of display fonts.

Differences between text and display fonts from the same font family
Differences between text and display fonts from the same font family. (Image credit: Daria Karpenko) (Large preview)

Body text fonts, in contrast, have a simplified appearance and a high level of legibility at small sizes. When choosing a typeface for body text, go for one with low or no contrast: thin strokes cause visual vibration and negatively affect readability. A large aperture and open shapes are also a good choice, as they help keep similar letters, such as c and o, distinct.

Distinct shapes for similar letters, such as i, l, and I, also help readers tell them apart, especially without context (when used in tickers, codes, sets of symbols, and so on). Most text fonts have an enlarged x-height and counters, a slight difference in height between lowercase and caps, and short ascenders and descenders. This saves vertical space in lengthy texts and ensures their legibility.

Comparison of font families in the same point size
Comparison of font families in the same point size. (Image credit: Daria Karpenko) (Large preview)

If you have plenty of data and need to save horizontal space, a font with compact proportions may be a suitable solution. But avoid overly narrow fonts and consider a font size because you might need to enlarge it a little and add positive tracking, i.e., increased letter spacing (for further information about tracking, don’t forget to check Part 2).

Even with a readable body font, your design often needs strong visual accents. You can achieve this by differentiating fonts depending on the type of information; for example, you may use a serif for headers and a sans serif for body text. The most reliable solution, though, is to pick a single font family, since all its styles share the same shapes and proportions. For example, IBM Plex has a variety of styles and supports several writing systems.

IBM Plex font family
IBM Plex font family. (Image source: www.ibm.com) (Large preview)

A typeface’s name itself can tell you about its purpose. The terms “Text,” “Display,” “Compact,” and “Caption” in a font’s name will help you make the right choice.

In my projects, I usually use sans-serif fonts with low contrast for practical reasons. A sans serif doesn’t have a bright appearance or small details; consequently, it draws little attention to itself and reduces visual noise, making the text easier to read. As a result, the user quickly receives, understands, and processes information.

Prepare For Part 2!

This article reviewed the key points to consider when choosing typefaces. We also covered the main font parameters and started investigating how to choose the most suitable font for various scenarios.

The next part will be all about applying the fonts we chose. We’ll discuss how to work with texts and tables and what to pay attention to when handling special characters.

Another major topic will be readability improvement through the length of text lines, line spacing, letter spacing, and tracking.

We’ll also touch upon the topic of color contrast. It goes hand in hand with caring for users with all kinds of needs and work conditions.

Stay tuned!


Weekly News for Designers № 716


The Ultimate Low-Quality Image Placeholder Technique
Harry Roberts discusses whether Low-Quality Image Placeholders and LCP play nicely together.

The Problem With WordPress Is Positioning, Not Plugins
Geoff Graham explores a divide between WordPress.org and WordPress.com, shedding light on the blurred lines between open-source ideals and commercial interests.
The Problem With WordPress Is Positioning, Not Plugins

Animating Multi-Page Navigations with Browser View Transitions & Astro
A beginner-friendly guide that walks you through the use of the Browser View Transitions API with Astro for a smoother navigation experience.

Here’s What It Was like to Build a Website in the 90s
Building a website in the 90s was undeniably different. There were mistakes, but that’s OK, as the lessons learned have brought us to the present.

Photoshop is Now on the Web!
Bringing Photoshop to the web represents an enormous milestone in bringing highly complex and graphically intensive software to the browser.

Limit the Reach of Your Selectors with the CSS ‘@scope’ at-rule
Learn how to use @scope to select elements only within a limited subtree of your DOM.

Building Motion for the Web: Recipes for Design Success
Explore how to borrow ideas from the animation industry, adapt them to the unique challenges of the web, and find the perfect balance between creativity and efficiency.
Building Motion for the Web: Recipes for Design Success

How Long Does It Take to Build a Website?
We look at the many factors that can impact a web design project’s launch date and share tips for determining its length more accurately.
How Long Does It Take to Build a Website

Table of Contents: The Ultimate Design Guide
When designing a table of contents, carefully compare different placement and styling options to maximize usability.
Table of Contents: The Ultimate Design Guide

The Future of CSS: Easy Light-Dark Mode Color Switching with light-dark()
Learn how you can now use the CSS utility function named light-dark() to switch between Light and Dark modes.
The Future of CSS: Easy Light-Dark Mode Color Switching with light-dark

10 Principles for a Worthy Design Career
Dan Mall shares his freelance experience into ten pieces of advice for tackling complex tasks, preventing burnout, and fostering continuous learning.
Dan Mall 10 Principles for a Worthy Design Career

How to Build a 404 Page with the WordPress Site Editor
We show you how the WordPress Site Editor can help you build a custom 404 page. Don’t take your 404 page for granted!
How to Build a 404 Page with the WordPress Site Editor

mobilecn Ui Kit
A UI library of customizable and easy-to-use components for building mobile apps quickly.

An Anchored Navbar Solution
An Anchored Navbar Solution

The Frustrating State of the Word
The thoughts of William Bay on WordPress leadership, community, development process, and the resulting product lately.
The Frustrating State of the Word

Everything I Know About UX Research I First Learned From Lt. Columbo — Smashing Magazine

Working in the area of UX sometimes feels like a crime drama. Don’t believe it? Then take a look at these fun parallels between modern UX practices and a classic TV detective.

If you don’t know Lieutenant Columbo, I envy you. I wish I could erase my memory and watch this TV masterpiece for the first time again. Columbo, a Los Angeles homicide detective, became a cult character in American crime drama in the 1970s. Each episode reveals the murderer in the first minute, and the main mystery is how Columbo proves their guilt and distinguishes lies from the truth.

When I reflect back on this series, it becomes apparent that the UX area has so much in common with crime scene investigation: the truth is unknown, people tend to disguise their real needs, and you have to discover missing facts as soon as possible to build and launch something useful. I’ve never specialized in UX research, but it has been part of my job as a designer for years. When I started, we rarely had the luxury of a dedicated researcher on a team.

So, let’s see what we can learn from a classical fictional character and apply it in the UX area.

Lesson 1: Understate Your Role To Users

It’s not a secret that people behave differently in the vicinity of police, state officials, or management. Columbo understood that if a suspect or witness realized who he was, they would try to disguise or tweak facts (either consciously or subconsciously). That’s why our hero preferred to blend in and keep his position out of sight as long as possible.

Pictures of Columbo in two situations, during UX research and in a job interview, with speech balloons that read, “My name is Frank, and I am a researcher. Today we will talk about…”
Understate your role to users.

For instance, in the episode “By Dawn’s Early Light” (S4E3), the commandant of a military academy murders the chairman of the board. So, Columbo stays in the barracks for several days and talks with cadets informally until he exposes the killer.

Sometimes, this approach caused funny situations. In the episode “Negative Reaction” (S4E2), Columbo was mistaken for a hobo at St. Matthew’s Mission. The Lieutenant patiently accepted the nun’s care and ate a bowl of stew, and only when she offered a new raincoat to replace his beloved old one did he reveal his purpose.

UX research is no less challenging because we explore human behavior but inevitably influence the findings since we are humans, too. Designers often run the risk of receiving twisted information when they forget to tackle users’ fears and insecurity, for example:

  • Interviewees believe their boss sent you to assess their skills;
  • Users think you created this design, and now they try not to offend you;
  • Customers worry that you’ll judge their computer literacy.

Understating your official role gives you precious moments to talk with people more sincerely. In contrast, here is a perfect intro to annihilate research accuracy: “Hello! I’m a Senior UX Designer and Product Manager. Today, I’ll conduct a usability testing session and jobs-to-be-done interview to identify UX gaps in our design…” After hearing that, people would probably flood you with socially expected answers.

Instead, designers should keep their fancy titles to themselves. Try to start a usability testing session humbly, “My name is <…>, and I was asked to check whether this website is useful and clear to you.” Don’t make people think you designed it (even if you did).

And here is an intro phrase I recommend using for a user interview, “I’m a researcher, and today I’d like to ask you a couple of questions about <…>.” Give a simple description without redundant details that may scare people and increase tension.

Depending on the situation, you can even say, “I didn’t design this, so I won’t be offended if you criticize it; please be honest with your feedback!” But that walks a fine line between reducing bias and outright lying.

Lesson 2: “You Don’t Know My Boss…”

Lieutenant Columbo usually dealt with wealthy and mighty criminals who were sure they would go unpunished. So, he played the role of a “little man” and wasn’t ashamed of it. He realized that exposing his authority would only make people stay within their own shells. Not only did he hide his intellect, but he also encouraged others to feel superior towards him so that people behaved more freely and revealed their true motives.

A picture of Columbo looking messy, labeled “Conducting field research.” On the right side, Columbo wearing a suit, labeled “Presenting research results to the stakeholders.”
Research is not meant to show off.

Columbo looked messy — in a creased beige raincoat, with a cigar, driving an old Peugeot — and concealed his shrewd mind behind this slack appearance and sloppy communication manner. He often told naive stories about his wife and appeared henpecked:

Columbo: I’m a worrier. I mean, little insignificant details, I lose my appetite, I can’t eat. My wife, she says to me, “You know, you can really be a pain.”

Another quote is about the “strict” boss, although it’s apparent from the series that the Lieutenant was a self-organized expert:

Columbo: You’re a celebrity. Because of you, my boss, he won’t let me close up this case until I have covered everything. Every loose end gotta be tied up.

As a newbie designer, I was indoctrinated about the value of presentation skills, making a positive first impression, and the necessity of defending design decisions. However, later, these conventions played a cruel joke on me.

In UX research, a common misconception is that you should look confident and competent in front of users. Let me get this straight: conducting research is not the same as presenting designs to top management. During any research, the goal is to make people feel relaxed so that they tell you the truth. However, at a presentation, the main task is to assure everyone that your decision is well-informed and your input helps steer the business in the right direction.

Research is not meant to show off. You see a user for the first and probably the last time in your life; they won’t influence your career; they aren’t here to be impressed. Behave humbly while staying in control of the session. Yes, you may come across as an ordinary person, but it’ll pay off and bring more insights compared to “boss-subordinate” or “expert-noob” paradigms. I’m not saying one should literally look messy like Columbo. The idea is to blend in, for instance:

  • Match interviewees’ dress code (within reason, of course).
    Try not to appear much more official or extravagant than a person in front of you, and you’d better keep that creative “Helvetica” T-shirt and “You ≠ user” pin for a UX meetup.
  • Avoid design jargon or terminology you have to explain.
    However, a reasonable dose of your interviewees’ professional lingo will boost communication if you work on a specialized topic.
  • Behave neutrally but naturally.
    It means balancing impartiality and separation from the subject with normal human behavior and empathy (simply saying, not being a robot).

Lesson 3: Observe Users In Their “Natural Habitat”

We call this approach “user safari” nowadays, but Lieutenant Columbo had been practicing it long before it became mainstream among designers. If you want to understand your suspects (in our case, users), observe their behavior in their “natural habitat,” and don’t miss a chance to try users’ occupations yourself. It’s better to see once than to hear a thousand times, right?

Pictures of Columbo in different roles, such as a doctor, a sommelier, a photographer, a vagabond, and so on.
A researcher is a master of many trades.

For example, in the episode “Any Old Port in a Storm” (S3E2), a wine connoisseur kills his brother to prevent him from selling the family winery. Columbo had to turn into a sommelier enthusiast for a while to investigate this crime and recognize unusual evidence, which would have been overlooked without specialized knowledge.

The episode “Negative Reaction” (S4E2) features a talented photographer and Pulitzer Prize winner who kills his wife and blames her death on a failed kidnapping. Columbo gets a camera and learns the basic principles of photography to convict the criminal. The detective had absolutely no proof, but owing to the newly gained knowledge, he set a cunning trap so that the murderer gave himself away.

Now, back to UX research. Of course, we shouldn’t literally follow the TV series and buy expensive equipment just to step into users’ shoes. Fortunately, empathizing is much easier nowadays: think observation studies and contextual inquiries when you have access to users, or documentaries, YouTube blogs, and professional communities when you want to prepare to face real users and avoid surface-level questions.

For example, several years ago, I was preparing for interviews with drilling engineers — future users of a new app suite for drilling planning. So, I watched “Deepwater Horizon,” a U.S. movie about a historical oil spill disaster in the Gulf of Mexico. This movie was recommended by a subject matter expert on the client’s side; he told me it realistically showed a drilling rig in action. As a result, I understood the technical jargon and could use the interviews with engineers to figure out truly non-obvious facts, not Wikipedia-level basics.

Another vivid example is a project I heard about from my former colleagues, who conducted product discovery for a Middle East logistics company several years ago. So, during an on-site, the discovery team observed the actual work of delivery crews and eventually witnessed a problem that couriers didn’t dare to report to their superiors. The app was designed for European address conventions and didn’t consider Middle-Eastern reality. Couriers only simulated using the navigation feature because the app required it to proceed to the next step. Frankly, I don’t believe this could’ve been learned from interviewing users or workshops with the client’s management.

Lesson 4: “Uhh… Just One More Thing!”

I guess Columbo used this catchphrase in each of the 69 episodes. In some cases, the Lieutenant sounded like a narrow-minded, forgetful cop; sometimes, the question that followed “just one more thing” made a suspect worry. But what does it have to do with UX research?

A picture of Columbo next to a pie chart, where around 40 percent of insights come after all scripted questions, and the remaining 60 percent come after contextual questions.
“Uhh… Just one more thing!”

If we translate this phrase into modern language, we are talking about the skill of asking follow-up questions and improvising in pursuit of UX insights. Of course, our task in tech is way simpler than Columbo’s: we don’t have to provoke criminals to obtain irrefutable evidence for trial. But what detectives and UX folks share is the sense of valuable information and information buzz. This feeling pushes us to step aside from protocols and scripts and dig deeper.

“I have always found that plans are useless, but planning is indispensable.”
— Dwight Eisenhower

Even the best script for an interview, usability testing, or workshop won’t take into account all nuances.

In qualitative research, you cannot just read prepared questions out loud and call it a day; otherwise, the job would’ve already been outsourced to robots.

I learned that what you want to know doesn’t equal the questions you ask.

  • Research questions are something you want to learn to make better design decisions. You keep them secret from respondents; they are only for your team’s internal use. For example, Will they buy this app? What is their top problem? Why are we worse than our competitors? In Columbo’s terms, they are equivalent to “Who is the murderer?”
  • Interview questions are what you actually ask. They are formulated in a certain way because not every answer can be retrieved directly. For example, Please tell me about the last time you ordered grocery delivery. How often do you buy non-fiction books online? They resemble Columbo’s “What did you do after 10 PM last Friday?”

While research questions are agreed upon with the team in advance, interview questions are left to the researcher’s discretion. For example, in one case, you ask a single “Tell me about the last time…” question and get tons of data from a talkative and relaxed person. But another respondent will give you a tiny piece of a puzzle at a time, and you’ll need to ask more granular questions, “What did you order? How did you choose? What payment did you choose? Why this option?” and so on.
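One lightweight way to keep that split explicit when preparing a study (my own convention, not something the article prescribes) is to map each internal research question to the interview questions that probe it. All questions below are illustrative placeholders:

```python
# Hypothetical discussion guide: internal research questions mapped
# to the interview questions that are actually read aloud.
guide = {
    "What is their top problem?": [
        "Please tell me about the last time you ordered grocery delivery.",
        "Walk me through your most recent order, step by step.",
    ],
    "Why are we worse than our competitors?": [
        "What made you pick that service over the others?",
    ],
}

# Research questions stay internal to the team; respondents only hear
# the interview questions nested under them.
for research_q, interview_qs in guide.items():
    print(f"[internal] {research_q}")
    for q in interview_qs:
        print(f"  ask: {q}")
```

Keeping the mapping in one place makes it easy to check, mid-session, which research questions still lack evidence and to improvise follow-ups accordingly.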

Lesson 5: Don’t Take Words At Face Value

Why is “Columbo” so fun to watch? Because the Lieutenant always allows his suspects to justify themselves and compose plausible explanations in a naive attempt to ward off suspicion. I think the suspects should’ve kept silent instead of trying to divert Columbo’s investigation.

A dialogue between Columbo as an interviewer and a user: the user says he likes a feature; Columbo asks when the user last used it; the user replies that he actually hasn’t used it yet.
Don’t take words at face value.

The iconic dialog between Columbo and Paul Gerard shows how early one can recognize lies. The episode “Murder Under Glass” (S7E2) tells about a food critic who extorted money from restaurant owners in exchange for positive reviews and poisoned one of them for fear of exposure.

Paul Gerard: When did you first suspect me?
Columbo: As it happens, sir… about two minutes after I met you.
Paul Gerard: That can’t be possible.
Columbo: Oh, you made it perfectly clear, sir, the very first night when you decided to come to the restaurant directly after you were informed that Vittorio was poisoned.
Paul Gerard: I was instructed to come here by the police.
Columbo: And you came, sir.
Paul Gerard: Yes.
Columbo: After eating dinner with a man that had been poisoned. You didn’t go to a doctor. You came because the police instructed you. You didn’t go to a hospital. You didn’t even ask to have your stomach pumped. Mr. Gerard, that’s the damnedest example of good citizenship I’ve ever seen.

Surprisingly, this strongly relates to UX.

All people lie. Influential stakeholders try to push forward their ideas. Some people desire to appear more knowledgeable than they are. Others are afraid to share opinions if they don’t know how they’ll be used. You can also find yourself in the center of office politics when officially declared messages contradict actual goals.

According to classical UX doctrines, designers are called “user advocates” and broadcasters of the “user’s voice,” but that doesn’t mean we should listen to people indiscriminately.

If a person craves a feature but has zero examples of how something similar has helped them in the past, it might be an exaggeration. If a business owner says an app is successful but has only feedback from her colleagues, it may be overly optimistic. And so on. When we notice information discrepancies, the best choice is to continue asking questions, and then, maybe, your interlocutor will start to doubt their own words. For example,

Product owner: Hey, Ann! We need to have an export feature so that users can download nice-looking PDF reports.
Designer: Just for my understanding. Can you please explain the context of this feature idea?
Product owner: Well, I think it’s pretty clear. Export is a standard thing for engineering applications. Probably, there should be a button or icon above the dashboard; a user clicks, and then a PDF with our logo…
Designer: Jack, sorry for interrupting. I’m asking this not out of curiosity but because I want to get it right. If you remember the user interviews last month, engineers usually copy-paste data from the dashboard into a PowerPoint template with their company’s branding…
Product owner: That’s a very good question. I need to double-check it.

So, Columbo teaches us to trust but verify. Carefully listen to what you’re told, don’t show skepticism or suspicion, and continue asking questions until you reach the root cause of a problem.


Of course, the lessons I deduced from a TV series aren’t even close to comparable with mature research methodologies and UX culture. Unlike when I started my design career, today I see more and more dedicated researchers who take care of the insights that steer businesses in the right direction. So, I hope this article entertained you with unusual parallels between UX and fictional crime investigation.

If Lieutenant Columbo were a UX guru like Don Norman or Jakob Nielsen, he would probably give us the following advice:

  1. Don’t flash your fancy UX title without necessity.
  2. Don’t show off in front of users; this is not a job interview or top management presentation.
  3. Strive to observe users in context, in their “natural habitat.”
  4. Have plenty of contextual and follow-up questions up your sleeve.
  5. All people lie (often unintentionally). Double-check their words.


The Unpredictable Life of a Freelance Web Designer

I’ve been a freelance web designer since 1999. And I know where I’ll be most days. I’m usually here at my desk, plugging away at projects.

But that’s where the predictability ends. That’s because my to-do list is subject to change. One request from a client can disrupt everything. No matter how much I plan. My schedule is in a constant state of flux.

I’ve learned to accept the situation. Or have I? A recent tweet made me think about how unpredictable my days are. And I’m not the only one dealing with uncertainty. Other freelancers have shared their frustrations as well.

Sometimes web designers need to shift gears faster than a Ferrari. That’s just reality. So, how do we cope with it? And what can we do to lessen the need? Here are a few thoughts on dealing with an unpredictable life.

No Routine Is Safe

I love having a routine. I find security in knowing what I’ll be doing each day. Maybe a down-to-the-minute itinerary is boring, but comfort is the payoff.

But working with clients throws a wrench into your schedule. You can’t predict when they’ll need something. When they do, it can leave you scrambling.

Oddly enough, it seems like these requests come in bunches. For example, there are some clients I hear from once a year (if that). And yet there are days when I’ll receive messages from several of them. Maybe it has something to do with the alignment of the stars.

Sometimes their requests are a minor disruption. But others can quickly lead you down a rabbit hole. Troubleshooting a broken website is a classic example. This type of situation can quickly eat up chunks of your time.

This results in a domino effect. You’re suddenly behind schedule. And that thing you needed to get done today must wait until tomorrow. It’s a frustrating feeling, for sure.

A client emergency can disrupt your schedule

Adjust and Prioritize Your Projects

I’m far from perfect when dealing with disruptions. But I have learned a few lessons, too. Prioritizing projects is chief among them.

It’s important to consider how a request fits into your queue. Is it an emergency? Are you working on a tight deadline? How much revenue does your client generate?

Each of the above can help you determine the order of importance. For instance, a low-revenue client who needs a simple text change shouldn’t be a priority. That’s not to say you should ignore their needs. Just don’t drop everything you’re doing for them.

It’s also worth adjusting both your own and your client’s expectations. Setting aggressive deadlines is likely to blow up in your face. Therefore, add extra time when estimating a project. Do your best to prepare for the unexpected.

It’s an absolute must for solo freelancers. You don’t have a colleague to pick up the slack. Thus, give yourself room to breathe. Time lost to an emergency won’t be as big of a burden.

Prioritize tasks based on importance, client revenue, and deadlines

Dealing with the Ups and Downs

Unpredictability takes a mental toll on freelancers. It’s easy to feel like you can’t accomplish your goals. Frequent interruptions can grind progress to a halt.

You might become hesitant to book new projects. Making that commitment is difficult when you’re already struggling. Who wants to add fuel to the fire?

Learning to cope is a process. But several things can help.

First, take a moment to collect yourself when switching gears. Get away from your computer for a bit. You’ll be able to clear your head before starting something new.

It’s also worth looking at efficiency. Are there any workflow changes that will make things easier? For example, you might find an AI tool that helps you troubleshoot code. Getting things done faster may result in less stress.

Accept the reality of freelancing. You’re here to serve clients. And their needs won’t always be convenient. Therefore, take them as they come.

Finally, give yourself some grace. It’s OK to get frustrated. That’s part of the journey. However, don’t let it take over your life. You’ll find your way back to that to-do list in time.

Find healthy ways to cope with stress

Don’t Let It Go to Your Head

You never know what each day will bring. For web designers, that means our schedules can change in an instant. One email can lay waste to our best-laid plans.

Eliminating this unpredictability isn’t realistic. Thus, we must learn to adapt. Being clear-headed about it is your best weapon. Mindlessly rushing through the difficulties won’t help.

I can attest that some days are challenging. But you can learn to put them behind you. Perhaps a career in web design should come with a warning label: Your day may not go as planned.

A High-Level Overview Of Large Language Model Concepts, Use Cases, And Tools — Smashing Magazine

While AI remains a collective point of interest — or doom, depending on your outlook — it also remains a bit of a black box. What exactly is inside an AI application that makes it seem as though it can hold a conversation? This article discusses the concept of large language models (LLMs) and how they are implemented with a set of data to develop an application. Joas compares a collection of no-code and low-code apps designed to help you get a feel not only for how the concept works but also for what types of models are available to train AI on different skill sets.

Even though a simple online search turns up countless tutorials on using Artificial Intelligence (AI) for everything from generative art to making technical documentation easier to use, there’s still plenty of mystery around it. What goes inside an AI-powered tool like ChatGPT? How does Notion’s AI feature know how to summarize an article for me on the fly? Or how are a bunch of sites suddenly popping up that can aggregate news and auto-publish a slew of “new” articles from it?

It all can seem like a black box of mysterious, arcane technology that requires an advanced computer science degree to understand. What I want to show you, though, is how we can peek inside that box and see how everything is wired up.

Specifically, this article is about large language models (LLMs) and how they “imbue” AI-powered tools with intelligence for answering queries in diverse contexts. I have previously written tutorials on how to use an LLM to transcribe and evaluate the expressed sentiment of audio files. But I want to take a step back and look at another way around it that better demonstrates — and visualizes — how data flows through an AI-powered tool.

We will discuss LLM use cases, look at several new tools that abstract the process of modeling AI with LLM with visual workflows, and get our hands on one of them to see how it all works.

Large Language Models Overview

Forgoing technical terms, LLMs are models trained on vast sets of text data. When we integrate an LLM into an AI system, we enable the system to leverage the language knowledge and capabilities the LLM developed through its own training. You might think of it as dumping a lifetime of knowledge into an empty brain, assigning that brain to a job, and putting it to work.

“Knowledge” is a convoluted term as it can be subjective and qualitative. We sometimes describe people as “book smart” or “street smart,” and they are both types of knowledge that are useful in different contexts. This is what artificial “intelligence” is created upon. AI is fed with data, and that is what it uses to frame its understanding of the world, whether it is text data for “speaking” back to us or visual data for generating “art” on demand.

Use Cases

As you may imagine (or have already experienced), the use cases of LLMs in AI are many and along a wide spectrum. And we’re only in the early days of figuring out what to make with LLMs and how to use them in our work. A few of the most common use cases include the following.

  • Chatbot
    LLMs play a crucial role in building chatbots for customer support, troubleshooting, and interactions, thereby ensuring smooth communications with users and delivering valuable assistance. Salesforce is a good example of a company offering this sort of service.
  • Sentiment Analysis
    LLMs can analyze text for emotions. Organizations use this to collect data, summarize feedback, and quickly identify areas for improvement. Grammarly’s “tone detector” is one such example, where AI is used to evaluate sentiment conveyed in content.
  • Content Moderation
    Content moderation is an important aspect of social media platforms, and LLMs come in handy. They can spot and remove offensive content, including hate speech, harassment, or inappropriate photos and videos, which is exactly what Hubspot’s AI-powered content moderation feature does.
  • Translation
    Thanks to impressive advancements in language models, translation has become highly accurate. One noteworthy example is Meta AI’s latest model, SeamlessM4T, which represents a big step forward in speech-to-speech and speech-to-text technology.
  • Email Filters
    LLMs can be used to automatically detect and block unwanted spam messages, keeping your inbox clean. When trained on large datasets of known spam emails, the models learn to identify suspicious links, phrases, and sender details. This allows them to distinguish legitimate messages from those trying to scam users or market illegal or fraudulent goods and services. Google has offered AI-based spam protection since 2019.
  • Writing Assistance
    Grammarly is the ultimate example of an AI-powered service that uses an LLM to “learn” how you write in order to make writing suggestions. But this extends to other services as well, including Gmail’s “Smart Reply” feature. The same is true of Notion’s AI feature, which is capable of summarizing a page of content or meeting notes. Hemingway’s app recently shipped a beta AI integration that corrects writing on the spot.
  • Code and Development
    This is the one that has many developers worried about AI coming after their jobs. It hit the commercial mainstream with GitHub Copilot, a service that performs automatic code completion. Same with Amazon’s CodeWhisperer. Then again, AI can be used to help sharpen development skills, which is the case of MDN’s AI Help feature.
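To demystify the sentiment-analysis use case above, here is a deliberately naive Python sketch. It is a non-LLM stand-in: where a real language model learns emotional cues from training data, this toy just counts words from hand-picked lists. All names are invented for illustration:

```python
# Toy sentiment scorer: a crude stand-in for what an LLM-based
# classifier does with learned representations instead of word lists.
POSITIVE = {"great", "love", "excellent", "helpful", "fast"}
NEGATIVE = {"bad", "hate", "broken", "slow", "confusing"}

def classify_sentiment(text: str) -> str:
    # Normalize: strip trailing punctuation and lowercase each word.
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_sentiment("I love how fast the new dashboard is!"))  # positive
print(classify_sentiment("The export feature is broken and slow"))  # negative
```

The gap between this and an LLM is exactly the point: the model replaces brittle word lists with language understanding generalized from its training data, so it can handle sarcasm, negation, and context that a lookup table cannot.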

Again, these are still the early days of LLM. We’re already beginning to see language models integrated into our lives, whether it’s in our writing, email, or customer service, among many other services that seem to pop up every week. This is an evolving space.

Types Of Models

There are all kinds of AI models tailored for different applications. You can scroll through Sapling’s large list of the most prominent commercial and open-source LLMs to get an idea of all the diverse models that are available and what they are used for. Each model is the context in which AI views the world.

Let’s look at some real-world examples of how LLMs are used for different use cases.

Natural Conversation
Chatbots need to master the art of conversation. Models like Anthropic’s Claude are trained on massive collections of conversational data to chat naturally on any topic. As a developer, you can tap into Claude’s conversational skills through an API to create interactive assistants.

Sentiment Analysis
Developers can leverage powerful pre-trained models like Falcon for sentiment analysis. By fine-tuning Falcon on datasets with emotional labels, it can learn to accurately detect the sentiment in any text provided.

Translation
Meta AI released SeamlessM4T, an LLM trained on huge translated speech and text datasets. This multilingual model is groundbreaking because it translates speech from one language into another without an intermediary step between input and output. In other words, SeamlessM4T enables real-time voice conversations across languages.

Content Moderation
As a developer, you can integrate powerful moderation capabilities using OpenAI’s API, which includes an LLM trained thoroughly on flagging toxic content for the purpose of community moderation.

Spam Filtering
Some LLMs are used to develop AI programs capable of text classification tasks, such as spotting spam emails. As an email user, the simple act of flagging certain messages as spam further informs AI about what constitutes an unwanted email. After seeing plenty of examples, AI is capable of establishing patterns that allow it to block spam before it hits the inbox.
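The feedback loop described above (flag examples, learn patterns, block future spam) can be sketched in a few lines. This toy word-counting filter is not how Gmail or any real LLM-based filter works; the class and method names are invented for illustration:

```python
from collections import Counter

# Toy spam filter that "learns" from messages a user has flagged,
# mirroring the feedback loop described above. Real systems use
# trained language models, not raw word counts.
class TinySpamFilter:
    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()

    def flag(self, message: str, is_spam: bool) -> None:
        # Each flagged message updates the word counts for its class.
        target = self.spam_words if is_spam else self.ham_words
        target.update(message.lower().split())

    def is_spam(self, message: str) -> bool:
        # Score a new message by how many of its words were seen
        # in previously flagged spam vs. legitimate mail.
        words = message.lower().split()
        spam_score = sum(self.spam_words[w] for w in words)
        ham_score = sum(self.ham_words[w] for w in words)
        return spam_score > ham_score

f = TinySpamFilter()
f.flag("win a free prize now", is_spam=True)
f.flag("meeting notes for tomorrow", is_spam=False)
print(f.is_spam("claim your free prize"))  # True
```

The principle scales up: every "Report spam" click is a labeled example, and with enough of them a model learns patterns (suspicious links, phrasing, sender details) far subtler than word counts.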

Not All Language Models Are Large

While we’re on the topic, it’s worth mentioning that not all language models are “large.” There are plenty of models with smaller sets of data that may not go as deep as ChatGPT 4 or 5 but are well-suited for personal or niche applications.

For example, check out the chat feature that Luke Wroblewski added to his site. He’s using a smaller language model, so the app at least knows how to form sentences, but it is primarily trained on Luke’s archive of blog posts. Typing a prompt into the chat returns responses that read very much like Luke’s writings. Better yet, Luke’s virtual persona will admit when a topic is outside the scope of its knowledge. A general-purpose LLM would give the assistant too much general information and would likely try to answer any question, regardless of scope. Researchers from the University of Edinburgh and the Allen Institute for AI published a paper in January 2023 (PDF) that advocates the use of specialized language models for more narrowly targeted tasks.
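To illustrate that scoping behavior, here is a toy Python sketch. It is not Luke's actual implementation and not a language model at all: it only answers when a question overlaps its tiny hand-written "archive" and otherwise admits the topic is out of scope. All topics and answers are invented:

```python
# Toy scoped assistant: answers only from its small "archive,"
# instead of guessing like a general-purpose LLM would.
ARCHIVE = {
    "mobile first": "Design for small screens before scaling up.",
    "touch targets": "Make tap areas large enough for fingers.",
}

def answer(question: str) -> str:
    q = question.lower()
    for topic, summary in ARCHIVE.items():
        if topic in q:
            return summary
    # Crucially, out-of-scope questions get a refusal, not a guess.
    return "That's outside the scope of what I was trained on."

print(answer("What does mobile first mean?"))
print(answer("Who won the World Cup?"))
```

A small specialized model behaves like a vastly more fluent version of this: strong within its niche, and, ideally, honest about its boundaries.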

So far, we’ve covered what an LLM is, common examples of how it can be used, and how different models influence the AI tools that integrate them. Let’s discuss that last bit about integration.

Many technologies come with a steep learning curve. That’s especially true of emerging tools that introduce new technical concepts, as I would argue is the case with AI in general. While AI is not a new term and has been studied and developed in various forms over decades, its entrance into the mainstream is certainly new, and that is what sparks the current buzz, including in the front-end development community, where many of us are scrambling to wrap our minds around it.

Thankfully, new resources can help abstract all of this for us. They can power an AI project you might be working on, but more importantly, they are useful for learning LLM concepts by removing advanced technical barriers. You might think of them as “low” and “no” code tools, like WordPress.com vs. self-hosted WordPress, or a visual React editor that is integrated with your IDE.

Low-code platforms make it easier to leverage large language models without needing to handle all the coding and infrastructure yourself. Here are some top options:


Chainlit

Chainlit is an open-source Python package that is capable of building a ChatGPT-style interface using a visual editor.

Source: GitHub.


  • Visualize logic: See the step-by-step reasoning behind outputs.
  • Integrations: Chainlit supports other tools like LangChain, LlamaIndex, and Haystack.
  • Cloud deployment: Push your app directly into a production environment.
  • Collaborate with your team: Annotate datasets and run team experiments.

And since it’s open source, Chainlit is freely available.


LLMStack

LLMStack visual editing interface
Source: LLMStack. (Large preview)

LLMStack is another low-code platform for building AI apps and chatbots by leveraging large language models. Multiple models can be chained together into “pipelines” for channeling data. LLMStack supports standalone app development but also provides hosting that can be used to integrate an app into sites and products via API or connected to platforms like Slack or Discord.

LLMStack is also what powers Promptly, a cloud version of the app with freemium subscription pricing that includes a free tier.


FlowiseAI

Source: FlowiseAI

What makes FlowiseAI unique is its drag-and-drop interface. It’s a lot like working with a mind-mapping app or a flowchart that stitches apps together with LLM APIs for a truly no-code visual editing experience. Plus, Flowise is freely available as an open-source project, and you can hook it up to any of the 330,000-plus models in the Hugging Face community.

Cloud hosting is a feature that is on the horizon, but for now, it is possible to self-host FlowiseAI apps or deploy them on other services such as Railway, Render, and Hugging Face Spaces.

Stack AI

Stack AI visual editing interface
(Large preview)

Stack AI is another no-code offering for developing AI apps integrated with LLMs. It is much like FlowiseAI, particularly the drag-and-drop interface that visualizes connections between apps and APIs. One thing I particularly like about Stack AI is how it incorporates “data loaders” to fetch data from other platforms, like Slack or a Notion database.

I also like that Stack AI provides a wider range of LLM offerings. That said, it will cost you. While Stack AI offers a free pricing tier, it is restricted to a single project with only 100 runs per month. Bumping up to the first paid tier will set you back $199 per month, which I suppose goes toward the costs of accessing a wider range of LLM sources. For example, FlowiseAI works with any LLM in the Hugging Face community. So does Stack AI, but it also gives you access to commercial LLM offerings, like Anthropic’s Claude models and Google’s PaLM, as well as additional open-source offerings from Replicate.


Voiceflow

Source: Voiceflow

Voiceflow is like Flowise and Stack AI in the sense that it is another no-code visual editor. The difference is that Voiceflow is a niche offering focused solely on developing voice assistant and chat applications. Whereas the other offerings could be used to, say, train your Gmail account for spam filtering, Voiceflow is squarely dedicated to developing voice flows.

There is a free sandbox you can use to test Voiceflow’s features, but using Voiceflow for production-ready app development starts at $50 per month for individual use and $185 per month for collaborative teamwork for up to three users.

“The Rest”

The truth is that no-code and low-code visual editors for developing AI-powered apps with integrated LLMs are being released all the time, or so it seems. Profiling each and every one is outside the scope of this article, though it could certainly make for a useful follow-up article.

That said, I have compiled seven other tools below. I have not had the chance to demo each and every one of them, so the information comes from their sites and documentation, but it should give you a wider set of tools to compare and evaluate for your own needs.

Dify
  • Description: “Seamlessly build & manage AI-native apps based on GPT-4.”
  • Example uses: Chatbots, natural language search, content generation, summarization, sentiment analysis.
  • Pricing: Free (open source).

re:tune
  • Description: “Build chatbots for any use case, from customer support to sales and more. Connect any data source to your chatbot, from your website to hyper-personalized customer data.”
  • Example uses: Customer service chatbots, sales assistants.
  • Pricing: $0–$399 per month, with lifetime access plans available.

Botpress
  • Description: “The first next-generation chatbot builder powered by OpenAI. Build ChatGPT-like bots for your project or business to get things done.”
  • Example uses: Chatbots, natural language search, content generation, summarization, sentiment analysis.
  • Pricing: Free for up to 1,000 runs per month, with additional runs priced in $25 increments.

Respell
  • Description: “Respell makes it easy to use AI in your work life. Our drag-and-drop workflow builder can automate a tedious process in minutes. Powered by the latest AI models.”
  • Example uses: Chatbots, natural language search, content generation, summarization, sentiment analysis.
  • Pricing: A free starter plan is available, with more features and integrations starting at $20 per month.

Superagent
  • Description: “Make your applications smarter and more capable with AI-driven agents. Build unique ChatGPT-like experiences with custom knowledge, brand identity, and external APIs.”
  • Example uses: Chatbots, legal document analysis, educational content generation, code reviews.
  • Pricing: Free (open source).

Shuttle
  • Description: “ShuttleAI is comprised of multiple LLM agents working together to handle your request. Starting from the beginning itself, they expand upon the user’s prompt, reason about the project, and define a plan of action.”
  • Example uses: Creating a social media or community platform; developing an e-commerce site or store; making a booking or reservation system; constructing a dashboard for data insights.
  • Pricing: Free, with custom pricing options while Shuttle Pro is in a beta trial.

Passio
  • Description: “Ready to use Mobile AI Modules and SDK for your brand. Our Mobile AI platform supports complete end-to-end development of AI-powered applications, enabling you to rapidly add computer vision and AI-powered experiences to your apps.”
  • Example uses: Food nutrition analysis, paint color detection, object identification.
  • Pricing: Free.

Example: AI Career Assistant With FlowiseAI

Let’s get a feel for developing AI applications with no-code tools. In this section, I will walk you through a demonstration that uses FlowiseAI to build an AI-powered career assistant app backed by an LLM. The point is less to promote no-code tools than to show how conveniently they visualize the way the components of an AI application are wired together and where LLMs fit in.

Why are we using FlowiseAI instead of the other no-code and low-code tools we discussed? I chose it primarily because it was the easiest one to demo without extra cost or configuration. FlowiseAI may very well be the right choice for your project, but please carefully evaluate the other options, as one of them may be more effective for your specific needs or pricing constraints.

I also chose FlowiseAI because it leverages LangChain, an open-source framework for building applications using large language models. LangChain provides components like prompt templates, LLMs, and memory that can be chained together to develop use cases like chatbots and question-answering.
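As a rough mental model, here is a pure-Python sketch of the chaining concept LangChain formalizes (this is not the actual LangChain API): a chain fills a prompt template, hands it to a language model, and records the exchange in memory.

```python
# Pure-Python sketch of the "chain" concept LangChain formalizes: fill a
# prompt template, pass it to a language model, keep the exchange in memory.
# fake_llm is an invented stand-in for a real model call.
class PromptTemplate:
    def __init__(self, template):
        self.template = template

    def format(self, **kwargs):
        return self.template.format(**kwargs)

class Chain:
    def __init__(self, prompt, llm):
        self.prompt, self.llm, self.memory = prompt, llm, []

    def run(self, **kwargs):
        filled = self.prompt.format(**kwargs)
        reply = self.llm(filled)
        self.memory.append((filled, reply))  # past turns, like buffer memory
        return reply

def fake_llm(prompt):
    return f"[model reply to: {prompt}]"

chain = Chain(PromptTemplate("Suggest a career for someone who likes {interest}."), fake_llm)
print(chain.run(interest="design"))
```

Every component we wire together on the FlowiseAI canvas below plays one of these roles: a prompt, a model, or a memory.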

To see the possibilities of FlowiseAI first-hand, we’ll use it to develop an AI assistant that offers personalized career advice and guidance by exploring a user’s interests, skills, and career goals. It will take all of these inputs and return a list of cities that not only have a high concentration of jobs fitting the user’s criteria but also provide a good “quality of life.”

These are the components we will use to piece together the experience:

  • Retrievers (i.e., interfaces that return documents given an unstructured query);
  • Chains (i.e., the ability to compose components by linking them together visually);
  • Language models (i.e., what “trains” the assistant);
  • Memory (i.e., storing previous sessions);
  • Tools (i.e., functions);
  • Conversational agent (i.e., determines which tools to use based on the user’s input).

These are the foundational elements that pave the way for creating an intelligent and efficient assistant. Here is a visual of the final configuration in Flowise:

A visual of the final configuration in Flowise, showing how the workflow is organized
(Large preview)

Install FlowiseAI

First things first, we need to get FlowiseAI up and running. FlowiseAI is an open-source application that can be installed from the command line.

You can install it with the following command:

npm install -g flowise

Once installed, start up Flowise with this command:

npx flowise start

From here, you can access FlowiseAI in your browser at localhost:3000.

FlowiseAI initial screen designed to display chat flows
This is the screen you should see after FlowiseAI is successfully installed. (Large preview)

It’s possible to serve FlowiseAI so that you can access it online and provide access to others, which is well-covered in the documentation.

Setting Up Retrievers

Retrievers are templates that the multi-prompt chain (which we will add in a later step) queries.

Different retrievers provide different templates that query different things. In this case, we want to select the Prompt Retriever because it is designed to retrieve documents like PDF, TXT, and CSV files. Unlike other types of retrievers, the Prompt Retriever does not actually need to store those documents; it only needs to fetch them.

Let’s take the first step toward creating our career assistant by adding a Prompt Retriever to the FlowiseAI canvas. The “canvas” is the visual editing interface we’re using to cobble the app’s components together and see how everything connects.

Adding the Prompt Retriever requires us to first navigate to the Chatflow screen, which is actually the initial page when first accessing FlowiseAI following installation. Click the “Add New” button located in the top-right corner of the page. This opens up the canvas, which is initially empty.

Empty canvas
(Large preview)

The “Plus” (+) button is what we want to click to open up the library of items we can add to the canvas. Expand the Retrievers tab, then drag and drop the Prompt Retriever to the canvas.

Retrievers tab
(Large preview)

The Prompt Retriever takes three inputs:

  1. Name: The name of the stored prompt;
  2. Description: A brief description of the prompt (i.e., its purpose);
  3. Prompt system message: The initial prompt message that provides context and instructions to the system.

Our career assistant will provide career suggestions, tool recommendations, salary information, and cities with matching jobs. We can start by configuring the Prompt Retriever for career suggestions. Here is placeholder content you can use if you are following along:

  • Name: Career Suggestion;
  • Description: Suggests careers based on skills and experience;
  • Prompt system message: You are a career advisor who helps users identify a career direction and upskilling opportunities. Be clear and concise in your recommendations.
Configuring the Prompt Retriever with inputs
(Large preview)

Be sure to repeat this step three more times to create each of the following:

  • Tool recommendations,
  • Salary information,
  • Locations.
Four configured prompt retrievers on the canvas
(Large preview)

Adding A Multi-Prompt Chain

A Multi-Prompt Chain is a class that consists of two or more prompts that are connected together to establish a conversation-like interaction between the user and the career assistant.

The idea is that we combine the four prompts we’ve already added to the canvas and connect them to the proper tools (i.e., chat models) so that the career assistant can prompt the user for information and collect that information in order to process it and return the generated career advice. It’s sort of like a normal system prompt but with a conversational interaction.

The Multi-Prompt Chain node can be found in the “Chains” section of the same inserter we used to place the Prompt Retriever on the canvas.

Inserting the multi-prompt chain to the canvas
(Large preview)

Once the Multi-Prompt Chain node is added to the canvas, connect it to the prompt retrievers. This enables the chain to receive user responses and employ the most appropriate language model to generate responses.

To connect, click the tiny dot next to the “Prompt Retriever” label on the Multi-Prompt Chain and drag it to the “Prompt Retriever” dot on each Prompt Retriever to draw a line between the chain and each prompt retriever.

The chain connected to each prompt retriever
(Large preview)
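Conceptually, the routing job the Multi-Prompt Chain performs can be sketched in a few lines: given the user’s message, pick whichever prompt’s description matches best. This toy version uses naive word overlap, where the real chain delegates the decision to a language model; the descriptions mirror the four retrievers we configured above.

```python
# Toy stand-in for the Multi-Prompt Chain's routing step: score each
# retriever's description by word overlap with the user's message and pick
# the best match. The real chain delegates this decision to an LLM.
RETRIEVERS = {
    "Career Suggestion": "suggests careers based on skills and experience",
    "Tool Recommendation": "recommends tools to learn for a career",
    "Salary Information": "provides typical salary ranges for careers",
    "Locations": "lists cities with matching jobs and good quality of life",
}

def route(message):
    words = set(message.lower().strip("?!.").split())
    return max(RETRIEVERS, key=lambda name: len(words & set(RETRIEVERS[name].split())))

print(route("What is the typical salary for this career?"))  # → Salary Information
```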

Integrating Chat Models

This is where we start interacting with LLMs. In this case, we will integrate Anthropic’s Claude chat model. Claude is a powerful LLM designed for tasks related to complex reasoning, creativity, thoughtful dialogue, coding, and detailed content creation. You can get a feel for Claude by registering for access to interact with it, similar to how you’ve played around with OpenAI’s ChatGPT.

From the inserter, open “Chat Models” and drag the ChatAnthropic option onto the canvas.

Inserting the ChatAnthropic node to the canvas
(Large preview)

Once the ChatAnthropic chat model has been added to the canvas, connect its node to the Multi-Prompt Chain’s “Language Model” node to establish a connection.

Connecting the language model to the multi-prompt chain
(Large preview)

It’s worth noting at this point that Claude requires an API key in order to access it. Sign up on the Anthropic website to create a new API key. Once you have it, provide it to the ChatAnthropic node in the “Connect Credential” field.

Anthropic API field with the credential name and API key
(Large preview)

Adding A Conversational Agent

The Agent component in FlowiseAI allows our assistant to do more tasks, like accessing the internet and sending emails.

It connects external services and APIs, making the assistant more versatile. For this project, we will use a Conversational Agent, which can be found in the inserter under “Agent” components.

Adding the Conversational Agent to the canvas
(Large preview)

Once the Conversational Agent has been added to the canvas, connect it to the Chat Model to “train” the model on how to respond to user queries.

Conversational Agent connected to the Chat Model
(Large preview)

Integrating Web Search Capabilities

The Conversational Agent requires additional tools and memory. For example, we want to enable the assistant to perform Google searches to obtain information it can use to generate career advice. The Serp API node can do that for us and is located under “Tools” in the inserter.

Adding the Serp API node to the canvas
(Large preview)

Like Claude, Serp API requires an API key to be added to the node. Register with the Serp API site to create an API key. Once the API is configured, connect Serp API to the Conversational Agent’s “Allowed Tools” node.

Connecting Serp API to the Conversational Agent
(Large preview)
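Conceptually, the agent’s decision to reach for a tool can be sketched like this. The keyword check and the stubbed search function are invented stand-ins: the real agent uses the language model to decide, and Serp API to actually search.

```python
# Toy dispatch logic for a conversational agent: if the request appears to
# need fresh information, call the (stubbed) web-search tool; otherwise
# answer from the model alone. The keywords and the stub are invented.
def stub_web_search(query):
    return f"[search results for: {query}]"

TOOLS = {"web_search": stub_web_search}

def agent(message):
    if any(kw in message.lower() for kw in ("current", "latest", "today")):
        return TOOLS["web_search"](message)
    return "[answer from the language model alone]"

print(agent("What are the latest front-end job openings in Berlin?"))
```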

Building In Memory

The Memory component enables the career assistant to retain conversation information.

This way, the app remembers the conversation and can reference it during the interaction or even to inform future interactions.

There are different types of memory, of course. Several of the options in FlowiseAI require additional configurations, so for the sake of simplicity, we are going to add the Buffer Memory node to the canvas. It is the most general type of memory provided by LangChain, taking the raw input of the past conversation and storing it in a history parameter for reference.

Buffer Memory connects to the Conversational Agent’s “Memory” node.

Connecting Buffer Memory to the Conversational Agent
(Large preview)
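The idea behind Buffer Memory is small enough to sketch in a few lines of Python: append each raw turn to a history list and hand the whole thing back as context for the next model call. This is an illustration of the concept, not FlowiseAI’s internals, and the sample conversation is invented.

```python
# Minimal buffer memory: store the raw turns of the conversation in a
# history list and hand them back as context for the next model call.
class BufferMemory:
    def __init__(self):
        self.history = []

    def save(self, user_msg, assistant_msg):
        self.history.append(f"User: {user_msg}")
        self.history.append(f"Assistant: {assistant_msg}")

    def load(self):
        return "\n".join(self.history)

memory = BufferMemory()
memory.save("I enjoy design and coding.", "Front-end development could suit you.")
memory.save("What should I learn first?", "Start with HTML, CSS, and JavaScript.")
print(memory.load())
```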

The Final Workflow

At this point, our workflow looks something like this:

  • Four prompt retrievers that provide the prompt templates for the app to converse with the user.
  • A multi-prompt chain connected to each of the four prompt retrievers that chooses the appropriate tools and language models based on the user interaction.
  • The Claude language model connected to the multi-prompt chain to “train” the app.
  • A conversational agent connected to the Claude language model to allow the app to perform additional tasks, such as Google web searches.
  • Serp API connected to the conversational agent to perform bespoke web searches.
  • Buffer memory connected to the conversational agent to store, i.e., “remember,” conversations.
Showing the entire workflow on the canvas
(Large preview)

If you haven’t done so already, this is a great time to save the project and give it a name like “Career Assistant.”

Final Demo

Watch the following video for a quick demonstration of the final workflow we created together in FlowiseAI. The prompts lag a little bit, but you should get the idea of how all of the components we connected are working together to provide responses.


As we wrap up this article, I hope that you’re more familiar with the concepts, use cases, and tools of large language models. LLMs are a key component of AI because they are the “brains” of the application, providing the lens through which the app understands how to interact with and respond to human input.

We looked at a wide variety of use cases for LLMs in an AI context, from chatbots and language translations to writing assistance and summarizing large blocks of text. Then, we demonstrated how LLMs fit into an AI application by using FlowiseAI to create a visual workflow. That workflow not only provided a visual of how an LLM, like Claude, informs a conversation but also how it relies on additional tools, such as APIs, for performing tasks as well as memory for storing conversations.

The career assistant tool we developed together in FlowiseAI was a detailed visual look inside the black box of AI, providing us with a map of the components that feed the app and how they all work together.

Now that you know the role that LLMs play in AI, what sort of models would you use? Is there a particular app idea you have where a specific language model would be used to train it?


Smashing Editorial
(gg, yk)