8 min read

In Defense of Manual User Needs Tagging

Automated user needs labelling is easier than ever — but you need to be aware of what you risk losing, and whether it’s worth it before opting for speed.
Photo: Lars K Jensen (All rights reserved)

 

This article is a part of my series on user needs for publishers.

 

Working with user needs is still one of the most important – and hyped, rightfully so, if you ask me – trends in journalism, media and communication in general.

Getting a more user-focused prism and language around your stories is powerful stuff, and the user needs approach has the major benefit that it can actually work day-to-day in editorial environments and newsrooms.

(You can find links to some of my other posts on user needs at the end of this post.)

It goes without saying, though, that user needs are pretty useless until you tie them to your stories; in practical terms meaning that you start tagging your content.

 

My name is Lars K Jensen, and I work with journalism, data and editorial insights as the Audience Development Lead at Berlingske Media in Denmark. 

Feel free to connect on LinkedIn and say hi.

 

Generally, this user needs labelling of stories can be done in two ways: human and machine. Both have multiple levels; human tagging, for instance, can be done by the reporters writing the stories, by their editors, or centrally by a few people or just one – as we do at Berlingske, where I do the tagging myself.

The various approaches have various advantages and risks – and while automated tagging will give you incredible speed, you risk missing important learnings and insights.

As I wrote in a LinkedIn post on the topic of user needs tagging:

"Not to defend my medieval ways of working (well, probably also that) but more to remind everyone that every tech decision and automation usually comes with a loss of skill — and knowledge.

As I said during a talk about AI at one of our universities recently, you need to be sure that you are okay with — and ready for — that loss.

The same goes with user needs in journalism and other content industries.

Tagging stories with user needs is time-consuming work, yes, but it’s important work. Because you gain unique insights into the stories you are producing (and the value proposition you are serving) as a publisher and how they resonate with your audience.

Once you give that job to a machine, you risk losing that continuous, deep insight — in favour of speed (hopefully without losing accuracy)."

My first user needs analysis for Berlingske was based on three weeks’ output, around 750 articles.

(When we’re tagging we only look at the beginning of an article — headline, sub-headline, image and the first paragraphs. This allows us to focus on conversions; buying, logging in or deciding to read the article in front of you. So going through a large number of articles isn’t as daunting as it may sound.)
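To make that record-keeping concrete, here is a minimal sketch in Python of what a manual tagging record and a batch summary could look like. All field names, user needs and example articles below are invented for illustration; they are not Berlingske's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical record for one manually tagged article.
@dataclass
class TaggedArticle:
    headline: str
    sub_headline: str
    opening: str            # only the beginning of the article is reviewed
    user_needs: list = field(default_factory=list)  # one or more needs
    notes: str = ""         # why this need was chosen -- the "learning" part

def needs_distribution(articles):
    """Count how often each user need appears across a batch of articles."""
    counts = {}
    for article in articles:
        for need in article.user_needs:
            counts[need] = counts.get(need, 0) + 1
    return counts

batch = [
    TaggedArticle("Budget deal reached", "What it means for you",
                  "The government agreed...", ["Educate me"]),
    TaggedArticle("Election night live", "Follow the results",
                  "Polls have closed...", ["Update me"]),
    TaggedArticle("Minister resigns", "Pressure mounted for weeks",
                  "After weeks of criticism...", ["Update me", "Give me perspective"]),
]

print(needs_distribution(batch))
# → {'Educate me': 1, 'Update me': 2, 'Give me perspective': 1}
```

The point of the `notes` field is exactly the knowledge this post argues for: when you tag by hand, the "why" behind each label stays with you and makes follow-up analyses cheap.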

I learned so much doing that by hand; not only did it help me get to know Berlingske (I had just started working there), it also allowed me to expand and refine the analysis throughout the tagging process – instead of having isolated tagging and analysis phases.

Label and learn

It may not be the fastest or most efficient way, but it is highly creative (the way you work with your analysis and insights throughout), and in the end I saved a lot of time by refining the analysis, the parameters etc. as early as the tagging phase.

And I still tag and learn.

Every week I label around 250 to 300-ish articles published behind the paywall at Berlingske and I learn something new every time. I constantly get to know the journalism we are publishing and it also keeps me in sync with what matters to our reporters and editors.

And it also makes it very easy to do follow-up and adjacent analyses, as I'll get back to later in this post.

When you are in an internal consultant sort of job like mine, these insights are invaluable and a prerequisite for what we are trying to do and change when working with user needs.

If we automated that tagging, my ability to help Berlingske would be limited and the constant flow of new knowledge would be hampered.

I'm not saying it would ruin the whole thing, but my guess is that user needs would go into a sort of "maintenance mode" where we would risk losing iterations, constant, meticulous analysis and check-ins across editorial and commercial (where I sit).

There are some very powerful automated tagging tools out there which can get you up and running in almost no time, but you need to be aware of what you risk losing.

One of those tech companies is Smartocto, which works with a lot of publishers and has one of the most refined user needs tagging engines in the industry – or so I'm told 😊

In a comment conversation with me below my LinkedIn post, Smartocto's Chief AI Officer, Goran S. Milovanović, wrote something I agree very much with:

"If a user does not understand the knowledge domain being engineered, engineering and automation themselves will not be too helpful."

This is not to say that you should not automate - but you should beware of what you risk missing. And you should make sure that the work will be based on a proper foundation.

If you are curious, Milovanović has written a blog post on automated user needs tagging at the Smartocto website (they also did a webinar on automated tagging).

Tech now or tech later?

I recently spoke with a European publisher who was waiting for the technology part of their user needs implementation. There is no single right way to do this, but had it been me, I would probably have done something along the lines of:

  • Started off with a small analysis
  • Learned during the process
  • Presented the early findings to colleagues and management
  • Reiterated and expanded the analysis
  • Presented once again

...and then set out to automate what has been shown to colleagues and bosses and what matches your strategy, vision, journalism and philosophy – and the sort of social contract that your journalism and its value creation/exchange is a part of.

That way you set out to automate something that you already know is working and providing insights.

If you are not automating something that already works for you, you run the risk of automating the process of copying ideas from others (maybe even without succeeding) – perhaps because of a lack of knowledge, or differences between you and whoever you copied from.

And that way of innovating doesn't need automating. It needs to disappear.

If or when automating user needs tagging (or anything else, for that matter) I would always recommend that you ask these questions:

  • What are we trying to automate?
  • Why do we want to automate that?
  • What are the scenarios – what do we risk losing in terms of skills and knowledge?
  • How will the machine decide or suggest? How is the algorithm built?
  • What data is it trained on? Is that comparable to us?
  • Can it be trained on our content and data?
  • What is the definition of success in this particular automation and/or implementation?

Adjacent analyses

Another benefit of tagging by hand is that it's very easy to dive to a deeper level in your analyses, because you know why a certain article belongs to a certain need (or more than one).

Let me give you an example.

Recently, I did a more focused user needs analysis for Berlingske. We had noticed that one of our core needs had been struggling for some time. Even though conversions were still looking good, some of the articles serving that need were underperforming – both in terms of conversions and consumption.

So we took a deep dive into that need in a separate analysis.

That particular need can, for us, be split into various sub-types – meaning various ways that user need can be met. It has a lot to do with how the story is presented and the format of the story or content.

We matched those sub-types with a select list of topics and keywords to gain deeper knowledge of when and how this need and these topics perform – and why they don't.

Doing this analysis took me very little time, since I had already done the labelling and knew why I had labelled every story with that particular need.
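As a rough sketch of what that matching can look like in practice, here is a small Python example that crosses sub-types of one need with a watched topic list and aggregates performance per pair. The sub-types, topics and numbers are all invented for illustration:

```python
from collections import defaultdict

# Invented example data: articles tagged with one user need,
# split into sub-types, with illustrative performance numbers.
articles = [
    {"sub_type": "explainer", "topics": ["housing"], "conversions": 4, "reads": 900},
    {"sub_type": "explainer", "topics": ["economy"], "conversions": 1, "reads": 300},
    {"sub_type": "q_and_a",   "topics": ["housing"], "conversions": 6, "reads": 1200},
]

watched_topics = {"housing", "economy"}

# Aggregate performance per (sub-type, topic) pair.
performance = defaultdict(lambda: {"articles": 0, "conversions": 0, "reads": 0})
for a in articles:
    for topic in a["topics"]:
        if topic in watched_topics:
            key = (a["sub_type"], topic)
            performance[key]["articles"] += 1
            performance[key]["conversions"] += a["conversions"]
            performance[key]["reads"] += a["reads"]

for (sub_type, topic), stats in performance.items():
    print(sub_type, topic, stats)
```

The aggregation itself is trivial; the hard part – knowing which sub-type each story belongs to and why – is exactly what the manual tagging already gave me.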

As I wrote in my LinkedIn post:

"Did it take time? Yes.

But was it easier, because I already knew why those stories were tagged with that user need? Yes, of course.

But… was the analysis interesting and impactful and was it really easy to get to actionable insights and recommendations?

Oh, you bet 😊"

(And you can add to the list: "Would it have been possible without continuous conversations with the newsroom? Of course not.")

Yes, you can do something similar with automated tagging and AI, but it would take time for the system to understand and execute the new instructions (splitting a user need into sub-types and then matching it with a list of topics that you introduce during the analysis), and you would have to check in on it to verify the quality.

And once you present such an analysis to people like editors-in-chief you are met with questions, ideas and assumptions – and here it's (also) key to have that knowledge ready and shareable and easy to iterate on.

Final thoughts

I am not writing this to insist that you tag manually. You obviously need to choose the path forward that is right (or the best one possible) for you and the situation you are in.

Also, I am a member of the AI group at Berlingske Media (where I work in an area covering journalism, data and technology), so this isn't about bashing AI. It's about doing it right.

Automating always involves losing something: knowledge, skills or something else. But obviously there are gains as well. As with so many other things, you need to make sure that the gains outweigh the pains and losses.

And do what you can to mitigate those losses - and that applies whether you go automated or manual.

To nobody's surprise, I would always recommend you start with a short and rather quick analysis. This way, you won't just be pitching the user needs model but also the insights it can give you – and, more importantly, the performance lifts you can gain by using it.

When that is in place and user needs are working for you and your audience, then I would look at introducing machinery and automation to the equation. If AI is the way you want to go.

Thank you for your time.

Below is a list of previous posts in this newsletter based on my work with user needs and implementing the model into newsrooms.

An Introduction to User Needs for News Publishers
For years publishers have been pushing news on an endless cycle. In a free ad-based world where clicks rule that might make sense. A subscription business doesn't work like that.
🗓️ May 18, 2021

User Needs for Publishers: From Theory to Practice
User needs are easier to work with if you combine them with topics and customer segments that are important to you.
🗓️ February 8, 2022

The Importance of Analysis Activation
An analysis is like a sponsorship: Success depends on it being activated with the audiences or people it is intended for. That's where the magic happens.
🗓️ August 5, 2022

Innovating into a Newsroom: 5+1 Things to Remember
The term "innovation in the newsroom" has been thrown around a lot in recent years. And rightfully so, because innovation is inherently essential for the future of journalism. But creating actual change (for the better) is about so much more than technology and data.
🗓️ April 18, 2023

Implementing User Needs Is Cultural Change
It's a prism, a model for analyzing and developing journalism and publishing stories. But putting it into work is change management. Here are my tips based on actually doing it.
🗓️ March 19, 2024

Why You Should Do an Internal Newsletter
Internal communication is insanely important – whether you are building a new domain, working across silos or doing work that just doesn't get enough exposure in the organisation.
🗓️ May 26, 2024

User Needs: An Easy, Effective Exercise To Get You Started
Sometimes it is the little things that make the difference. Here is an exercise I've had great results with for as long as I've worked with user needs.
🗓️ June 6, 2024

