Beating the filter bubble

Humans love having their opinions confirmed. It makes us feel part of a community; understood – even loved. Alongside digitalisation and the automated filtering of the incredible amount of information available to us online, this human instinct has become the driving force behind what’s referred to as a digital filter bubble.

Information consumption in 2020

If there’s one area where digitalisation and globalisation have made a truly profound impact, it’s in media. Newspapers have gone from analogue to digital. Journalists have gone from local to global. And news consumers have gone from reading a handful of publications to reading thousands. With the Internet, we have access to everything, everywhere. In fact, there is so much information at hand that we’re having a hard time figuring out how we’re going to process it.

The most obvious and well-documented effect of this development is the decrease in the human attention span. Bombarded with information 24/7, online users have shortened their focus as a defence mechanism (down from 12 seconds to 8 between 2000 and 2013, according to one widely cited study), and this has a serious impact on how news is created – and distributed. So what do online platforms need to do in order to make sure people still read what they’re putting out there? Filter it.

What is a filter bubble?

In March 2011, Eli Pariser coined the term ‘filter bubble’ at the TED2011 event. He raised the question of what happens when the news consumer is no longer responsible for the information they consume. When the digital news feed we see isn’t of our own making, but a computer’s – and is based on personal relevance, rather than objective importance. In his talk, Pariser identified algorithms as the main culprit in today’s filter bubble problem.

Algorithms are pieces of computer code put in place on various digital platforms. Their objective is to analyse the behaviour of the website’s visitors, determine which information is more relevant to each individual, and so create an output that is personalised. The problem with this is that we’re no longer in control of which information we see. And perhaps more importantly: A filter bubble also means that we don’t know which information we don’t see. The algorithm filters it out for us, leaving us none the wiser.
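To make this concrete, here is a deliberately simplified sketch in Python – purely hypothetical, since no platform publishes its actual ranking code – of how behavioural personalisation can work: items on topics you engage with rise to the top, and everything else quietly sinks out of view.

```python
from collections import Counter

def personalise(items, view_history, top_n=5):
    """Rank items by how often the user engaged with each topic.

    items: list of (title, topic) pairs; view_history: list of topics
    the user previously spent time on. Purely illustrative.
    """
    interest = Counter(view_history)  # topic -> engagement count
    # Topics the user never engaged with score 0 and sink to the bottom --
    # this is how content quietly disappears from a personalised feed.
    return sorted(items, key=lambda item: interest[item[1]], reverse=True)[:top_n]

feed = personalise(
    [("Cute otters", "animals"), ("Easy lasagne", "recipes"),
     ("Kitten rescue", "animals"), ("Budget tips", "finance")],
    view_history=["animals", "animals", "animals", "recipes"],
)
```

Note what the sketch makes visible: the “finance” item isn’t rejected, it simply ranks last – and in a feed that only shows the top few items, last is as good as invisible.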

Your own subjective information feed

It’s easiest to understand filter bubbles in the context of social media feeds. Most of us have experienced it in practice on platforms like Facebook or Instagram. Let’s say you really like videos of animals. As you browse your social media account, you spend a lot more time looking at this content than you do on, say, recipes. After some time, the platform’s algorithm picks up on your behaviour, and uses it to start filtering a personalised output for you.

Fast forward a few weeks, and you’re loving your social media feed. It’s packed with great animal videos that are exactly the kind you like, interspersed with updates from your best and funniest friends. The thing about a filter bubble is that the consumer often doesn’t realise or think about what happened to the content they used to see. In the case of animal videos and recipes, it’s also a fairly harmless phenomenon. But what happens when you only see political posts that support your own view? When you’re only shown the newspaper articles that focus on your three favourite topics – and nothing else? 

How to break the personal filter bubble

Even though Pariser made an important point back in 2011, in that algorithms certainly contribute to creating filter bubbles, it’s an overly simplified explanation. By blaming the computers, and computers alone, we also remove all personal responsibility where online information consumption is concerned. In reality, you can do a whole lot to free yourself from the digital echo chamber that’s built for you. The first step is recognising that there is no way to consume all the information that’s out there – and that, by extension, this means you’re probably seeing a very small part of the bigger whole in your news and information feeds.

One of the best ways of breaking the filter bubble is by actively seeking out source material that’s different from what you’d usually look for. Let’s say you lean towards liberal newspapers when you want to follow political developments. Try having a look at some conservative ones. Be sure to read information from both sides of the story if you’re interested in a current conflict. We are all biased; it’s part of what makes us human. But as humans, we’re also able to recognise this bias and do something about it.

Bigger-scale efforts to pre-empt echo chambers

Algorithms are used practically everywhere on digital platforms, and in recent years, the spotlight has turned to what this means for data privacy. If Google, Facebook or a media outlet is tracking your every move, doesn’t that somehow infringe on your right to privacy? The EU’s General Data Protection Regulation came into effect in 2018, as an effort to stifle the extreme growth of personal data collection by big data companies. The idea was to give power back to the online user, allowing them to control what information digital platforms could collect about them.

By extension, this actually helps pre-empt digital filter bubbles. When you can turn off personalisation, it gives the algorithms less material to work with. It may mean you see a lot of content that feels irrelevant to you. But the point is that you do see it. It isn’t removed from your information feed before you’re even made aware of its existence. International initiatives like this, which question large-scale data collection and its effects, will be important in forestalling the creation of filter bubbles.

Ansofy’s tools for making a difference

Another approach to actively preventing echo chambers is source and fact checking. The Ansofy news feeds, created with sophisticated AI technology, do exactly that. The AI journalist compiles the facts – and only the facts – from numerous sources reporting on a specific event or topic. As the reportage grows, the AI also adds onto the articles, giving readers an increasingly comprehensive perspective on events around the globe.

Furthermore, Ansofy gives its users freedom of choice. You can build your news feeds from an incredible range of publications. In this manner, the app becomes a tool you can use to actively break down your own filter bubble. Dare to challenge your views by subscribing to material that doesn’t necessarily agree with your own opinions. The more you understand, the more objective you can be. And objectivity and self-insight are qualities we certainly need more of in today’s media sphere. 

How are AI generated stories created?

One of the most exciting developments in news production is the introduction of artificial intelligence. As a notion, it’s as ground-breaking as it is terrifying. On the one hand, it means software that can write news that is unbiased, factually accurate, and continuously updated. On the other, it’s an evolution that disrupts the status quo. What happens to journalists if AI can do their job for them? AI generated stories are a big part of Ansofy. We believe that understanding a new technology is an important first step in dealing with it.

The automated, nonfake news generation system

By and large, AI has come as a response to a phenomenon that’s been plaguing the media sphere for a while. Fake news stories are pieces of incorrect writing that aim to stir the audience and intentionally mislead them. The writer is generally trying to provoke an emotional response from the readers. That’s why they use subjective and emotionally loaded language – and usually very few actual facts. The source of this kind of (mis)information is, quite simply, human bias and emotion.

Enter: The nonfake news generation system. The framework removes all bias; the news is all facts, all objective language, all reliable sourcing. The concept is based on AI generated stories, and it’s a gigantic step away from fake news. These artificial intelligence programs are taught to write unique content based on factual details. As a result, they effectively eliminate the human factor. The approach is packed with ethical questions and considerations, of course. Still, AI generated stories are proving effective tools for fact-checking the news.

Extracting the facts for AI generated stories – and only the facts

To understand a news generation system that’s based on artificial intelligence, we need to get down to the facts – literally. We’ve established that the beauty of AI is that it’s unbiased. Still, software is only as good as its programming and its input. In the case of news, it’s crucial that this input is based on objective truth, i.e. facts. The first stage of producing AI generated stories, then, is identifying these details.

This is called an extraction process. In it, the AI extracts facts from other, human news sources, and checks them against each other. Turkish and Cypriot newspapers will report very differently on events in the area. But the facts generally remain the same when they’re stripped bare. Only the facts that have been corroborated by several news sources will be used as the basis for content generation in AI generated stories.
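As an illustrative sketch – not Ansofy’s actual pipeline – cross-source corroboration can be modelled as keeping only the facts reported by a minimum number of independent outlets. Matching paraphrased facts is of course far harder than matching the exact strings used here.

```python
def corroborated_facts(sources, min_sources=2):
    """Keep only facts reported by at least `min_sources` outlets.

    sources: mapping of outlet name -> set of extracted fact strings.
    Illustrative only: a real system must recognise the same fact
    phrased differently, not just identical strings.
    """
    counts = {}
    for facts in sources.values():
        for fact in facts:
            counts[fact] = counts.get(fact, 0) + 1
    return {fact for fact, n in counts.items() if n >= min_sources}

facts = corroborated_facts({
    "Outlet A": {"talks held on Tuesday", "minister attended"},
    "Outlet B": {"talks held on Tuesday", "protest outside venue"},
    "Outlet C": {"talks held on Tuesday"},
})
```

In this toy example, only the fact all three outlets agree on survives; the single-source claims are set aside rather than reported as fact.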

How are AI generated stories written?

The AI article writing process

So the AI has compiled a number of facts on an event and is ready to write its news story. It’s made sure to include only those pieces of information that it could find across the board, in all the sources it checked. Sophisticated AI-written content works with multi-document operations: if there isn’t enough factual basis for the news article, the AI won’t write it. And fake news is taken out of the news generation equation.

AI generated stories reformulate the facts in unique words and sentences to avoid plagiarism. Each story contains the same facts as the original sources, but instead of biased language, structure or representation of the events that took place, artificial intelligence works from unbiased programming. The reader gains an objective, factual understanding of what took place, with opinions removed from the mix.

Editing, proofing and publishing AI generated stories

A story written by artificial intelligence software goes through many of the same steps as one written by a journalist. To start, there’s a classic editing check that goes over language, syntax and coherence, to make sure the article is up to scratch and understandable. After this, the text is put through a plagiarism checker, to make sure it’s sufficiently distinct from the original sources’ pieces, and optimised for search engines, which will hold it to the same standard as human-written content.

Finally, when the text is in place and approved, it’s populated with media; images, video, and social media comments. In other words: AI generated stories, when done well, are simply news articles completely removed from bias. They help readers gain clarity and a better objective understanding of events from around the globe. As we mentioned previously in this article, though, there’s no denying that the more sophisticated artificial intelligence gets, the more we need to talk about what this development means for traditional journalism.

What do AI generated stories mean for journalism?

Forbes, The NY Times, and The Guardian have all analysed what AI in news creation actually means for journalists and writers alike. And it’s obvious: If a robot can create news articles in the same way that a human can, it won’t be long until journalists all over the world are out of a job. The trick, then, is figuring out a symbiotic balance between machine and human to turn the tides for journalism and make news profitable again. 

For the time being, artificial intelligence bots rely on one thing: input. Without any input, there’s no content creation – no production of any kind. In news generation today, this input consists of the news sources the AIs use to generate unique articles. Human journalism is necessary as source material for AI generated stories. Moving forward, the challenge will lie in figuring out how these two components can function together in a symbiotic news production market.

Ansofy's guide to spotting fake news stories

The fake news phenomenon has grown into a force to be reckoned with in the past decade. During the US presidential election of 2016, a combined 3.8 million individuals engaged with the top five fake news stories about the race published on Facebook. Since then, the winner of that election, President Donald Trump, has popularised the phrase and widened its reach considerably. For news enthusiasts looking for objective, factual news, fake news is a nuisance, and sometimes the effort put into producing these stories makes them difficult to spot. In this simple guide, we’re going to give you a rundown of the best tools for identifying fake news stories and separating them from the real ones.

What’s the difference between fake news stories and opinion pieces?

Before we take a look at the checkpoints for evaluating whether a news piece is real or not, it’s important to make a small note of how fake news and opinion pieces differ. Both tend to make use of subjective approaches to news, as well as emotional language and one-sided presentation of facts. The main difference lies in the intention behind the two.

Generally speaking, fake news stories are either trying to 

  1. spread misinformation deliberately, or
  2. increase readership (by any means necessary)

They are often politically motivated when they are attempting to spread misinformation deliberately, and financially motivated when they are attempting to increase readership. In other words: The person writing a fake news story is trying to trick their readers into acting on the information in a way that benefits the writer or the subject the writer is focusing on.

An opinion piece, on the other hand, is the writer’s subjective thoughts on something that matters to them. The underlying intention is not to misinform the reader, but to share a perspective. That’s why fake news stories are generally labelled as news, whereas opinion pieces are labelled as exactly that: opinions.

When you come across a news story you suspect is unreliable, these are the four checkpoints you should run it through.

Checkpoint 1: How many sources can you find on the topic?

It’s difficult to fake news that comes from more than one source. The very first thing you should do if you begin to question how trustworthy a news article is, is to check whether any other publications have written about the topic. If a news story is real, you’ll almost always find more than one newspaper or news outlet covering it. A good rule of thumb is that the more sources you can find, the more likely a piece is to be authentic.

The second part of this stage is checking for consistency across the different sources you find. Are the publications based on the same fundamental facts? The tone of voice and focus will vary from outlet to outlet. Still, all serious newspapers will ground their reporting in events as they’ve actually occurred. A typical telltale of fake news stories is the use of sensationalist statements as facts – ones you can’t verify anywhere else.

Checkpoint 2: What does the language sound like?

We already mentioned language briefly, but this is the second checkpoint for articles you feel unsure about. There are two aspects to this: how emotional the language is, and how correct it is. Typically, fake news authors are trying to stir their readers. They want an emotional reaction. In fact, they want such an emotional reaction that the reader doesn’t bother to check the facts. That’s why you should look for how subjectively or objectively written the piece you’re checking is. The more subjective, the more cause for fake news suspicion.

When we talk about how correctly written an article is, we’re talking about the actual grammar. Generally speaking, serious publications have thorough and rigorous editing and proofreading processes in place. That means real news tends to sound like news. Fake news, on the other hand, can be riddled with spelling mistakes and poor syntax.
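As a toy illustration of this language checkpoint – a crude heuristic, nothing like a production fact-checking model – one could count simple red flags such as emotionally loaded words and excessive punctuation:

```python
# Hypothetical word list for illustration; real systems use trained classifiers.
LOADED_WORDS = {"shocking", "outrageous", "unbelievable", "disgusting", "miracle"}

def subjectivity_signals(text):
    """Count crude red flags: loaded words plus exclamation marks."""
    words = text.lower().split()
    loaded = sum(1 for w in words if w.strip(".,!?") in LOADED_WORDS)
    return loaded + text.count("!")

subjectivity_signals("Shocking! This unbelievable miracle cure works")
```

A high count doesn’t prove an article is fake, and a low count doesn’t prove it’s real – it’s simply one of several signals, just as it is for a human reader.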

Checkpoint 3: Where was the piece published – and who wrote it?

Your best tool when you want to figure out if a piece of news can be trusted or not is source-checking – and we mean checking the actual source. The Internet is a minefield when it comes to reliable information. Then again, it’s also a brilliant asset when you want to identify fake news. Start by figuring out the actual source of the article. Where was it originally published? What kind of publication is this? Is it known for fake news, or questionable reporting? 

Following up on this, doing a background check of the article’s author is also a good idea. Journalists and bloggers today can write for as many publications as they like. If you can figure out whether the person who wrote the piece is reliable or not, you’ll be more likely to discover whether it’s fake news. Big fake news writers are generally known in the media sphere, and by doing some research you can pinpoint which voices to tune out.

Checkpoint 4: What does the headline say?

A headline says more than a thousand words – or, at least, gives you a pretty good idea of what the rest of the article will sound like. The rule of thumb where headlines and fake news are concerned is that the more tangible information the headline contains, the less likely it is to be fake news. This comes back to what we mentioned earlier: increased readership at any cost can be a motivation for fake news production.

The common term for this strategy is clickbait: headlines designed to entice and interest the onlooker, and get them to click through to another website. Once there, you may not land on a news story at all. And if you do, you should be careful about trusting the information in it. Here are a couple of examples of clickbait headlines:

“She thought she would come home to a clean house. What she discovered will shock you”

“Donald Trump is using this trick to attract social media followers”

Neither of these headlines is news story-worthy. They are poorly phrased, say very little about the actual article content, and are specifically designed to make you click – rather than give you a brief idea of the article’s focus.
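Phrasings like these are formulaic enough that even a naive filter catches many of them. The sketch below uses a handful of hypothetical patterns, for illustration only; it is nothing like a full clickbait classifier.

```python
import re

# Illustrative patterns; real clickbait detection relies on trained classifiers.
CLICKBAIT_PATTERNS = [
    r"will shock you",
    r"you won'?t believe",
    r"this (one )?trick",
    r"what (he|she|they) (found|discovered|saw)",
]

def looks_like_clickbait(headline):
    """Flag headlines matching common clickbait phrasings."""
    headline = headline.lower()
    return any(re.search(p, headline) for p in CLICKBAIT_PATTERNS)
```

Both example headlines above would trip this filter, while a headline that leads with tangible information (“Parliament passes budget bill”) would not.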

Why is it important to avoid fake news stories?

Once you’ve run a suspected article through these four checkpoints, you’ll generally have a clear idea of its reliability. For a savvy news enthusiast, fake news isn’t necessarily dangerous so much as annoying. The reason it’s important to avoid fake news stories is their intended audience: people who aren’t going to check the facts, or who don’t possess the tools to do so.

Fake news stops being harmless when it’s treated as truthful news. If fabricated pieces of information become the basis for action, a silly article suddenly turns into a societal problem. This is why it’s important to work for objective, well-researched, and well-written news for everyone, everywhere. News should give us a good understanding of the world around us, without us having to worry about whether the facts are right.

Learn more about the Ansofy news revolution here.