Welcome to the Age of Filtering


Historians like to break down civilization into revolutions or ages. These typically hinge on certain technologies or system changes that bring about massive societal change.

 

There are many ways to classify them, but as I see it the big ones were agriculture, industrialization, computers and, most recently, information. Let’s ignore the first three for now.

 

The information age

We are currently in an age where knowledge and information are expanding at an exponential pace, and we clearly have no hope of consuming more than a tiny fraction of it. We have almost exhausted the utility of simply collecting more information, although we will continue to acquire it relentlessly because – well, because we can.

 

It is becoming cheaper and easier to collect data. In fact, I’ve read that 90% of the data we have collected is from the past two years. Think about that for a moment. I could try to quantify the numbers, but they would be incomprehensible. My brain cannot think in terms of zettabytes (about 250 billion DVDs, if you are curious). My guess is that yours cannot either.
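For scale, here is a quick back-of-the-envelope check of that DVD comparison. It is my own arithmetic, assuming a standard 4.7 GB single-layer DVD and 1 ZB = 10^21 bytes:

```python
# Back-of-the-envelope check of the zettabyte/DVD comparison above.
# ASSUMPTION: 1 ZB = 10**21 bytes and a single-layer DVD holds ~4.7 GB.

ZETTABYTE_BYTES = 10**21
DVD_BYTES = 4.7e9  # single-layer DVD, roughly 4.7 billion bytes

dvds_per_zettabyte = ZETTABYTE_BYTES / DVD_BYTES
print(f"1 ZB is roughly {dvds_per_zettabyte / 1e9:.0f} billion DVDs")
# prints: 1 ZB is roughly 213 billion DVDs -- the same ballpark as the figure above
```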

 

So what is the next big revolution or age?

 

As the famous quip goes, “predictions are hard, especially about the future,” but I think we can make an educated guess about what needs to come next: filtering and cleaning up all of this data and making sense of it.

 

I know this is not quite as sexy as you hoped for, but it does involve the glamorous topic of artificial intelligence and cool science-fiction-type stuff, so hang on with me here for a few minutes.

 

There is a basic form of filtering already here as evidenced by search algorithms, which are a rudimentary form of weak artificial intelligence (AI). Type a question into a search engine and you will get a nice list of results that are likely to be at least a little relevant to your problem, but this is just the tip of the iceberg.

 

When I search wouldn’t it be nice to get a blog/article/video that is extremely relevant to me, regardless of its popularity? Maybe what I really need is buried on page 18 of my search results rather than on the first page. Maybe this is very different from what you want or need.

 

Search attempts to do this based on your prior search history, scanning your email for keywords, and other nebulous factors that I don’t fully understand. In essence it is a popularity contest and largely a social phenomenon. It guesses what you want using a statistical model based on what other people want. This is the limit of weak AI.
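To make that concrete, here is a toy sketch of popularity-first ranking with only a thin personal layer on top. It is purely illustrative – the fields, weights, and numbers are mine, not any real search engine’s algorithm:

```python
# Toy illustration of popularity-first ranking with a thin personal layer.
# HYPOTHETICAL: the fields, weights, and numbers are mine, not a real engine's.

from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    clicks: int    # aggregate popularity: what other people wanted
    topics: set    # crude description of what the document is about

def rank(docs, my_interests):
    def score(doc):
        popularity = doc.clicks                          # dominant signal
        personal = 10 * len(doc.topics & my_interests)   # small personal nudge
        return popularity + personal
    return sorted(docs, key=score, reverse=True)

docs = [
    Doc("Viral productivity listicle", clicks=50_000, topics={"productivity"}),
    Doc("Obscure essay on loneliness", clicks=40, topics={"loneliness"}),
]
for doc in rank(docs, my_interests={"loneliness"}):
    print(doc.title)
# The viral piece still ranks first: popularity swamps the personal signal,
# which is exactly the limitation described above.
```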

 

The bigger, more important question is: how can an algorithm really know what I need to read, what will resonate with me, or how to deliver the information to me in the most effective way?

 

Have you noticed that sometimes you can read information from multiple sources that pretty much say the same thing, but you only connect with a small sample? Something about how the combination of words and ideas mixed with a compelling story hits you in the right emotional state and circumstances and BOOM…transformation.

 

The information and message stick, even though you may have read similar facts or opinions hundreds of times. This is curious. Is this an anomaly of true intelligence and consciousness?  Can the weak AI we call search ever fully understand this?

 

Maybe we need a strong AI or at least a passable hybrid in a Chinese Room to effectively filter the information, because it is only going to get worse. We are reaching the limits of our comprehension.

 

I see this with my writing. I may write an article on loneliness, for instance. Maybe the article is annoying and irrelevant to 98.9999% of people, kind of useful for 1% of the people searching that term, and maybe earth-shattering for 0.0001%. The big question is ‘how do I get my article to that 0.0001%?’ because even that tiny sliver is a few thousand people at the time of my writing this.*

 

This is a big problem, and it is only getting worse as the amount of information increases. Currently there is no way for the weak AI in search engines to really know how much you would benefit from my article. It is guessing based on a primitive algorithm. I think everyone should blog, write, podcast, or do whatever information-synthesizing exercise they prefer, but in a way that only makes this problem worse. It makes it harder for people to find my material (or yours) if they really want or need it.

 

Now, the current attempt at a workaround is social media. Social media is in some ways a quasi-intelligent organism that disperses information where it thinks it should go. This is interesting, but it has its limitations.

 

Let me explain.

 

If I share something on social media I am limited to a small audience, but at least they kind of know me and the kind of things I find interesting. Therefore, they can choose to read or not read based on their previous knowledge of my values and interests.

 

So far, so good.

 

But they don’t know everything about me, and of course I have a selection bias in what I am going to share. I probably won’t share something that is deeply personal or painful for me, or anything too controversial, as this may alter people’s opinion of me and affect whether or not they share or read something I post in the future.

 

I may be sharing things to groom my online image. I may choose not to share something that had a tremendous impact because of fear of what others may think.

 

Will they judge me if I share something controversial and interesting? Will they assume certain things about my religious or political leanings? What if I am suicidal and read the article that stops me from pulling the trigger…will I share that? Probably not.

 

Currently all curation is a form of censorship when viewed through a cynical lens.

 

The things that tend to get shared on social media are the things that are so well written, so entertaining, so universal…in other words the very things that are popular, but maybe not all that useful to you.

 

AI will likely not have this problem, at least in theory. The big problem with AI of course is that no one really knows what will happen when we finally create a strong AI. Will it be Skynet and Terminator or a Star Trek type utopia?

 

Or will it be something our small primitive wetware can’t even comprehend…

 

I don’t know what this will look like, but I know it is coming. It is coming because it is the natural evolution of a solution to this problem.

Strong vs. robust weak AI?

I don’t know the answer to this question, but it seems to me that a strong AI will be needed, if we even manage to build one (or if one builds itself). Empathy and understanding (or at least the illusion of it) will be necessary. The AI must understand what we want, what we need, and the meaning of the content it might recommend to us.

 

A more advanced system would filter the information by our cognitive ability, language, learning style, etc. I’m not sure a robust but weak AI could accomplish this, but then again maybe humans are just a form of robust weak AI. It’s hard to comprehend something significantly more intelligent than you. An ant has no insight into the intelligence of humans, as it is limited by the constraints of its own intelligence.
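As a purely illustrative sketch of what “filtering by the reader” could mean – the profile fields, weights, and numbers below are hypothetical inventions of mine, not taken from any existing system – content might be scored against a reader before popularity enters the picture at all:

```python
# Hypothetical sketch of reader-aware filtering; profile fields and weights
# are invented for illustration, not taken from any existing system.

from dataclasses import dataclass

@dataclass
class Reader:
    language: str
    reading_level: int     # e.g. 1 (introductory) to 10 (expert)
    preferred_format: str  # "text", "audio", or "video"

@dataclass
class Content:
    title: str
    language: str
    reading_level: int
    format: str

def fit(reader, item):
    """Score 0-1 for how well an item fits this particular reader."""
    score = 0.0
    if item.language == reader.language:
        score += 0.4
    if item.format == reader.preferred_format:
        score += 0.2
    # Penalize material far above or below the reader's level.
    score += 0.4 * max(0.0, 1 - abs(item.reading_level - reader.reading_level) / 10)
    return score

reader = Reader(language="en", reading_level=4, preferred_format="text")
items = [
    Content("Dense academic review", "en", 9, "text"),
    Content("Plain-language explainer", "en", 4, "text"),
]
for item in sorted(items, key=lambda i: fit(reader, i), reverse=True):
    print(f"{fit(reader, item):.2f}  {item.title}")
# The explainer (1.00) outranks the review (0.80) for this reader,
# regardless of how popular either one is.
```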

 

This scares me and excites me at the same time, but is a topic for another day perhaps.

 

Think of the implications. What if I could have 10,000-20,000 words of exactly the information and advice I need delivered to me daily? How could that change my life?

 

I recently read a blog post about blogging (exciting, right?) that was excellent. It was what I needed to read; maybe the timing was off a little, but it was great. The problem was that finding it was a complete accident. I was just bored and swiping through Twitter when I stumbled upon it. It was dumb luck that I found it. If you are reading my article right now, that is probably luck as well (or maybe bad luck, depending on your perspective).

 

I want information delivered to me when I need to access it. I don’t want to read 10 useless articles for every 1 that is useful. I want and need better filtering.

 

I don’t know how this will evolve, but I do think that filtering of information will be the next revolution. Generating and collecting data is easy. Filtering is hard.

 

It may be ushered in gradually or arrive all at once. It may be overshadowed by other implications of artificial intelligence.

 

Of note, a darker side of this (and some may argue it already happens) is a third party gaining control over the information and selectively filtering it to you based on their agenda. An even scarier thought experiment is the AI itself having an agenda: imagine a strong AI dispersing information based on its own motives.

 

Humans are very susceptible to propaganda from other humans. How would we react to a consciousness that is twice as smart…or 1000 times as smart? What would history look like when every electronic document could be altered simultaneously to reflect a certain ‘truth’?

 

This is the kind of thing that makes for good science fiction, but may be closer to reality than we think.

 

In the meantime, please be a good filter yourself and share this article with those you think would enjoy it 🙂


*Rough calculation: estimated total internet users multiplied by 0.0001%. Currently that article has only 150 views, which means there is some room for improvement.
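A minimal version of that back-of-the-envelope calculation – the internet-user count is my own rough assumption for illustration, not a measured figure:

```python
# Back-of-the-envelope version of the footnote's calculation.
# ASSUMPTION: roughly 3.4 billion internet users at the time of writing;
# that figure is an illustrative estimate of mine, not a measured statistic.

internet_users = 3.4e9
earth_shattering_share = 0.0001 / 100   # 0.0001% expressed as a fraction

target_readers = internet_users * earth_shattering_share
print(f"{target_readers:,.0f} potential readers")   # about 3,400 people
```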
