
Posts Tagged ‘twitter’

Do we lose any important details as we compress experience through gadgets and the social web?

Read Full Post »

Yes, Paul Carr of TechCrunch is right in many ways… the real time web, and the people powering it, can’t really handle the truth. I’ve said as much in the past. The real time web is not going to last as a viable source of data and truth; to make it reliable, it’s going to have to become far less real time. Getting to the facts takes time, resources, and sometimes vast amounts of thought (by a computer or a human).

What’s troubling, though, is that there’s a ton more misinformation pain to go through before users and/or companies figure out what to do with all this mass real time web publishing. This Ft. Hood Twitter stuff is pretty bad. The celebrity death rumors are horrible. How much worse does it have to get before our values catch up? Or maybe it’s OK? Maybe deciphering real from fake information is best left up to the end user? Maybe more information, even unreliable information, is better than less?


Read Full Post »

It wasn’t real news the other day, but it is now. Balloon Boy – a series of hoaxes. First the original hoax, and now the authorities “misleading” the media to “keep the trust” of the Heenes. So how do you go about nailing someone for lying when you used lies to trap them in their lies?

As I said the other day, there are some serious issues with TV news and the real time web. It’s clear that few folks stepped away from the situation to really consider what was going on. It’s pretty easy to blame the Heenes. BUT… the media (broadcasters and consumers) created the Heenes. So… how will we all approach these situations in the future? Instead of news and the real time web being a stiff wind fanning the flames, how can they turn into machinery for getting at the facts/truth faster? Is it even possible to be REAL TIME and get to the facts? (I don’t think so.)

If new media doesn’t figure this out, which only happens when consumers demand it, we’ll see oddities like this become far less odd, and it will get harder to decipher what’s a serious situation and what isn’t.

Read Full Post »

Methinks the best experience will end up combining real time search with regular web search. Yes, it’s nice to have unfiltered, immediate information in certain situations like breaking news or emergencies. Outside of that, synthesis is essential to keep the noise-to-signal ratio down.

I don’t so much mind the metaphor used on TechCrunch today of consciousness and memory.

Imagine having just memory or just real time consciousness – it somehow wouldn’t be very efficient for the processing of information into action. TC brings this up. Yesterday’s Michael Jackson and celebrity death coverage, and the related malware issues, showcase that without some non-real time synthesis, things get pretty messed up.

Thinking through this is not that hard. Though you can’t use citation analysis to filter results the way PageRank does, you can do similar things to get some confidence measure on the real time results. However, the more accurate you make that measure, the more processing time it takes and, thus, the less real time it is. I think some hybrid of rapid filtering with a real time presentation of streams, flagged with a big note that says UNFILTERED or UNVERIFIED, would do just fine at the top of regular web results.
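Here’s a minimal sketch of what I mean, in Python (the field names and the threshold are invented for illustration, not any real system): count how many independent sources repeat a real time claim, and flag anything below the threshold as UNVERIFIED above the regular results.

```python
from collections import defaultdict

# Hypothetical cutoff: how many independent sources must repeat a
# claim before we stop labeling it UNVERIFIED.
CORROBORATION_THRESHOLD = 3

def merge_results(realtime_items, web_results):
    """Show real time items above regular web results, labeling
    anything we haven't had time to corroborate as UNVERIFIED."""
    # Count distinct sources per claim -- a crude stand-in for the
    # citation-analysis-style confidence measure described above.
    sources_per_claim = defaultdict(set)
    for item in realtime_items:
        sources_per_claim[item["claim"]].add(item["source"])

    merged = []
    for item in realtime_items:
        n = len(sources_per_claim[item["claim"]])
        label = "FILTERED" if n >= CORROBORATION_THRESHOLD else "UNVERIFIED"
        merged.append((label, item["claim"]))
    # Regular web results follow, already ordered by ordinary ranking.
    merged.extend(("WEB", r) for r in web_results)
    return merged
```

The catch is right there in the loop: waiting for corroborating sources to show up is exactly what makes the results less real time.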

I’d use that kind of experience, for what it’s worth…

Read Full Post »

In case you ever wanted to see some nice theory + simulation + visuals, here’s a collection of Mathematica-based explorations:

Animation for Epidemic Spread

Jeff Bryant with Ed Pegg’s code on Influenza Epidemic modeling

Disease spread demonstration

SARS spread demonstration/animation

Oh, and I thought this was interesting… a nice PPT on pandemics.

I wonder if the swine flu spread through social networks with a similar dynamic? Hmmm… Perhaps one should dig through this code for mining Twitter with Mathematica and start connecting the dots. How could we do this? We’d need to pull down a lot of tweets, and we’d need some way to codify them by location or by friends/groups. What would we consider “spreading”? Is it posting a link? Replying to someone? Hmmm. Maybe this is a big fat waste of time…
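For what it’s worth, here’s a toy Python version of the “connecting the dots” step, assuming the tweets have already been pulled down into simple records (the field names are made up for this sketch, not any real API): treat a link appearing in a tweet as an infection and trace it through the follow graph.

```python
def trace_spread(tweets, follows, link):
    """Trace a link 'infecting' users through the follow graph.

    tweets:  list of dicts sorted by time,
             e.g. {"user": "alice", "links": {"http://example.com"}}
    follows: dict mapping each user to the set of users they follow
    """
    infected = set()  # users who have posted the link so far
    chain = []        # (earlier_poster, later_poster) transmission edges
    for t in tweets:
        if link not in t["links"]:
            continue
        # Did this user follow anyone who had already posted it?
        for source in follows.get(t["user"], set()) & infected:
            chain.append((source, t["user"]))
        infected.add(t["user"])
    return infected, chain
```

With timestamps attached to each transmission edge, you could try fitting the same kind of epidemic curves the Mathematica demonstrations above produce.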

Read Full Post »

The market will not support all these photosharing sites in the long run.

Here’s why:

Twitter will be purchased, or exclusively locked up in a strategic relationship, by one of the companies that already has a photosharing setup/photo monetization platform.

Under that scenario, the acquiring company is unlikely to promote such a disaggregated approach to the aggregation of media in microblogging. If there is money to be made with Twitter, it involves pushing people into monetizable experiences, like monetizable media destinations and transactions/etailing. (The social networks have finally figured this out, e.g. MySpace/Citysearch.)

Sure, there will still be boutique image hosts and tiny-URL providers, etc. But ultimately the world just doesn’t need 20 different places to dump your photos. The ones that hang around will be the ones most tightly coupled with the apps that encourage the uploading.

Read Full Post »

This is a pretty useful data dump from Google’s CEO:

Economic situation pretty dire. Combination of what we’ve seen does not appear to have a bottom. People are using the Internet more. Obviously will affect online ad market because our systems are so tightly tuned. It will eventually be reflected in CPC, CPM. We are not immune to this. We may be better positioned from ad perspective, but ultimately the real pain felt by companies worldwide will sometime translate to our world

Read Full Post »

Update 2/17/09: Here’s a fun piece on CNN about MDs using Twitter from the OR. Again, this is NOT particularly useful data being generated. It is, however, an excellent BROADCAST tool: surgeons pushing out updates is useful to families and friends. As useful information unto itself, though, this content has no reuse outside the context of that surgical operation. Perhaps an aggregation and synthesis (not real time) would be useful for spotting trends across operations, but there are other, more efficient ways of computing and comparing data from operations.

OK, so perhaps this is why VCs, media pundits, and internet geeks gush over Twitter: the idea that it represents some collective thought stream/collective brain.

The most common statement about why this collective stream of drivel has value comes in this excerpt from the TechCrunch post:

Twitter may just be a collection of inane thoughts, but in aggregate that is a valuable thing. In aggregate, what you get is a direct view into consumer sentiment, political sentiment, any kind of sentiment. For companies trying to figure out what people are thinking about their brands, searching Twitter is a good place to start. To get a sense of what I’m talking about, try searching for “iPhone,” “Zune,” or “Volvo wagon”.

Viewing the proposed examples SEEMS to validate the claim. However, online discussion and online “tweets” are NOT the same as the behavior you’re actually trying to gain insight into. Whether people are into a brand is not accurately assessed by what they SAY about it; it’s what they DO about it. Do people BUY the brand? Do they SHOW the brand/products to others? Do they consume the brand?

The above examples are not predictive in any way; they are reflective. Twitter can’t do much better than Google, blogs, and news outlets at ferreting out important events, people, products, and places before they are important. Twitter, in some respects, gets in its own way, because the amount of “tweet” activity is not always a great indicator of importance. In fact, some of the most mundane events, people, and places get a ton of Twitter activity compared to really important stuff.

Twitter is also highly biased. It is predominantly used by the technical/digital elite. Yes, it’s growing quickly, but it still doesn’t reflect more than perhaps 1–2% of the US population. Heck, even Google traffic is highly biased, as only about 50% of the US population uses search every day. You say: so what, it will get there! No, it won’t. Consider the following examples.

Twitter can’t tell you ANYTHING about the real stuff of life, like baby food, the peanut recall, or your local hospital. (I leave it as an exercise for the reader to try these searches on Google and compare the results.) With more usage, finding the real information only gets harder. New tools to parse and organize tweets must be created, which implies you’ll need computational time to parse it all, destroying the “real time part” the TechCrunch authors and this quoted blogger adore. Beyond just filtering and categorizing, an engine needs some method of finding the “accurate” and “authoritative” data streams. Twitter provides no mechanism for this, and adding one would destroy its general user value (you don’t want to have to compete with more authoritative twitterers, do you?). Twitter search would need to become more “Googly” to matter at all in some bigger world or commerce sense.
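To make the computational cost concrete, here’s a deliberately naive Python sketch of “parsing and organizing” tweets into topics by token overlap (min_overlap is an invented knob). Notice that it has to buffer a batch of tweets before it can emit anything, which is exactly where the real time part goes to die.

```python
def group_by_topic(tweets, min_overlap=2):
    """Greedy grouping of tweet texts into topics by shared tokens.

    We must hold a whole batch of tweets before grouping, so the
    output necessarily lags the stream: filtering costs time.
    """
    topics = []  # list of (token_set, [tweet, ...]) pairs
    for text in tweets:
        tokens = set(text.lower().split())
        for topic_tokens, members in topics:
            if len(tokens & topic_tokens) >= min_overlap:
                members.append(text)
                topic_tokens |= tokens  # grow the topic's vocabulary
                break
        else:
            topics.append((tokens, [text]))
    return topics
```

And this does nothing at all about authority or accuracy; that would cost even more computation on top.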

TechCrunch correctly identifies this problem:

An undifferentiated thought stream of the masses at some point becomes unwieldy. In order to truly mine that data, Twitter needs to figure out how to extract the common sentiments from the noise (something which Summize was originally designed to do, by the way, but it was putting the cart before the horse—you need to be able to do simple searches before you start looking for patterns).

So where does Twitter really sit, and does it have value? It is a replacement for the newsgroup, the chatroom, and some IM functions. It has value, obviously, because people use it. Users replace their other forms of conversation with twittering. Broadcasters and publishers are also replacing other forms of broadcasting/pushing messages with Twitter. This, too, has value, in that Twitter better fits the toolsets more and more of us sit in front of all day long. It’s a somewhat “natural” evolution to find a new mechanism of broadcasting when a medium (terminals attached to the network) reaches critical mass. The Hudson River landing is a better example of this shift in broadcasting method than it is of some crack in the value of Google, or of the value of “real time search.” If that logic were sound, CNN would have been hailed as a “Google slayer,” as it is more real time than Twitter (yes, CNN uses Twitter and iReport and citizen journalism…). In fact, CNN is the human-powered analytic filter required to make sense of real time streams of data. News journalists capture all that incoming data, find the useful and accurate information, summarize, and rebroadcast.

If I were an operator of IM networks, or a business that relied on chatrooms and forums, I’d be worried. Google, news outlets, and other portals should not be worried. They don’t need more contextless content to sift through; they do just fine without yet another source that is 99% throw-away thoughts.

I, myself, am not a Twitter-hater. It is a great media success. It probably can make money. However, it doesn’t represent some shift in social networking, high tech, or communications, much less in how we interact. Anyone who claims that must be delusional, or hoping to make a buck or two, which is fine too.

TechCrunch concludes with the real question here:

But what is the best way to rank real-time search results—by number of followers, retweets, some other variable? It is not exactly clear. But if Twitter doesn’t solve this problem, someone else will and they will make a lot of money if they do it right.

Is there a possibility of generating a collective thoughtstream? A big Internet brain? Sure, in some loose sense that’s already happened, and Twitter (and other tools) is just a piece of the puzzle. The human brain doesn’t have just one piece you can point to as the main part – the CPU that makes sense of everything. Why should we think something less complicated (the Internet has far fewer nodes and interconnections, and far higher energy demands, than a single human brain!) would have a central core (service) providing some dominant executive function? There are several reasons this physically can’t happen. The main one, mentioned earlier, is that making sense of random streams of data requires computational time. The more inputs a system takes in, the more computation it requires to make sense of them (or to filter them in the first place). New information, or new types of information, must first be identified as potentially useful before it can even be included for summarization. And so on. The more useful you need to make entropic (random) data, the more energy you need to expend. Raw data streams trend toward entropy, in both an informatic and a thermodynamic sense.
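A back-of-the-envelope illustration of that scaling claim (hypothetical, not a model of any real system): if making sense of a stream means comparing items against each other for agreement, the work grows roughly quadratically with the number of inputs.

```python
def corroboration_pairs(items, agree):
    """Count agreeing pairs among stream items.

    O(n^2) comparisons: doubling the input stream roughly
    quadruples the work needed to 'make sense' of it.
    """
    count = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if agree(items[i], items[j]):
                count += 1
    return count
```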

In other words, no one company is going to figure out how to rank real time search results – it can’t be done. Perhaps more damning: it doesn’t need to be done. There’s no actual value in searching real time. The idea of searching is that there is some order (filter) to be applied, and when something happens, as John Borthwick correctly claims, “relevancy is driven mostly by time.” So Twitter already has the main ordinal, time, as its organizing principle. Perhaps TC and John Borthwick want an “authority” metric on tweet search; however, you can’t physically add one without destroying the value of real time. No algorithm accounting for authority will be completely accurate – there’s a tradeoff between real time and authority. (PageRank has a similar problem with authority versus raw relevancy: no-name authors and pages often have EXACTLY what you want, but you can’t find them. This is a more damaging problem in “real time” scenarios, where you want the RIGHT data at the RIGHT TIME.)
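The tradeoff is easy to see in a toy scoring function (the weights and the decay constant here are invented): the more weight you give authority, the less the ranking respects time, and pure real time is just the degenerate case where authority counts for nothing.

```python
import math

def score(age_seconds, authority, authority_weight=0.5):
    """Blend recency with authority (both scaled to [0, 1]).

    authority_weight = 0.0 -> pure real time (newest first);
    authority_weight = 1.0 -> pure authority, and the real time
    ordering disappears entirely.
    """
    recency = math.exp(-age_seconds / 3600.0)  # decays over ~an hour
    return (1 - authority_weight) * recency + authority_weight * authority
```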

If Twitter could plant an authoritative twitterer at every important event and place, real time twitter search might become real.

Oh wait, that’s called Journalism – we already have 1000s of sources of that.

Read Full Post »