Posts Tagged ‘technology’

Recently there’s been hubbub around Artificial Intelligence (AI) and our impending doom if we don’t do something about it. While it’s fun to scare each other in that horror/sci-fi movie kind of way, there isn’t much substance behind the various arguments floating about regarding AI.

The fears people generally have are about humans losing control, and more specifically about an unrestrained AI exacting its own self-derived and likely world-dominating objectives on humankind and the earth. These fears aren’t unjustified in a broad sense. They simply don’t apply to AI, to either the artificial part or the intelligence part. More importantly, the fears have nothing to do with AI and everything to do with the age-old fear that humankind might not be the point of all existence.

We do not have a functional description of intelligence, period. We have no reliable way to measure it. Sure, we have IQ tests, other tests of “ability” and “awareness,” and the Turing Test, but none of these actually tells you anything about “intelligence” or what might actually be going on with things we consider intelligent. We can measure whether some entity accomplishes a goal, performs some behavior reliably, or demonstrates new behavior acquisition in response to changes in its environment. But none of those things establishes intelligence or the presence of an intelligent being. They certainly don’t establish something as conscious, self-aware, moral, purpose-driven, or any other reified concept we choose as a sign of intelligence.

The sophisticated fear peddlers will suggest that all of the above is just semantics, that we all know in a common-sense way what intelligence is. This is simply not true. We don’t. Colleges can’t fully count on the SAT, phrenology turned out to be horribly wrong, the Turing Test isn’t a rigorous definition, and so on. Go ahead, do your own research on this. No one has figured out just exactly what intelligence is and how to measure it. Is it being able to navigate a particular situation really well? Is it the ability to assess the environment? Is it massive pattern matching? Is it particular to our 5 senses? One sense? Is it brain-based? Is it pure abstract thinking? What exactly does it mean to be intelligent?

It’s worse for the concept of artificial. It’s simply the wrong term. Artificial comes from very old ideas about what’s natural and what’s not. What’s real is what the earth itself does, not the work of some human process or machine. Artificial things are made of metals, powered by electricity and coal, full of gears and circuits. Errrrr. Wait… many things the earth makes are made of metal, and their locomotion comes from electricity or the burning of fuel, like coal. In fact, humans were made by the earth/universe. The division between artificial and natural is extremely blurry, and it is non-existent for a reified concept like intelligence. We don’t need the term artificial nor the term intelligence. We need to know what we’re REALLY dealing with.

So here we are… being pitched a fearsome monster of AI which has zero concrete basis, no way to observe it, and zero examples of its existence as described in most discussions. But still the monster is in the closet, right?

For the sake of argument (which is all these counter-factual future predictions of doom are) let’s assume there is some other entity, or set of entities, that is more “intelligent” than humans. We will need to get a sense of what that would look like.

Intelligence could be loosely described as the presence of complex, nuanced behaviors exhibited in response to the environment. Specifically, an entity is observed as intelligent if it responds effectively to changing conditions (internal as well as environmental). It must recognize changes, adjust to those changes, and evaluate the consequences of the changes it makes as well as any changes the environment makes in response.

What seems to be the basis of intelligent behavior (ability to respond to complex contingencies) in humans comes from the following:

  • Genetic and Epigenetic effects/artifacts evolved from millions of years of evolutionary experiments e.g. body structures, fixed action patterns
  • Sensory perception from 5 basic senses e.g. sight, touch, etc
  • Ability to pattern match in a complex nervous system e.g. neurological system, various memory systems
  • Cultural/Historical knowledge-base e.g. generationally selected knowledge trained early into a new human through child rearing, media and school
  • Plastic biochemical body capable of replication, regeneration and other anti-fragile effects e.g. stem cells, neuro-plasticity
  • More really complex stuff we have yet to uncover

Whatever AI we think spells our demise will most likely have something like the above (something functionally equivalent). Right? No? While there is a possibility that some completely different type of “intelligent” being exists, what I’m suggesting is that the general form of anything akin to “intelligent” would have these features:

  • Structure that has been selected for fitness over generations in response to its environment and overall survivability
  • Multi-modal Information perception/ingestion
  • Advanced pattern recognition and storage
  • Knowledge reservoir (previously explored patterns) to pull from that reduces the need for a new entity to explore the space of all possibilities for survival
  • Resilient, plastic replication mechanisms capable of abstract replication and structural exploration

And distilling that down to even rawer abstractions:

  • multi-modal information I/O
  • complex pattern matching
  • large memory
  • efficient replication with random mutation and mutation from selected patterns
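Those four abstractions are loose enough that even a toy program can exhibit them. A minimal sketch, assuming nothing beyond the list above (the class and method names here are invented purely for illustration):

```python
import random

class Entity:
    """Toy entity exhibiting the four abstract properties above."""

    def __init__(self, patterns=None):
        # large memory: a store of previously encountered patterns
        self.memory = set(patterns or [])

    def perceive(self, *channels):
        # multi-modal information I/O: accept input from any number of channels
        return [signal for channel in channels for signal in channel]

    def match(self, signals):
        # pattern matching (here, trivially: membership in memory),
        # then storing anything new
        known = [s for s in signals if s in self.memory]
        self.memory.update(signals)
        return known

    def replicate(self, mutation_rate=0.1):
        # replication: inherit selected patterns, occasionally mutate
        inherited = {p for p in self.memory if random.random() > mutation_rate}
        child = Entity(inherited)
        if random.random() < mutation_rate:
            child.memory.add(('mutation', random.random()))
        return child

parent = Entity(['light', 'heat'])
known = parent.match(parent.perceive(['light'], ['sound']))  # recognizes 'light'
child = parent.replicate()
```

The point of the sketch is not that this object is intelligent, but that nothing in the four abstractions rules it in or out, which is exactly the problem with the term.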

What are the implementation limits of those abstracted properties? Turns out, we don’t know. It’s very murky. Consider a rock. Few of us would consider a plain old rock intelligent. But why not? What in the above does it fail to fulfill? Rocks adjust to their environment – think of erosion and the overall rock cycle. Their structures contain eons of selection and hold a great deal of information – think of how the environment is encoded in the generational build of a rock, its crystal structure and so forth. Rocks have an efficient replication scheme – again, think of the rock cycle, the material of a rock being able to be absorbed into other rocks and so forth.

Perhaps you don’t buy that a rock is intelligent. Yet there’s nothing in my description of intelligence, or in the reified other definitions of intelligence, that absolutely says a rock isn’t intelligent. It seems to fulfill the basics… it just does so over timescales we’re not normally looking at. A rock won’t move through a maze very quickly or solve a math problem in our lifetime. I posit, though, that it does do these things over long expanses of time. The network of rocks that forms mountains and river beds and ocean bottoms and the entire earth’s crust and the planets of our solar system exhibits the abstract properties above quite adeptly. Again, just at spacetime scales we’re not used to talking about in these types of discussions.

I could go on to give other examples such as ant colonies, mosquitoes, dolphins, pigs, The Internet and on and on. I doubt many of these examples will convince many people as the reified concept of “intelligence as something” is so deeply embedded in our anthropocentric world views.

And so I conclude that those who are afraid, and want others to be afraid, of AI are actually afraid of things that have nothing to do with intelligence – that humans might not actually be the alpha and omega, and that we are indeed very limited, fragile creatures.

The first fear, the anthropocentric one, is hard to dispel. It’s personal. All the science and evidence from observation of the night sky and the depths of our oceans makes it very clear humans are a small evolutionary branch of an unfathomably large universe. But each of us struggles with our view from within – the world literally, to ourselves, revolves around us. Our vantage point is such that from our 5 senses everything comes into us. Our first-person view is such that we experience the world relative to ourselves. Our memories are made of our collections of first-person experiences. Our body seems to respond from our own relation to the world.

And so the fear of an AI that could somehow end our “top of the food chain” position makes sense while still being unjustified. The reality is… humans aren’t the top of any chain. The sun will one day blow up and quite possibly take all humans out with it. Every day the earth spins and whirls and shakes with winds, heat, snow, quakes, and without intention takes humans out in the process. And yet we don’t fear those realities the way folks are trying to get us to fear AI. The key to this fear seems to be intention. We’re afraid of anything that has the INTENT, the purpose, the goal of knocking humans, and our own selves, out of the central position of existence.

And where, how, when, why would this intent arise? Out of what does any intent arise? Does this intent actually exist? We don’t really have any other data store to try to derive an answer to this other than humanity’s and our own personal experience and histories. Where has the intent to dominate or destroy come from in humans? Is that really our intent when we do it? Is it a universal intent of humankind? Is it something intrinsically tied to our make up and relation to the world? Even if the intent is present, what are its antecedents? And what of this intent? If the intent to dominate others arises in humans how are we justified in fearing its rise in other entities?

Intent is another reified concept. It doesn’t really exist or explain anything. It is a word that bottles up a bunch of different things going on. We have no more intention than the sun. We behave. We process patterns and make adjustments to our behavior – verbal and otherwise. Our strategies for survival change based on contingencies, and sometimes our pattern recognition confuses information – we make false associations about what is threatening our existence or impeding our basic needs and wants (chasing things that activate our adrenaline and dopamine production…). It’s all very complex. Even our “intentions” are complex causal thickets (a concept I borrow from William Wimsatt).

In this complexity it’s all incredibly fragile. Fragile in the sense that our pattern recognition is easily fooled. Our memories are faulty. Our bodies get injured. Our brains are only so plastic. And the more our survival depends on rote knowledge, the less plastic our overall machinery can be. Fragile, as referred to here, is a very general concept about the stability of any given structure or pattern – belief system, behavioral schedule, biochemical relations, rock formations… any structure.

The fear involved in fragility and AI is really about less complex entities that are highly specialized in function, sensory scope and pattern matching ability. The monster presented is a machine, or group of machines, hell-bent on human subordination and destruction, armed with weaponry and with no fragility in its function or intent – it cannot be diverted by itself nor by an outside force.

We fear the accidental AI. The AI that accidentally happens into human destruction as a goal.

In both cases it is not intelligence we fear but instead simple exploitation of our own fragility.

And yet, as appealing as a fear this seems to be, it is also unjustified. There’s nothing that suggests even simple entities can carry out single “purpose” goals in a complex network. The complexity of the network itself prevents total exploitation by simple strategies.

Consider the wonderful game-theoretic models built on very simple games like the Prisoner’s Dilemma, and how pure exploitation strategies simply do not work over the long term as domination strategies. It turns out that even in very simple situations, domination and total exploitation are a poor survival strategy for the exploiter, while reciprocal strategies like Tit-for-Tat persist.
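The intuition can be checked with a few lines of simulation. The sketch below assumes the standard Axelrod-style payoffs (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for defecting against a cooperator); the function names are my own:

```python
# Payoffs for (my_move, their_move); C = cooperate, D = defect.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play(strategy_a, strategy_b, rounds=200):
    """Run an iterated Prisoner's Dilemma and return both total scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def always_defect(own, other):
    # total exploitation: defect no matter what
    return 'D'

def tit_for_tat(own, other):
    # cooperate first, then mirror the opponent's last move
    return other[-1] if other else 'C'

coop, _ = play(tit_for_tat, tit_for_tat)            # 600 each
exploit, victim = play(always_defect, tit_for_tat)  # 204 vs 199
```

Over 200 rounds the pure exploiter scores 204 against Tit-for-Tat, while two Tit-for-Tat players score 600 each: total exploitation wins the first round and loses the long game.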

So domination itself becomes a nuanced strategy subject to all sorts of entropy, adjustments and complexity.

Maybe this isn’t a convincing rebuttal. After all, what if someone created a really simplistic AI and armed it with nuclear warheads? Certainly even a clumsy system (“at X time or under Y conditions, nuke everything”) armed with nukes would have the capability to destroy us all. Even this isn’t a justified fear. In the first place, it wouldn’t be anything at all AI-like in any sense if it were so simple. So fearing AI is a misplaced fear. The fear is really about the capability of a nuke. Insert bio-weapons or whatever other WMD one wants to consider. In all of those cases it has nothing to do with the wielder of the weapon and its intelligence, and everything to do with the weapon itself.

However, even a total fear of WMDs is myopic. We simply do not know what the fate of humankind, or of the earth, would be should some entity launch all-out strategies of mass destruction. Not that we should attempt to find out, but it seems a tad presumptuous for anyone to pretend to know exactly what would happen at the scales of total annihilation.

Our only possible data point for the total annihilation of a species on earth might be that of the dinosaurs and other ancient species. We have many theories about their mass extinction, but we don’t really know; and that extinction took a very long time, was selective, and didn’t end in total annihilation (hell, it likely led to humanity… so…) [see this for some info: http://en.wikipedia.org/wiki/Cretaceous%E2%80%93Paleogene_extinction_event].

The universe resists simple explanations and that’s likely a function of the fact that the universe is not simple.

Complex behavior of adaptation and awareness is very unlikely to bring about total annihilation of humankind. A more likely threat is from simple things that accidentally exploit fragility (a comet striking the earth, blotting out the sun so that anything that requires the sun to live goes away). It’s possible we could invent and/or have invented simple machines that can create what amounts to a comet strike. And what’s there to fear even in that? Only if one believes that humanity, as it is, is the thing to protect is any fear about its annihilation by any means, “intelligent” or not, justified.

Protecting humanity as it is, is a weak logical position as well, because there’s really no way to draw a box around what humanity is, or whether there’s some absolute end state. Worse than that, it strikes me personally that people’s definition of humanity, when promoting various fears of technology or the unknown, is decidedly anti-aware and flies counter to a sense of exploration. That isn’t a logical position – it’s a choice. We can decide to explore or decide to stay as we are. (Maybe)

The argument here isn’t for an unrestrained pursuit of AI. Not at all. AI is simply a term, not something to chase or prevent unto itself – it literally, theoretically, isn’t actually a thing. The argument here is for restraint through questioning and exploration. The argument is directly against fear itself. Fear is the twin of absolute belief – the confusion of pattern recognition spiraling into a steady state. A fear is only justified by an entity against an immutable pattern.

For those that fear AI: you must, by extension, fear the evolution of human capability – and the capability of any animal, or any network of rocks, etc. And the reality is… all those fears will never result in any preventative measures against destruction. Fear is stasis. Stasis typically leads to extinction – eons of evidence make this clear. Fear-peddlers are really promoting stasis, and that’s the biggest threat of all.

Read Full Post »

Computing technology enables great shifts in perspective. I’ve long thought about sharing why I love computing so much. Previously I’m not sure I could have articulated it without a great deal of confusion and vagueness, or worse, zealotry. Perhaps I still can’t communicate it well, but nonetheless I feel compelled to share.

Ultimately I believe that everything in existence is computation but here in this essay I am speaking specifically about the common computer on your lap or in your hands connected to other computers over the Internet.

I wish I could claim that computers aren’t weapons or put to use in damaging ways or don’t destroy the natural environment. They can and are used for all sorts of destructive purposes. However I tend to see more creation than suffering coming from computers.

The essential facets of computing that give it so much creative power are interactive programs and simulation. With a computer, bits can be formed and reformed without a lot of production or process overhead. It often feels like there’s an endless reservoir of raw material and an infinite toolbox (which, in reality, is true!). These raw materials can be turned into a book, a painting, a game, a website, a photo, a story, a movie, facts, opinions, ideas, symbols, unknown things and anything else we can think up. Interactive programs engage users (and other computers) in a conversation or dance. Simulation gives us all the ability to try ideas on and see how they might play out or interact with the world. All possible from a little 3-4lb slab of plastic, metal and silicon flowing with electricity.

Connecting a computer to the Internet multiplies this creative power through sharing. While it’s true a single computer is an infinite creative toolbox, the Internet is a vast, searchable, discoverable recipe box and experimentation catalog. Each of us is limited by how much time we have each day, but when we are connected to millions of others all trying out their own expressions, experiments and programs, we all benefit. Being able to cull from this vast connected catalog allows us all to try, retry, reform and repost new forms that we may never have been exposed to. Remarkable.

Is there the same creative power out in the world without computers? Yes and no. A computer is probably the most fundamental tool ever discovered (maybe we could call it crafted, but I think it was discovered). Bits of information are the most fundamental material in the multiverse. Now, DNA, proteins, atoms, etc. are also fundamental or primary (think: you can build up incredible complexity from fundamental materials). The reason I give computers the edge is that the things we prefer to make, we can make within our lifetime, and often in much shorter timeframes. It would take a long time for DNA and its operating material to generate the variety of forms we can produce on a computer.

Don’t get me wrong, there’s an infinite amount of creativity in the fundamental stuff of biology, and some of it happens on much shorter than geological timescales. You could easily make the case that it would take a traditional computer longer than biology to produce biological forms as complex as animals. I’m not going to argue that. Nor do I ignore the idea that biology produced humans, which produced the computer, so really biology is possibly more capable than the computer. That said, I think we’re going to discover over time that computation is really at the heart of everything, including biology, and that computers as we know them probably exist in abundance out in the universe. That is, computers are NOT dependent on our particular biological history.

Getting out of the “can’t really prove these super big points about the universe” talk and into the practical – you can now get a computer with immense power for less than $200. This is a world-changing reality. Computers are capable of outputting infinite creativity and can be obtained and operated for very modest means. I suspect that price will come down to virtually zero very soon. My own kids have been almost exclusively using Chromebooks for a year for all things education. It’s remarkably freeing to be able to pull up materials, jump into projects, research and create anywhere, at any time. Escaping the confines of a particular space and time has the great side effect of encouraging learning and work everywhere.

There are moments where I get slight pangs of regret about my choices to become proficient in computing (programming, designing, operating, administrating). There are romantic notions of being a master painter or pianist or mathematician and never touching another computing device again. In the end I still choose to master computing because of just how much it opens me up creatively. Almost everything I’ve been able to provide for my family and friends has come from having a joyous relationship with computing.

Most excitingly to me… the more I master computers, the more I see the infinitude of things I don’t know and just how vast computing creativity can be. There aren’t a lot of things in the world that have that effect on me. Maybe I’m simply not paying attention to other things, but I strongly suspect it’s actually some fundamental property of computing – There’s Always More.

Read Full Post »

The aim of most businesses is to create wealth for those working at them. Generally it is preferred to do this in a sustainable, scalable fashion so that wealth may continue to be generated for a long time. The specific methods may involve seeking public valuation in the markets, selling more and more product directly and profitably, private valuation and investment, and more. The aim of most technology-based companies is to make the primary activity and product of the business involve technology. The most common understanding of “technology” refers to information technology, biotechnology, advanced hardware and so forth – i.e. tools or methods that go beyond long-established ways of doing things and/or analog approaches. So the aims of a technology company are to create and maintain sustainable, scalable wealth generation through technological invention and execution.

Perhaps there are better definitions of terms and clearer articulations of the aims of business, but this will suffice to draw out an argument for how technology companies could fully embrace the idea of a platform and, specifically, a technological platform. Too often the technology in a technology company exists solely in the end product sold to the market. It is a rare technology company that embraces technological thinking everywhere – re: big internet media still managing advertising contracts through paper and faxes, expense reports through papers stapled to static Excel spreadsheets, and so on. There are even “search” engine companies that are unable to search over all of their own internal documentation and knowledge.

The gains of technology are significant when applied everywhere in a company. A technological product produced by primitive and inefficient means is usually unable to sustain its competitive edge, as those with technology in their veins quickly catch up to any early lead by a first, non-technical mover. Often what the world sees on the outside of a technology company is The Wizard of Oz: a clever and powerful façade of technology, a vision of smoking machines doing unthinkable things, when in reality it is the clunky hubbub of a duct-taped factory of humans pulling levers and making machine noises. If the end result is the same, who cares? No one – if the result can be maintained. But growing a human factory of tech-façade making never scales. Nor does it scale to turn everything over to the machines.

What’s contemplated here is a clever and emergent interaction of human and machine technology, and how a company goes from merely using technology to becoming a platform. Consider an example of a company that produces exquisite financial market analysis for major brokerage firms. It may be that human analysts are far better than algorithms at making the brilliant and challenging pattern-recognition observations about an upcoming swing in the markets. There is still a technology to employ here. Such a company should supply the human analysts with tools and methods that increase the rate at which they can spot patterns, reduce the cost of spreading the knowledge where it needs to go, and complete the feedback loop on hits and misses. There is no limit to how deeply a company should look at enhancing human ability. For instance, how many keystrokes does it take for an analyst to key in their findings? How many hops does a synthesized report go through before hitting the end recipient? How does the temperature of the working space impact pattern recognition ability? Perhaps all those details have far more impact on sustainable profit than tuning a minute facet of some analytic algorithm.

The point here is that there should be no facet of a business left untouched by technology enhancement. Too often technology companies waste millions upon millions of dollars updating their main technology product only to see modest or no gain at all. The most successful technology companies of the last 25 years have all found efficiencies through technology mostly unseen by end users and these become their competitive advantages. Dell – ordering and build process. Microsoft – product pre-installations. Google – efficient power sources for data centers. Facebook – rapid internal code releases. Apple – very efficient supply chain. Walmart – intelligent restocking. Amazon – everything beyond the core “ecommerce”.

In a sense, these companies recognized their underlying “platform” soon after recognizing their main value proposition. They learned quickly enough to scale that proposition – and to spend a solid blend of energy on the scale and the product innovation. A quick aside – scale here is taken to mean how efficiently a business can provide its core proposition to the widest, deepest customer base. It does not refer solely to hardware or supply chain infrastructure, though often that is a critical part of it.

One of many interesting examples of such platform thinking is the Coors Brewing Company back in its heyday. Most people would not consider Coors a “technology” company. In the 1950s, though, it changed many “industries” with the introduction of the modern aluminum can. This non-beer-related technology reduced the cost of operations, created a recycling sub-industry, reduced the problem of tin cans damaging the beer’s taste, and so on. It also made it challenging for several competitors to compete on distribution, taste and production costs. This isn’t the first time the Coors company put technology to use in surprising ways. They used to build and operate their own power plants to reduce reliance on non-optimal resources and to have better control over their production.

Examples like this abound. One might conclude that any company delivering product at scale can be classified as a technology company – they all will have a significant platform orientation. However, this does not make them a platform company.

What distinguishes a platform company from simply a technology company is that the platform is provided to partners and customers to scale their businesses as well. These are the types of companies where the product itself becomes scale. These are the rare, super valuable companies: Google, Apple, Intel, Facebook, Microsoft, Salesforce.com, Amazon and so on. These companies often start by becoming highly efficient technically in the production of their core offering, and then take that scale and license it to others. The value generation gets attributed to the scale provider appropriately, in that it becomes a self-realizing cycle. The ecosystem built upon the platform of such companies demands that the platform operator continue to build the platform so they too may scale. The platform operator only scales by giving more scale innovation back to the ecosystem. Think Google producing Android and offering Google Analytics for free. Think Facebook and Open Graph, and how brands rely on their Facebook pages to connect and collect data. Think Amazon and its marketplace and cloud computing services. Think Microsoft and MSDN/developer resources/cloud computing. Think Apple and iTunes, the App Store and so on.

It’s not all that easy though! There seems to come a time with all such platform companies that a critical decision must be made before it’s obvious that it’s going to work. To Open The Platform Up To Others Or Not? Will the ecosystem adopt it? How will they pay for it? Can we deal with what is created? Are we truly at scale to handle this? Are we open enough to embrace the opportunities that come out of it? Are we ready to cede control? Are we ready to create our own competitors?

That last question is the big one. But it’s the one to embrace to be a super valuable, rare platform at the heart of a significant ecosystem. And it happens to be the way to create a path to sustainable wealth generation that isn’t a short-lived parlor trick.

Read Full Post »

Almost all humans do all the following daily:

  • Eat
  • Drink Water
  • Sleep
  • Breathe
  • Think about Sex/Get Sexually Excited
  • Communicate with Close Friends and Family
  • Go to the bathroom
  • People Watch
  • Groom

Almost all humans do the following very regularly:

  • Work (hunt, gather, desk job, factory job, sell at the market)
  • Have sex or have sexual activities
  • Listen to or play music
  • Play
  • Take inventory of possessions (count, tally, inspect, store)

A good deal of humans do the following regularly:

  • Go to school/have formal learning (training, go to school, college, apprenticeship)
  • Cook/Prepare Food
  • Read
  • Compete for social status
  • Court a mate

Fewer humans do the following occasionally:

  • Travel more than a few miles from home
  • Write (blog, novel, paper)
  • Eat away from home
  • Stay somewhere that isn’t their home
  • Exercise outside of work tasks (play sports, train, jog)

I’m sure we can think up many more activities in the bottom category, but probably not many more in the top 3 categories.

For a technology to be a mass-market success it has to, at its core, be about behaviors in the top categories. And it has to integrate with those behaviors in a very pure way, i.e. don’t try to mold the person; let the person mold the technology to their behavior.

I define mass-market success as “use by more than 10% of the general population of a country.” Few technologies and services achieve this. But those that do all deal with these fundamental human activities (FHAs): Twitter, Google, MySpace, Facebook, Microsoft, TV, radio, the telephone, the cellular phone… The more of those activities they deal with, the faster they grow. Notice also that almost all of these examples do not impose a set of specific use paths on users, e.g. Twitter is just a simple messaging platform that you can use in a bazillion contexts.

It’s not about making everything more efficient or more technologically beautiful. It is about humans doing what they’ve done for 100,000+ years, with contemporary technology. If you want to build a successful service, you have to integrate with users and evolve behaviorally alongside them.

Read Full Post »

Whether it’s “valid” or not, humans (and probably most animals) make associations of new, unknown things with similar-seeming known things. In fact, this is the basis of communication.

In the case of discussing new websites/services/devices like Wolfram|Alpha, Bing, Kindle, iPhone, Twitter and so on, it’s perfectly reasonable to associate them with their forebears. Until users/society get comfortable with the new thing and have a way of usefully talking about it, making comparisons to known things is effective in forming shared knowledge.

My favorite example of this is Wikipedia and wikis. What the heck is a wiki? And what the heck is Wikipedia, based on this wiki? Don’t get me wrong – I know what a wiki is. But to someone who doesn’t, hasn’t used one, and hasn’t contributed to one, it’s pretty hard to describe without giving them anchors based on stuff they do know: “online encyclopedia,” “like a blog but more open”… (for fun, read how media used to talk about Wikipedia, more here)

A more recent example is Twitter. What is it like? A chat room? A social network? A simpler blog? IM? Right… it’s all that and yet something different. It’s Twitter. You know it when you use it.

Just as in nature, new forms are always evolving in technology. Often new tech greatly resembles its ancestors. Other times it doesn’t.

In the specific case of Wolfram|Alpha and Bing/Google, they share a common interface in the form of the browser and an HTML text field. They share a similar foundation in trying to make information easy to access. The twist is that Wolfram|Alpha computes over retrieved information and can actually synthesize (combine, plot, correlate) it into new information. Search engines retrieve information and synthesize ways to navigate it. Very different end uses, often very complementary. Wikipedia uses humans to synthesize information into new information, so it shares some concepts with Wolfram|Alpha. Answers.com and other answer sites are typically a mash-up of databases, and they share with web search engines the concept of synthesizing ways to navigate data.

All of these are USEFUL tools and they ARE INTERCONNECTED. None of them will replace the others. Likely they will all co-evolve, and we will evolve our ways of talking about them.

Read Full Post »

There are many writings in science, law and politics these days that amount to a plea to use the laws that exist in the natural environment to better accomplish common objectives. The argument goes that in order to extend our life on earth and boost the quality of the 28,500 days* we have, we need to play to our strengths. Right now, our strength, and the strength of many industrialized nations, comes from science and technology.

In a world view and in a regional view, it would be irresponsible if we didn’t use technology and science. That technology includes an empirical approach to behavior that rubs the status quo proponents the wrong way. Use of what is known about how we learn and how we behave can help remedy some of the ills and dilemmas that we’ve put ourselves in. It can also prevent others from occurring. Just as assuredly, the changes will cause new forms of discomfort. The individual and the society will have to work to deal individually, culturally, politically and environmentally with an empirical behavior analysis that takes a less emotional approach to behavior. Compared to the slaughter of people and cultures, this discomfort is actually a reprieve from what we can expect without change in the voodoo logic and the mumbo-jumbo of 3,000-year-old superstitions and traditions.

This isn’t a new idea, just a new appeal to the world inhabited by users of science and technology. Someone said in the early ’70s:

“2500 years ago it might have been said that man understood himself as well as any other part of his world. Today, he understands himself least.”

Look at what has happened since that statement…! That makes it particularly poignant when amplified repeatedly by the disturbing events each of us has had to endure in that period. Things that could have been prevented, avoided or better managed if there had been a different view of “man.”

Due to technology, society gets to see war in real time; gets to see live video-cam death in hospital wards, in police stations, on campuses and via natural disasters. We get instant access and feedback in almost every media channel available to man. Thus, to those who posit that a scientific approach to behavior is a threat to our way of life: look around. Those who claim a scientific approach to behavior is not possible should be reminded that the same things were said about chemistry, biology and physics as some brave explorers went about figuring out how those parts of the world work. Anyone frightened by the science that keeps him or her alive to protest is in a vulnerable state, and is dangerous.

Without a comfort zone almost all of us are scared. Within a comfort zone we may still be using leeches to clean blood, sage to protect us from evil spirits or apricot pits to cure cancer. [Yes, some reading this may still be favoring one or more of these remedies, but…]

To prevent some of the things we don’t like from happening, we’ll have to give up something. Nothing in life or Nature is free. Chemistry, biology and physics gave up ‘stuff.’ Now it is our turn. And, as frightening as it is, we need to find others who can help us make the adjustments over time. Others will help us learn to attend to the natural rules that have always been there but were ignored. Others can help dissolve conflicts over competing beliefs and traditions. We humans are great at adjustments and adaptations, so it is clear this can work. Wasn’t it an adjustment, how you came to read blogs on the Internet? However, if you like the state of the world, and the relationships you have with the institutions, agencies and forces that inhabit your world, then you don’t need to do anything. You get to take the cultural Quaalude and pull over. Ok.

Read Full Post »

The ritual:

Using speech-to-text software trained to my voice, I get to process the world’s media (including environmental sounds) in a way that usurps the originator’s intent and content. The software is as imperfect as my notes are; it misrepresents sentiments as well as police sirens as text, translating the mumblings as dialogue. What’s more, in its translations it adds (hummms, ahs, ands & LOL) to make another unique piece of content for others to scan. Think of the process as an interactive shoreline shaped by initial conditions and interrupting floods of events. What you end up with is an electronic oracle that spews out unclaimed media prophecies. Sometimes you get hooch and sometimes you get scotch!

The rules:

The raw text file is generated from an environmental source and is recreated into a stream of content that needs to be clipped and cut like a shrub, frequently not taking the form it was supposed to have when it was recorded. All ordered text remains hallowed, but words are cut, added, indented and otherwise dressed for the occasion as you remember it. Only afterthoughts are added, robustly dotted with parentheses and punctuation in the attempt to capture the nuance that is lost in the moments. When complete, it is no more recognizable than that uncle who moved to Wisconsin after his parole.

The results:

Our reliance on technology to interpret and host the world’s events is colliding with our ability to absorb, analyze, reflect and proclaim. Good. Are these gizmos mutating our perceptions or making us own them? Beats me… Maybe now we can witness that what was, wasn’t as we thought. That which never happened, could’ve. Somehow I think it is like the NSA/CSS threat: there is only the hint of something important to be gleaned from the abyss of bytes.

Next up, self-talk recorders…it hurts my amygdala just considering it.

Read Full Post »
