
Archive for the ‘grid computing’ Category

And I have to start this essay with a simple statement: it is not lost on me that all of the above is 100% derived from my own history, studies, jobs, artworks, and everything else that goes into me.  So maybe this is just a theory of myself, or not even a theory, but yet another expression in a lifetime of expressions.  At the very least I enjoyed 20 hours of re-reading some great science, crafting what I think is a pretty neat piece of artwork, and then summarizing some pondering.  Then again, maybe I’ve made strides on some general abstract level.  In either case, it’s just another contingent reconfiguration of things.

At the end I present all the resources I read and consulted during the writing (but not the editing) and the making of the embedded 19×24 inch drawing and ink painting (which has most of this essay written and drawn into it).  I drank 4 cups of coffee over 5 hours and had 3 tacos and 6 hot wings during the process.  I also listened to “The Essential Philip Glass” while sometimes watching the movie “The Devil Wears Prada” and the latest SNL episode.

——————-  

There is a core problem with all theories and theory at large – they are not The Truth and do not interact in the universe like the things they refer to.  Theories are things unto themselves.  They are tools to help craft additional theories and to spur on revised dabbling in the world.


We have concocted an unbelievable account of reality across religious, business, mathematical, political and scientific categories.  Immense stretches of imagination are required to connect the dots from the category theory of mathematics to the radical behaviorism of psychology to machine learning in computer science to gravitational waves in cosmology to color theory in art.  The theories themselves have no easy bridge – logical, spiritual or even syntactic.

Furthering the challenge is the lack of coherence and interoperability of our measurement and crafting tools.  We have always faced the challenge of information exchange between our engineered systems.  Even our most finely crafted gadgets and computers still suffer from data-exchange corruption.  And even when we do find some useful notion about the world, it is very difficult to transmit that notion across mediums, toolsets and brains.

And yet, therein lies the reveal!

A simple, yet imaginative re-think provides immense power.  Consider everything as a network – literally the simplest concept of a network: a set of nodes connected by edges.  Consider everything as part of a network, a subnetwork of the universe.  All subnetworks are connected, more or less, to the other subnetworks.  From massive stars to a single boson, everything is a node in a network, and those networks are nodes in networks of networks.  Our theories are networks of language, logic, inference, experiment and context.  Our tools are just networks of metals, atoms and light.  It’s not easy to replace a database of notions reinforced over the years with this simple idea.
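
Since the whole essay leans on that one concept, here is the entire formal content of it as a minimal Python sketch (the class and the toy “universe” are mine, purely illustrative):

```python
# everything-as-network, sketched: nodes joined by undirected edges,
# with subnetworks as node subsets plus the edges among them
class Network:
    def __init__(self, nodes=(), edges=()):
        self.nodes = set(nodes)
        self.edges = {frozenset(e) for e in edges}

    def subnetwork(self, nodes):
        """The chosen nodes plus every edge that stays inside them."""
        nodes = set(nodes) & self.nodes
        return Network(nodes, {e for e in self.edges if e <= nodes})

# a toy "universe": stars and bosons alike are just nodes
universe = Network(
    nodes=["star", "planet", "boson", "brain"],
    edges=[("star", "planet"), ("planet", "brain"), ("boson", "brain")],
)
solar = universe.subnetwork(["star", "planet"])
print(len(solar.nodes), len(solar.edges))  # 2 1
```

That’s the whole vocabulary: everything else in this essay is a claim about which subnetworks exist and how costly it is to move between their configurations.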

But really ask yourself why that is so hard, when you can believe that black holes collide and send out gravitational waves that slightly wobble spacetime 1.3 billion light years away – or, if you believe in the Christian God, consider how it’s believable that woman was created from the rib of a guy named Adam.  It’s all a bit far-fetched, but we buy these other explanations because the large network of culture, tradition, language and semiotics has built our brains and worldviews up this way.

Long ago we learned that our senses are clever biological interpreters of internal and external context.  Our eyes do not see most of “reality” – just a pretty coarse (roughly 30 frames per second) and small chunk of the electromagnetic spectrum (visible light).  In the 1930s we learned that even mathematics itself, and the computers we’d eventually construct, cannot prove many of the claims they make; we just have to accept those claims (incompleteness and the halting problem).

These are not flaws in our current understanding or current abilities.  These are fundamental features of reality – any reality at all.  In fact, without this incompleteness and clever loose interpretations of information between networks there would be no reality at all – no existence.   This is a claim to return to later.

In all theories, at the core, we are always left with uncertainty and probability statements.  We cannot state or refer to anything for certain; we can only claim some confidence that what we’re claiming or observing might, more or less, be a real effect or relation.  Even in mathematics, with some of the simplest theorems and their logical proofs, we must assume axioms we cannot prove – and while that’s an immensely useful trick, it certainly doesn’t imply that any of the axioms are actually true or refer to anything real.

The notion of probability and uncertainty is no easy subject either.  Probability is a measure of what?  Is it a measure of belief (Bayes) that something will happen given something else?  Is it a measure of lack of information – this claim carries only X% of the information?  Is it a measure of complexity?


Again, the notion of networks is incredibly helpful.  Probability is a measure of contingency.  Contingency, as defined and used here, is a notion of the connectivity of a network and the nodes within it.  There need be no hard and fast assignment of the unit of contingency – different measures are useful and instructive for different applications.  But there’s a basic notion at the heart of all of them: contingency is a cost function of going from one configuration of the network to another.

And that leads to another startling idea.  Spacetime itself is just a network (an obvious intuition from the previous statement), and everything is really just a spacetime network.  Time is not the ticks on a clock nor an arrow marching forward.  Time is nothing but a measure of the steps needed to reconfigure a network from some state A to some state B.  Reconfiguration steps are not done in time – they are time itself.
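
To make “contingency is a cost function” – and the time claim just above – concrete, here is one toy choice of unit (mine; nothing canonical about it): count the single-edge changes needed to turn one configuration into another.

```python
# a toy cost function for contingency: the number of single-edge
# additions/removals needed to reconfigure network state A into state B.
# "time" between the two states is then just this step count.

def reconfiguration_steps(edges_a, edges_b):
    a = {frozenset(e) for e in edges_a}
    b = {frozenset(e) for e in edges_b}
    return len(a ^ b)  # symmetric difference: edges to remove + edges to add

state_a = [("x", "y"), ("y", "z")]
state_b = [("x", "y"), ("x", "z")]
print(reconfiguration_steps(state_a, state_b))  # 2 (drop y-z, add x-z)
```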

(Most of my initial thinking comes from Wolfram and others who worked on this long before I thought about it: http://blog.stephenwolfram.com/2015/12/what-is-spacetime-really/ – Wolfram and others have done a ton of heavy lifting to translate the accepted theories and math into network terms.)

This re-framing of everything into network thinking requires a huge amount of translation – of notions of waves, light, gravity, mass, fields, etc. – into network conventions.  While attempting that in blog form is fun, and I intend to keep doing it, the reality is that no amount of writing about this stuff will make a sufficient proof, or even a useful explanation of the idea, for most people.

Luckily, it occurred to me (a contingent network myself!) that everyone is already doing this translation – and, even more startling, that it couldn’t go any other way.  Our values and traditions started to be codified into explicit networks with the advent of written law and various cultural institutions like religion and formal education.  Our communities have now been codified into networks by online social networks.  Our locations and travels have been codified by GPS satellites and online mapping services.  Our theories and knowledge are being codified into wikis and programs (Wolfram Alpha, Google Knowledge Graph, deep learning networks, etc.).  Our physical interpretations of the world have been codified into fine arts, pop arts, movies, and now virtual and augmented realities.  Our inner events and context are being codified by wearable technologies.  And now the cosmos has unlocked gravitational waves for us, so even the mysteries of black holes and dark matter will start being codified into knowledge systems.

It’s worth a few thoughts about Light, Gravity, Forces, Fields, Behavior, Computation.

  • Light (electromagnetic wave-particles) is the subnetwork encoding the total configurations of the entire universe and every subnetwork.
  • Gravity (and gravitational wave-particles) is the subnetwork of how all the subnetworks over a certain contingency level (mass) are connected.
  • The other three fundamental forces (electromagnetism, weak nuclear, strong nuclear) are also just subnetworks encoding how all subatomic particles are connected.
  • Field is just another term for network, hardly worth a mention.
  • Behavior observations are partially encoded subnetworks of the connections between subnetworks.  They do not encode the entirety of a connection except for the smallest, simplest networks.
  • Computation is time is the instruction set is a network encoding how to transform one subnetwork to another subnetwork.

These re-framed concepts allow us to move across phenomenal categories and up and down levels of scale and measurement fidelity.  They open up improved ways of connecting the dots between cross-category experiments and theories.  Consider radical behaviorism and its schedules of reinforcement, combined with Probably Approximately Correct learning theory from computer science, against the notions of light, gravity and contingency defined above.

What we find is that learning and behavior based on schedules of reinforcement is actually the only way a subnetwork (say, a person) or a network of subnetworks (a community) could encode the vast contingent network around it (internal and external environments, etc.).  Some schedules of reinforcement maintain responses better than others, and here again we find the explanation.  Consider a variable-ratio schedule reinforcing a network (see here for more details: https://en.wikipedia.org/wiki/Reinforcement#Intermittent_reinforcement.3B_schedules).  A variable-ratio schedule (and variations/compositions on it) is a richer contingent network than, say, a fixed-ratio schedule.  That is, as a network encoding information between networks (essentially a computer program and data), the variable ratio has more algorithmic content to keep associations linked across many related network configurations.
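
A hedged sketch of that claim: under a fixed-ratio (FR) schedule every nth response pays off; under a variable-ratio (VR) schedule reinforcement arrives after a random number of responses with the same mean.  The VR sequence is the less predictable one – more “algorithmic content” to encode – which is the classic account of why VR responding resists extinction.

```python
import random
import statistics

def inter_reinforcement_counts(schedule, n=10, responses=10_000):
    """Responses between reinforcements under an FR-n or VR-n schedule."""
    counts, since_last = [], 0
    for _ in range(responses):
        since_last += 1
        if schedule == "FR":              # every nth response pays off
            reinforced = since_last == n
        else:                             # "VR": pays off with probability 1/n
            reinforced = random.random() < 1 / n
        if reinforced:
            counts.append(since_last)
            since_last = 0
    return counts

random.seed(1)
print(statistics.pstdev(inter_reinforcement_counts("FR")))  # 0.0: predictable
print(statistics.pstdev(inter_reinforcement_counts("VR")))  # ~9-10: richer
```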

Not surprisingly, this is exactly the notion of gravity explained above.  Richer, more complex networks, with richer connections to other subnetworks, have much more gravity – that is, they attract more subnetworks to connect.  They literally curve spacetime.

To add another wrinkle: it has been observed across categories that the universe seems to prefer computational efficiency.  Nearly all scientific disciplines, from linguistics to evolutionary biology to physics to chemistry to logic, end up with some basic notion of a “path of least effort” (https://en.wikipedia.org/wiki/Principle_of_least_effort).  In the space of all possible contingent situations, networks tend to connect in the computationally most efficient way – they encode each other efficiently.  That is not to say it happens that way every time.  In fact, this idea led me to thinking that while all configurations of subnetworks exist, the most commonly observed ones (I use the term: robust) are the efficient configurations.  I postulate this explains mathematical constructs such as the Platonic solids and the transcendental numbers, and likely the physical constants.  That is, in the space of all possible things, the mean of the distribution of robust things is the mathematical abstraction.  While we rarely experience a perfect circle, we experience many variations on robust circular things… and the middle of them is the perfect circle.
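
The circle claim is easy to poke at numerically.  The sketch below is just the law of large numbers dressed in the essay’s terms – many distorted “robust” circles whose mean radius settles on the ideal:

```python
import random
import statistics

random.seed(0)
# every "circular thing" is the ideal radius 1.0 plus some random distortion
observed_radii = [1.0 + random.gauss(0, 0.2) for _ in range(100_000)]
print(statistics.mean(observed_radii))  # ~1.000: the middle of the robust
                                        # circles is the "perfect" circle
```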


Now, what is probably the most bizarre idea of all: nothing is actually happening, either at the level of the universe or at the level of a photon.  The universe just is.  For a photon – a single massless node – everything happens all at once, so nothing happens.

That’s right: despite all the words and definitions above, with all their connotations of behavior and movement and spacetime… experience and happening and events and steps and reconfigurations are actually just illusions, in a sense – subnetworks describing other subnetworks.  The totality of the universe includes every possible reconfiguration of the universe – which obviously includes all theories, all explanations, all logics, all computations, all behaviors, all schedules, in a cross product of each other.  No subnetwork is doing anything at all; it simply IS, and is that subnetwork within the specific configuration of the universe as part of the wider set of the whole.

This sounds CRAZY – until you look back on the history of ideas and see that this notion has come up over and over, regardless of the starting point, the condition of the observational tools, or the fads of language and business of the day.  It is even observable in how so many systems “develop” first as “concrete” physical, sensory things… and end up yielding, time and time again, to what we call the virtual – strangely looping recursive networks.  I am not contradicting myself here; rather, this is what exists within the fractal nature of the universe (multiverse!).  It is self-similar all the way up and down scales and across all configurations (histories).

Theories tend to be ignored unless they are useful.  I cannot claim utility for everyone in this theory.  I do find it helpful for myself in moving between disciplines and not getting trapped in syntactical problems.  And I find confirmation of my own cognitive bias in the fact that the technologies of loosely connecting the dots – GPS, hyperlinks, search engines, social media, citation analysis, Bayes, and now deep learning/PAC – have yielded a tremendous expansion of information and re-imagining of the world.


Currency, writing, art and music are not concrete physical needs, and yet they mediate our labor, property, government and nation states.  Even things we consider “concrete,” like food and water, are just encodings of various configurations.  Food can be redefined in many ways, and has been over the eons as our abstracted associations drift.  Water seems like a concrete requirement for us, but “us” is under constant redefinition.  Should people succeed in creating human-like intelligence (however you define it) in computers or the Internet, it’s not clear water would be any more concrete than solar power, etc.

Then again, if I believe anything I’ve said above, it all already exists and always has.

 

———————————–

 

Chaitin on Algorithmic Information, just a math of networks.
https://www.cs.auckland.ac.nz/~chaitin/sciamer3.html

Platonic solids are just networks
https://en.m.wikipedia.org/wiki/Platonic_solid#Liquid_crystals_with_symmetries_of_Platonic_solids

Real World Fractal Networks
https://en.m.wikipedia.org/wiki/Fractal_dimension_on_networks#Real-world_fractal_networks

Correlation for Network Connectivity Measures
http://www.ncbi.nlm.nih.gov/pubmed/22343126

Various Measurements in Transport Networks (Networks in general)
https://people.hofstra.edu/geotrans/eng/methods/ch1m3en.html

Brownian Motion, the network of particles
https://en.m.wikipedia.org/wiki/Brownian_motion

Semantic Networks
https://en.wikipedia.org/wiki/Semantic_network

Mathematical Principles of Reinforcement (MPR)
https://en.m.wikipedia.org/wiki/Mathematical_principles_of_reinforcement

Probably Approximately Correct
https://en.m.wikipedia.org/wiki/Probably_approximately_correct_learning

Probability Waves
http://www.physicsoftheuniverse.com/topics_quantum_probability.html

Bayes Theorem
https://en.m.wikipedia.org/wiki/Bayes%27_theorem

Wave
https://en.m.wikipedia.org/wiki/Wave

Locality of physics
http://www.theatlantic.com/science/archive/2016/02/all-physics-is-local/462480/

Complexity in economics
http://www.abigaildevereaux.com/?p=9%3Futm_source%3Dshare_buttons&utm_medium=social_media&utm_campaign=social_share

Particles
https://en.m.wikipedia.org/wiki/Graviton

Gravity is not a network phenomenon?
https://www.technologyreview.com/s/425220/experiments-show-gravity-is-not-an-emergent-phenomenon/

Gravity is a network phenomenon?
https://www.wolframscience.com/nksonline/section-9.15

Useful reframing/rethinking Gravity
http://www2.lbl.gov/Science-Articles/Archive/multi-d-universe.html

Social networks and fields
https://www.researchgate.net/profile/Wendy_Bottero/publication/239520882_Bottero_W._and_Crossley_N._(2011)_Worlds_fields_and_networks_Becker_Bourdieu_and_the_structures_of_social_relations_Cultural_Sociology_5(1)_99-119._DOI_10.11771749975510389726/links/0c96051c07d82ca740000000.pdf

Cause and effect
https://aeon.co/essays/could-we-explain-the-world-without-cause-and-effect

Human Decision Making with Concrete and Abstract Rewards
http://www.sciencedirect.com/science/article/pii/S1090513815001063

The Internet
http://motherboard.vice.com/blog/this-is-most-detailed-picture-internet-ever

———————————–

There’s a remarkable feature on Edge.org.  I point it out because it’s a robust dialogue about collective behavior.  In particular, the discussion is decidedly not causal-agentish, mind-body dualist, nor monocausal.  Dr. Couzin is refreshing!  His approach ties very well to the analysis of media (collective behavior!).  (Read his other stuff, like this essay, too!)

how can a colony decide between two food sources, one of which is slightly closer than the other? Do they have to measure this? Do they have to perform these computations?

We now know that this is not the case. Chris Langton and other researchers have also investigated these properties, whereby individuals just by virtue of the fact that one food source is closer, even if they are searching more or less at random, have a higher probability of returning to the nest more quickly. Which means they lay more chemical trail, which the other ants tend to follow. You have this competition between these sources. You have an interaction between positive feedback, which is the amplification of information—that’s the trail-laying behavior—and then you have negative feedback because of course if you just have positive feedback, there is no regulation, there is no homeostasis, you can’t create these accurate decisions.

There’s a negative feedback, which in this case is the decay of the pheromone, or the limited number of ants within the colony that you can recruit, and this delicate balance of positive and negative feedback allows the colony to collectively decide which source is closest and exploit that source, even though none of these individuals themselves have that knowledge.

Great exposition on the web of contingencies and the feedback loops capable of reinforcing complex behavior that we typically chalk up to “free choice” or conscious decision making.
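
That double feedback loop is simple enough to simulate.  Below is a toy model of the quoted mechanism – every parameter is invented for illustration: ants weight trails by pheromone, ants on the nearer trail complete round trips sooner and so lay trail faster, and evaporation supplies the negative feedback.

```python
import random

random.seed(2)
TRIP = {"near": 2, "far": 3}        # round-trip ticks; one source is closer
pher = {"near": 1.0, "far": 1.0}    # trail strengths start out equal

def choose():
    # ants weight trails by pheromone squared (a standard modeling choice)
    w_near, w_far = pher["near"] ** 2, pher["far"] ** 2
    return "near" if random.random() < w_near / (w_near + w_far) else "far"

ants = [[choose(), 0] for _ in range(100)]   # [current trail, ticks en route]
for tick in range(500):
    for ant in ants:
        ant[1] += 1
        if ant[1] >= TRIP[ant[0]]:   # back at the nest:
            pher[ant[0]] += 1.0      #   lay trail (positive feedback)
            ant[0], ant[1] = choose(), 0
    for trail in pher:
        pher[trail] *= 0.9           # evaporation (negative feedback)

print(pher)  # the near trail almost always ends up dominating
```

No ant measures anything; the “decision” lives entirely in the interaction of the two feedbacks.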

One of the big challenges that still remains, and one that we’re beginning to address—I’m not saying it’s a new question; I’m not saying people haven’t addressed it—is the level at which selection is acting within populations. The view of individual level selection, and selection at the level of genes, of course, holds. But if you consider, say, a school of pelagic fish—these large schools that make these dramatic maneuvers—the individuals are unrelated to each other. They drift around as pelagic larvae, so when their schools are comprised as adults, they’re completely unrelated. And yet the individuals’ functioning is entirely within the context of these schools; you can see the integration of the behavior when they are attacked by predators, you can see why in the ’40s people thought there must be thought transference, must be telekinesis, because of these remarkable maneuvers. We now know that these maneuvers are created by the relatively local interactions among the individuals. But if you take an individual, say, a herring, from the school and isolate it, it will die of stress.

It is a bit like taking cells from your body—when you take them outside the body, they are unable to function. Of course it is not as closely integrated as a body, it is not as closely integrated as an ant colony. But there is this high level of integration among unrelated individuals. And in terms of how the genes are going to propagate, genes that allow individuals to function collectively as a group are going to be extremely important. So one has to then begin to think about the level at which selection is actually functionally acting. There is nothing new in terms of the genetics here; it is just in terms of how you begin to understand how the collective behaviors emerge and evolve within these types of systems.

Yes! Selection by consequences. Selection happens at many levels – genetically, epigenetically, and behaviorally.  This is a clearer and more accurate description of collective behavior than some previous discussion on edge.org.

It doesn’t stop there, though.  Dr. Couzin reminds us yet again that observed behavior emerges from simple, often unexpected, contingencies.

Another example that we’ve been investigating is huge swarms of Mormon crickets. If you look at these swarms, all of the individuals are marching in the same direction, and it looks like cooperative behavior. Perhaps they have come to a collective decision to move from one place to another. We investigated this collective decision, and what really makes this system work in the case of the Mormon cricket is cannibalism.

You think of these as vegetarian insects—they’re crop pests—but each individual tries to eat the other individuals when they run short of protein or salt, and they’re very deprived of these in the natural environment. As soon as they become short of these essential nutrients, they start trying to bite the other individuals, and they have evolved to have really big aggressive jaws and armor plating over themselves, but the one area you can’t defend is the rear end of the individual—it has to defecate, there has to be a hole there—and so they tend to specifically bite the rear end of individuals. It is the sight of others approaching and this biting behavior that causes individuals to move away from those coming towards them. This need to eat other individuals means you are attracted to individuals moving away from you, and so this simple algorithm essentially means the whole swarm starts moving as a collective.

I mean, really – think about this: how different is human behavior?  (I know, I know, it’s more complex, but…)  Consider elections, consider online behavior, consider office politics.  We move toward what moves away from us, and then you get this behavior that appears collective.  Perhaps in our rush to be anti-brand, unique, a cut above, a stand out, we all come together?
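
The cricket rule reduces to something almost embarrassingly small.  In the one-dimensional toy below (all parameters invented), each cricket is pushed in the direction its close neighbors are heading – away from approachers, after escapees – which amounts to adopting the local majority heading.  Fear plus appetite aligns the whole swarm with no leader and no decision anywhere:

```python
import random

random.seed(3)
N, RANGE, STEPS = 200, 2.0, 300
pos = [random.uniform(0, 100) for _ in range(N)]
heading = [random.choice((-1, 1)) for _ in range(N)]

for _ in range(STEPS):
    new_heading = heading[:]
    for i in range(N):
        # net push from close neighbors: flee the ones bearing down on you,
        # chase the rear ends of the ones pulling away -- either way you
        # end up adopting the heading of the crowd around you
        drive = sum(heading[j] for j in range(N)
                    if j != i and abs(pos[i] - pos[j]) < RANGE)
        if drive:
            new_heading[i] = 1 if drive > 0 else -1
    heading = new_heading
    pos = [p + 0.5 * h for p, h in zip(pos, heading)]

# 1.0 would mean every cricket marching the same way; runs typically
# land close to it, with no global "decision" made by anyone
print(abs(sum(heading)) / N)
```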

In the human sphere it takes us a great deal of effort and complex cognitive abilities to decide what to do if we are going to decide in a collective. What we can now show is that animal groups, with really simple cognitive powers, can actually perform these types of computations. What we now want to understand is under what conditions these types of cognitive capacities work. How can these animal groups take information from multiple sources; how do they filter out noise and yet amplify weak signals?

Right, we should get some data to back that up!  We are on to it…

A second interesting idea – and one reason I’m interested in the NKS Summer School – is that simple rules, applied over and over and sometimes compounded, can generate very complicated behavior.  Sometimes the behavior is so complicated we’re inclined to model it with complex rules.  I want to explore this in depth and showcase it visually.
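
The canonical demonstration is an elementary cellular automaton: an eight-entry lookup table that, iterated on a row of cells, produces famously complicated output.  A minimal sketch (Rule 30, the NKS poster child – try 110 too):

```python
RULE = 30               # the rule number *is* the 8-entry lookup table
WIDTH, STEPS = 79, 40

def step(cells):
    # each cell's next state is the bit of RULE selected by its 3-cell
    # neighborhood (left*4 + center*2 + right), wrapping at the edges
    n = len(cells)
    return [(RULE >> (cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * WIDTH
cells[WIDTH // 2] = 1   # start from a single black cell
for _ in range(STEPS):
    print("".join("#" if c else " " for c in cells))
    cells = step(cells)
```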

Dr. Couzin makes a great case for why the study of behavior is key to understanding.

The locust is one of the best-studied organisms for physiology and neurobiology. It is really an amazing model system for looking at these principles. Yet the last time people looked at the swarming behavior of locusts in the lab was in 1954. So there is this dearth of information, and no matter how much you look at a locust, and how much you know about the biology of a locust, you cannot predict what will happen when you start to put these organisms in swarms. …. Issues like the dimensionality of the problem are important; issues such as certain details of the interactions are not important if you want to understand the general principles of how it changes—the phase transition is very important for us in the case of locusts because, as everyone knows, locusts are always around. But then suddenly there is this transition from one state to another almost liquid state—so its driven liquid-type state—where the swarms can become enormous.

He, too, arrives at a similar curiosity about applying these abstractions to the study of media.

Individuals can have a certain opinion on certain topics, and we can allow individuals to interact across a social network, and of course the social network’s topology is partly defined by what opinions you have. You tend to interact more often with people who have similar opinions to yourself because you are more likely to meet them in your sphere of life. But interacting with people can change your opinions, which can then change your social network, which can change your opinions, so again we have this recursive feedback. And so we are using it to explore these types of properties. I am sure that these types of principles also would apply to understanding dynamics on the Web.

I know there’s some excellent work by Duncan Watts on how individuals buy on-line, or how they judge information that they have on-line, how your judgment of something is dependent on what previous people have said about it. What they used was an on-line music store, where you either have information about what previous people have thought about a song, or you have no information and you just have to rank the song without that previous buyer. This strongly changes people’s behavior because of course when you see what other people have been doing, you can have this autocatalysis, this positive feedback. You can tend to buy into that because you have seen other people do it.
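
What Couzin is describing is a rich-get-richer process, and its skeleton fits in a few lines.  This is my simplification, not the actual Watts experiment: ten identical songs, and each new listener picks one in proportion to its existing download count.

```python
import random

random.seed(4)
songs = [1] * 10             # ten identical songs, one seed download each
for _ in range(10_000):      # each listener copies the crowd, proportionally
    i = random.choices(range(10), weights=songs)[0]
    songs[i] += 1

print(sorted(songs, reverse=True))  # heavy skew: hits made by feedback alone,
                                    # since every song started out identical
```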

Yes, like so many of my posts, we’ll leave with a simple take away:

We are constantly looking for areas where we can create a more data-driven science behind the spread of these normative behaviors.

And maybe this constant looking is also a spread of a normative behavior?

~R

p.s.
For info on Duncan Watts, weave your way to his Kevin Bacon paper and his take on online music/social trends.  You may want to hit up his Wikipedia page for more links.

And Yahoo! Research, where he works, has some kickass concepts, publications and apps.


———————————–

How convenient!  Slashdot had a lead post about Google ad patents, and these patents are all about behavioral targeting.

19. The computer-implemented method of claim 18 wherein actions of the user monitored are selected from a group of user actions consisting of (a) cursor positioning, (b) cursor dwell time, (c) document item selection, (d) user eye direction, (e) user facial expressions, (f) user expressions, and (g) express user topic interest input.

A few other posts around the web lead to other coolish behavior-based ad information.

One key aspect of behavior-based anything: you need computing power.  Lots of it.  This is yet another reason most vertical ad networks, targeted publishers and standard marketers can’t really run behavior-based, uber-effective advertising campaigns.  They simply lack the raw horsepower.

———————————–

I’m a bit behind some of the other early movers…

3tera is taking grid and virtualization in a different direction.  They provide services for entire virtual clusters, virtual data centers, and more.

If implementing massive supercomputers and data centers becomes little more than filling in a sales web form, watch out hardware, hosting and desktop sellers.

Perhaps Google will get some competition now that massive CPU resources are being made available to anyone with an idea.

~Russ

———————————–

Grid is here, and it’s a game changer.  Not today, maybe not totally in ’08, but certainly in the nearish future.

What is grid computing (cloud computing), you ask?  Well, it’s lots of things.  Generally it refers to the idea that you can rent N CPU cycles to compute whatever you need.  Run websites, crunch datasets, run simulations, parse logs – whatever you need to do, just rent the cycles to do it from grid computing providers, from companies with excess CPU time, or from your friendly neighborhood tech.

Grid is useful now because the tools to benefit from it are finally easy enough to drive adoption.  Amazon’s EC2 cloud computing is amazing.  Really, it is.  A web-service approach to setting up custom “nodes,” billed simply to accounts you’ve probably had for years.  Tons of documentation, samples, support developers… all there for you.
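
For flavor, here is what “renting nodes by web service call” looks like as code today – a hedged sketch using boto3, Amazon’s current Python SDK (which postdates this post); the AMI ID and instance type are placeholders, not recommendations:

```python
import boto3  # AWS's current Python SDK; image/instance values are placeholders

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image id
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=4,                        # renting four nodes as easily as one
)
print([i["InstanceId"] for i in response["Instances"]])
```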

Yahoo just invested in Hadoop, which is a form of grid computing.  Google is a gigantic grid computing system (use GWT to take some advantage of it!).  All available to the Fortune 500, governments, and YOU!

Technically, this matters because you can do a lot more when you don’t have to sweat the cycles.  Really.  If there’s no computational limit to what you are doing (other than what you can afford), all sorts of new services can be created.  New games, new investor tools, new education software, new advertising, new communications, new social networks.  Bandwidth was the first big dam to break.  With giant pipes readily available, we got to move away from text-only experiences.  Look what’s resulted!  Computational power is another dam we’re breaking.  Retargeting of content, behavior analysis on the fly, improved AI… all available to the common dev.  That’s huge.

At first I thought it would hurt hosting providers, hardware makers and so forth.  Actually, though, I think it’s additive.  It’s yet another tool we can all use.  It doesn’t replace always-on dedicated servers, nor fast locked-down storage.  It simply gives us lots of cycles, as we need them, to do interesting things.  And because I can’t see the future in any detail, I can’t make any claims about what it might do to existing industry and technology.

If you haven’t played with this stuff, or even read about it, you need to.  It will likely be embedded in most online services (and what isn’t online anymore?) within a decade.  Web services and Ajax were just the tip of this type of thinking.

Here’s what I want to do with cloud computing:

  • Find the largest known Mersenne prime (see the Lucas–Lehmer sketch after this list)
  • Power my Decision Engine product (an evolution of search engines that actually guides decisions)
  • Hook into ad servers to reforecast in real time and retarget media based on behavior
  • Hook into a swarm of networked NXT bots to create social behavior across geography
  • Fingerprint all YouTube videos and categorize them based on transcripts and similarity scores (good for targeting ads or finding related media)
  • Create the first homegrown weather forecasting simulation, from global models to weather on the ground; make it freely available to all
  • Analyze social networks in real time
  • Create a bot to play Halo 3 for me all the time, but actually using the controller and the data on SCREEN!
  • more more more
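
For the first item on that list, here’s a sketch: the Lucas–Lehmer test really is how Mersenne candidates are checked, and the process pool below is just a local stand-in for the rented grid nodes that would each take a batch of exponents.

```python
from multiprocessing import Pool

def is_mersenne_prime(p):
    """Lucas-Lehmer: 2**p - 1 is prime iff s lands on 0 after p - 2 squarings."""
    if p == 2:
        return True
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

if __name__ == "__main__":
    # the exponent must itself be prime for 2**p - 1 to have a chance
    primes = [p for p in range(3, 200, 2) if all(p % d for d in range(3, p, 2))]
    with Pool() as pool:  # each worker stands in for one rented grid node
        found = [p for p, ok in zip(primes, pool.map(is_mersenne_prime, primes)) if ok]
    print(found)  # [3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127]
```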

~Russ

———————————–

There’s just more and more analysis and speculation about how critical it is to be fast.

Slashdot linked to this fairly decent NYTimes article about Google and Microsoft.  One of the key points, which I actually agree with, is that GRID COMPUTING IS AT THE CENTER OF ALL FAST INNOVATION IN THE VERY NEAR FUTURE.

I have tons more to talk about with grid computing and will save that for a later entry.

I also wanted to point out the recent Yahoo! announcement of their investment in the Apache Foundation and the hiring of a key person there.  They don’t hide from the fact that this will help them be FASTER, thanks to the bigger, quicker open source community.  It’s clearly yet another move to compete with Google and Microsoft.  It’s particularly strategic considering Apache is a foundational component of so many web things, and Lucene and Hadoop (more grid computing!) are directly competitive with Google, Microsoft and Amazon.

Recently Google announced they are working on these web things called “knols,” which are basically wikipedia/about.com-style landing pages for search results.  (Gee, all those landing pages on the web are worth something? Go figure.)  This is a “get it faster” strategy too… as in, it gets them more ad dollars faster than waiting for Wikipedia and others to put Google ads all over the place.

I could go on and on about moves big companies are making simply to KEEP UP.  It’s damn near impossible to keep up the pace at any company.  Why?

  1. The talent is fluid.  They can do it themselves, or they can go to the competition.
  2. Big companies almost always get slow, and start-ups don’t have the cash flow to invest heavily in things like supercomputing, lots of bandwidth, etc.  (One way or another, something is subverting speed.)
  3. The foundations of the technology are changing quicker than we can agree on standards.  (Over 10 wireless standards; no two browsers work the same; .NET is on version 3.5; Vista isn’t taking over; Intel Macs make them a force, but who knows how to code for that; Flex, Silverlight, Ruby on Rails, Python, Ajax…)  Without standards it’s hard to educate buckets of programmers.  Without lots of programmers, it’s hard to transfer knowledge quickly.
  4. If the programmers aren’t getting it fast enough, who in the organization is?
  5. Transparency to users – they get their say, and they say it hard and fast.  Course correct quickly or it’s over.
  6. China.
  7. India.
  8. Tampa – cheaper US labor markets, accessible for high-tech/remote projects.
  9. And tons more reasons.

In fact, I have so much to say on SPEED that I’m going to launch into a series of posts on who is fast, what it takes to be fast, what undercuts speed, and how you can’t fail fast enough…

fast to bed now.

~Russ
