My Contribution to the “Tech Bubble 2.0” Literature

I’m linking this piece here a bit late, but when I wrote it there wasn’t a consensus that any sort of bubble had formed. Literally a week later there was, and then the next week it started cooling off as start-up valuations began to descend from the stratosphere.

WhatsApp’s unseen hordes of engineers, and the massive, global compute infrastructure that those engineers are constantly growing and maintaining, are all rented in very small time slices by the full-time members of the WhatsApp engineering team. It’s the combination of on-demand engineers, the on-demand compute resources that those engineers are packaged with, and the proliferation of cheap, capable client devices at the network’s edge (i.e. smartphones and tablets) that makes up the real force multiplier—the lever, if you will—for WhatsApp’s core team, and not any particular technology.

When you think about the fact that this very same force multiplier can be rented by anyone with a credit card, a laptop, and an internet connection, then you realize that the breathtaking scale and speed of WhatsApp’s success isn’t any more of a “tech” story than, say, the similar success of BuzzFeed. Rather, it’s a story about the sudden, outsized rewards that can accrue to a relatively small amount of properly timed effort applied from just the right spot to a globe-spanning networked computing platform.

Because WhatsApp is renting a really long, world-moving lever that anyone can rent to move anywhere from a few dozen users to a few billion, the difference between the 50 engineers at WhatsApp and any other group of talented engineers with access to the same lever of networked compute resources is that someone at WhatsApp has pointed those engineers to a fulcrum that’s sitting in exactly the right place — in this case at the nexus of free SMS and no ads, no games, no gimmicks — so that when they grab that lever the world moves.

Thus the secret to success in the present moment — if we define “success” as “I have a billion users” — seems to lie in grabbing the big public lever and being the first to iterate into one of the primo fulcrum spots that’s characterized by minimal in-house engineering hours and rapid user growth.

Lots more at the link.

I, Mr. Robot

While I’m on a kick of updating my site, I should include a link to a Medium essay that I wrote a while back on software, the past, the future, and a particular TV show.

When I watch Mr. Robot, it’s like I’m watching dystopian sci-fi, except holy crap I actually live in that world on the screen. As I sit there on the couch I think back to 2005, before the iPhone and Facebook and Twitter and YouTube and Tinder and Instagram, before we all jumped feet first into “the stream”, and I watch the show through my pre-stream eyes. This practice triggers for me the novel sensation of having suddenly woken up in the future, and that sensation is part of the reason I enjoy the show. It’s like the inverse of virtual reality’s “presence” effect — I know that the world on the screen is real and that I’m actually in it, but I somehow don’t quite believe it.

This is what makes Mr. Robot so utterly compelling: it’s dystopian sci-fi about the right now, and it works because so much of “right now” is so new that it still has the potential to blow our minds when we grind it up and snort it in little 43-minute lines.

For those of you who watch the show or plan on watching it, I want you to borrow my 2005 glasses for at least one episode. To help orient you and jog your memory, here are some things that were true about the world in 2005…

My 3,000-word Smart Gun Smackdown

I know TechCrunch Editor Jonathan Shieber from when we both went to the same summer nerd camp in middle school. We reconnected on Facebook a while back, and we recently got into a long conversation about smart guns, which Jonathan encouraged me to turn into a TC post. So I turned it into 3K words worth of “if you ask me about this again I’ll just send you this link”. I’ll give away the ending right here, but please do read the whole thing:

To sum up, smart guns aren’t gonna happen because electronic locks will never be reliable enough that the shooting public will embrace them. It’s possible that cops might eventually warm up to smart guns, because cops open carry and are at constant risk of having their own guns used against them. But for every law-abiding citizen who’s not carrying openly and/or wearing a uniform that screams “guy with a gun right here!”, smart guns are just not going to be attractive for the reasons outlined above.

Even if the public warms up to smart guns, this won’t stop criminals from firing stolen weapons, because there’s no way to lock down a firearm or any other gadget in such a way that it can’t be “jailbroken”. Criminals will just remove the locks, or, even worse, they’ll learn to remotely disable the guns of victims and police. And for law-abiding concealed carriers who leave the electronic locks in place, any scheme that relies on wireless technology will effectively make such people open carriers for anyone who can eavesdrop on the signal.

If smart gun proponents are really serious about saving lives, they’ll quit wasting time on a doomed quest for a quick technological fix to a nasty set of social problems, and instead focus their efforts on changes that could actually save lives. How about ending the war on drugs, demilitarizing the police, advocating for prison reform, investing in street-level intervention programs, or stopping the drone strikes and the endless military interventions that kill countless civilians and radicalize the survivors? Even a small victory in any one of those areas would save more lives than the most advanced smart gun imaginable.

But all of that stuff that I just suggested is hard, and it involves politics, and our politics seem more hopelessly broken with every day that passes. So I certainly get the appeal of going around the system and throwing some Silicon Valley “disruption” at the problem of gun violence. And as the father of three beautiful little girls I wish to God that there were a killer app that could stop or even measurably reduce the killing. But there isn’t, and until someone invents a technology that can address the deeper, systemic problems that drive Americans from all walks of life to arm themselves, there never will be.

I’d have never agreed to have a TC byline back when Arrington was the EiC because… well, because Arrington. But now that he’s a distant memory, and I have friends there, I’m over my long-standing issues with it.

Intel Responds to Calxeda/HP ARM Server News: Xeon Still Wins for Big Data


Intel’s Radek Walcyzk, head of PR for the chipmaker’s server division, called Wired today with Intel’s official response to the ARM-based microserver news from Tuesday. In a nutshell, Intel would like the public to know that the microserver phenomenon is indeed real, and that Intel will own it with Xeon, and to a lesser extent with Atom.

Now, you’re probably thinking, isn’t Xeon the exact opposite of the kind of extreme low-power computing envisioned by HP with Project Moonshot? Surely this is just crazy talk from Intel? Maybe, but Walcyzk raised some valid points that are worth airing.

More at Cloudline.

Big Data, Fast & Slow: Why HP’s Project Moonshot Matters


In Marz’s presentation, which describes how Twitter’s Storm project complements Hadoop in the company’s analytics efforts, he says in essence (and here I’m heavily paraphrasing and expanding) that there are really two types of “Big Data”: fast and slow.

Fast “Big Data” is real-time analytics, where messages are parsed and scanned for some kind of significance as they come in at wire speed. In this type of analytics, you apply a set of pre-developed algorithms and tools to the incoming datastream, looking for events that match certain patterns so that your platform can react in real time. A few examples: Twitter runs real-time analytics on the Twitter firehose in order to identify trending topics; Topsy runs real-time analytics on the same Twitter firehose in order to identify new topics and links that people are discussing, so that it can populate its search index; a high-frequency trader runs real-time analytics on market data in order to identify short-term (often in the millisecond range) market trends so that it can turn a tiny, quick profit.

Real-time analytics workloads have a few common characteristics, the most important of which is that they are latency-sensitive and compute-bound. These workloads are also bandwidth-intensive, in that the compute part of the platform can process more data than storage and I/O can feed it, so keeping the processors fed is a constant challenge. People doing real-time analytics need lots and lots of CPU horsepower (and even GPU horsepower in the case of HFT), and they keep as much data as they can in RAM so that they’re not bottlenecked by disk I/O.
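The wire-speed pattern matching described above is easy to caricature. Here’s a toy sketch of my own devising (nothing to do with Storm’s actual API): a bounded in-memory window over an incoming message stream, flagging hashtags that cross a count threshold.

```python
from collections import Counter, deque

def trending(messages, window=5, threshold=2):
    """Toy real-time analytics: flag hashtags whose count crosses a
    threshold within a sliding window of recent messages. All state is
    a small in-RAM structure, mirroring the keep-it-in-memory approach
    real-time platforms use to avoid being bottlenecked by disk I/O."""
    recent = deque(maxlen=window)          # bounded in-memory window
    for msg in messages:                   # messages arrive at "wire speed"
        recent.append(msg)
        counts = Counter(tok for m in recent
                         for tok in m.split() if tok.startswith("#"))
        for tag, n in counts.items():
            if n >= threshold:
                yield tag                  # the "react in real time" hook

stream = ["#ows is growing", "closing my account #ows", "cat pics"]
hits = list(trending(stream))              # "#ows" fires once it hits 2
```

A real system (Storm, or an HFT engine) replaces the generator with a distributed topology, but the shape of the computation is the same: fixed algorithms applied to data in flight, with the hot state kept in RAM.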

I’ve drawn a quick and dirty diagram of this process, above. As you can see, the bottlenecks for Hadoop are the disk I/O from the data archive and the human brain’s ability to form hypotheses and turn them into queries. The first bottleneck can be addressed with SSD, while fixing the second is the job of the growing stack of more human-friendly tools that now sits atop Hadoop.

More at Cloudline.

The Opposite of Virtualization: Calxeda’s New Quad-Core ARM Part for the Cloud


On Tuesday, Austin-based startup Calxeda launched its EnergyCore ARM system-on-chip (SoC) for cloud servers. At first glance, Calxeda’s chip looks like something you’d find inside a smartphone, but the product is essentially a complete server on a chip, minus the mass storage and memory. The company puts four of these EnergyCore SoCs onto a single daughterboard, called an EnergyCard, a reference design that also hosts four DIMM slots and four SATA ports. A systems integrator would plug multiple daughterboards into a single mainboard to build a rack-mountable unit, and those units can then be linked via Ethernet to scale out into a single system that’s home to some 4096 EnergyCore processors (a little over 1,000 four-processor EnergyCards).
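As a quick sanity check on the scale-out arithmetic (all numbers taken from the paragraph above):

```python
socs_per_card = 4        # EnergyCore SoCs per EnergyCard daughterboard
max_socs = 4096          # stated ceiling for a fully scaled-out system
cards = max_socs // socs_per_card
# 1024 cards: "a little over 1,000 four-processor EnergyCards" checks out
```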

More at Cloudline.

Interview: Topsy Co-Founder on Twitter, Uprisings, Authority, and Journalism


Ghosh: Around 2005, people used to do this thing called “Google bombing,” where they would put links. One of the responses from Google was to require that all websites put a “nofollow” tag on links that are not created by the website itself.

So if you had a link that was posted in the comments, or posted by a user — which includes things like Wikipedia or all social media — which has not been created by the website [then you had to add a “nofollow” tag]. So the authority model — where, when a website links to something else, it gives its authority to that thing — that model breaks down because the website is no longer controlling who puts that link on its pages. So for all links of those types, they were forced to add this nofollow tag so that [the links] could be ignored for the purpose of computing authority. What that means, though, is that, while it was breaking the earlier authority model of Google, [Google] did not change their authority model in response to the way the web was changing.

And the web changed so that the authority model of the new web is that people are the sources of authority. This was always really the authority model, but 10 or 15 years ago, a website and a person were pretty much the same thing.

Wired.com: A website was a useful proxy for a person or a collection of people (an institution, say).

Ghosh: Yes. And that changed when you had different people posting on the same website, or the same people posting on different websites — that proxy didn’t work anymore. But Google didn’t change their authority model.
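For readers who have never looked at the markup, the mechanism Ghosh describes is just a rel="nofollow" attribute on a link; a crawler computing authority sorts a page’s links into counted and ignored buckets. Here’s a minimal sketch using Python’s standard-library parser — this is my own illustration, not Google’s or Topsy’s code.

```python
from html.parser import HTMLParser

class LinkAuthority(HTMLParser):
    """Split a page's outbound links into those an authority computation
    would count and those tagged rel="nofollow" (links the site does not
    vouch for, e.g. user comments or wiki edits)."""
    def __init__(self):
        super().__init__()
        self.counted, self.ignored = [], []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href")
        if href is None:
            return
        rels = (attrs.get("rel") or "").split()
        (self.ignored if "nofollow" in rels else self.counted).append(href)

page = ('<a href="https://example.org/editorial">our pick</a>'
        '<a rel="nofollow" href="https://example.org/comment-link">'
        'a commenter\'s link</a>')
parser = LinkAuthority()
parser.feed(page)
# parser.counted confers authority; parser.ignored does not
```

The site-level model breaks down exactly as Ghosh says: once user-generated links dominate and must all carry nofollow, the crawler is discarding most of the signals that the new, people-centric web actually produces.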

More at Cloudline.

Beyond Google’s Reach: Tracking the Global Uprising in Real Time


On Oct. 15, groups of protesters affiliated with the Occupy Wall Street movement began filing into the branch offices of their banks to close their accounts. Later that day, videos began to show up online of those protesters being arrested. Irate branch managers had called the cops, claiming that these customers were being disruptive, so police began hauling the protesters away for booking.

The spectacle of citizens being arrested for attempting to close out their personal bank accounts made a splash in all of the usual corners of the internet. Except one: Google.

Like the larger Occupy Wall Street movement, which is often referenced online via the Twitter hashtag #OWS, the Oct. 15 protest was organized using #oct15. A search for #oct15 on the day of the protest yielded nothing but garbage results, and my searches as late as a day later yielded similar output. But despite allegations that Google — especially Google News, which still doesn’t have any worthwhile results for #oct15 — is censoring protest-related material, the more straightforward answer to the question of why the world’s largest search engine can’t produce useful results for current events in real time is that it’s simply not designed to.

As I found out on the day of Oct. 15, if you want quality information about events as they unfold in real time, then you can forget about the Google search box. Instead, you have to turn to alternative search engines, and specifically to Topsy, which had links to blog posts, videos, and pictures of the protest on the day of the protest, often mere minutes after the information was posted online. I’ve been a Topsy user for the past six months, and on Oct. 15, when Google searches were turning up garbage, I typed “#oct15” into the Topsy search box and was able to track events as they happened.

More at Cloudline.

Meet ARM’s Cortex A15: The Future of the iPad, and Possibly the MacBook Air


In addition to unveiling its Cortex A7 processor on Wednesday, ARM used the press event as a sort of second debut for the Cortex A15. The A15 will go into ARM tablets and some high-end smartphones during the second half of 2012, and it’s by far the best candidate for an ARM-based MacBook Air should Apple choose to take this route. Just as importantly, the A15 will also go into the coming wave of ARM-based cloud server parts that have yet to be announced.

As part of the press materials for the A7 launch, ARM also released the first detailed block diagram—at least that I’ve been able to find—of the Cortex A15. The company also had the first working silicon of the A15 on display running Android. So let’s take a look at the A15 from top to bottom, because it is the medium-term future not only of the mobile gadgets that we all know and crave, but possibly of some of the servers that those devices will connect to.

More at Cloudline.

ARM’s Cortex A7 Is Tailor-Made for Android Superphones


The A7’s design improvements over the older A8 core are possible because ARM has had the past three years to carefully study how the Android OS uses existing ARM chips in the course of normal usage. Peter Greenhalgh, the chip architect behind the A7’s design, told me that his team did detailed profiling in order to learn exactly how different apps and parts of the Android OS stress the CPU, with the result that the team could design the A7 to fit the needs and characteristics of real-world smartphones. So in a sense, the A7 is the first CPU that’s quite literally tailor-made for Android, although those same microarchitectural optimizations will benefit any other smartphone OS that uses the design.

The high-level block diagram for the A7 released at the event reveals an in-order design with an 8-stage integer pipeline. At the front of the pipeline, ARM has added three predecode stages, so that the instructions in the L1 are appropriately marked up before they go into the decode phase. Greenhalgh told me that A7 has extremely hefty branch prediction resources for a design this lean, so I’m guessing that the predecode phase involves tagging the branches and doing other work to cut down on mispredicts.
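To see why ARM would spend “extremely hefty” branch prediction resources on such a lean core, a back-of-envelope CPI model is enough. Every number below is an illustrative assumption of mine, not an ARM figure: the idea is simply that an in-order core pays roughly its front-end depth in flushed cycles per mispredict.

```python
def effective_cpi(base_cpi, branch_frac, mispredict_rate, flush_penalty):
    """Rough cycles-per-instruction for an in-order pipeline where each
    mispredicted branch flushes the front end and restarts the fetch."""
    return base_cpi + branch_frac * mispredict_rate * flush_penalty

# Assumed numbers for illustration: ~20% of instructions are branches,
# ~8-cycle flush (on the order of the 8-stage pipeline described above).
weak   = effective_cpi(1.0, 0.20, 0.10, 8)   # 10% of branches mispredicted
strong = effective_cpi(1.0, 0.20, 0.05, 8)   # a better predictor halves that
# Halving mispredicts claws back half of the prediction-related CPI loss,
# which is a big win on a core with no out-of-order machinery to hide it.
```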

More at Cloudline.