Authentic8 whitepaper: Why a virtual browser is important for your enterprise

The web browser has become the de facto universal interface for user applications. It is the mechanism of choice for accessing modern software and services. But this very ubiquity puts a burden on browsers to handle security more carefully.

Because more malware enters via the browser than through any other point on the typical network, enterprises are looking for alternatives to standard browsers. In this white paper that I wrote for Authentic8, makers of the Silo browser (their admin console is shown here), I talk about some of the issues involved and the benefits of using virtual browsers. These tools offer a form of sandboxing protection to keep malware and infections from spreading to the endpoint computer. Web content can't easily reach the actual device being used to surf the web, so even if that content is infected, it can be more readily contained.
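To make the isolation idea concrete, here is a minimal sketch of the general pattern (this is not how Silo is implemented): render web content in a disposable container and throw the container away when the session ends. It uses the Docker SDK for Python, and the image name is a placeholder for whatever containerized browser image you have available.

```python
# Minimal sketch of the "disposable browser" idea -- not Authentic8's implementation.
# Web content runs inside an isolated container that is destroyed after the session,
# so anything malicious it drops never lands on the endpoint itself.
# Assumes Docker is running locally; "example/browser-image" is a placeholder image name.
import docker


def start_isolated_browser(image: str = "example/browser-image"):
    client = docker.from_env()
    return client.containers.run(
        image,
        detach=True,           # run in the background
        remove=True,           # auto-delete the container (and any infection) on exit
        network_mode="bridge"  # keep it off the host's network namespace
    )


if __name__ == "__main__":
    session = start_isolated_browser()
    print("Disposable browser session:", session.short_id)
    # ... user browses via the remote session ...
    session.stop()  # tearing it down discards whatever the web content left behind
```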

Why the original Soviet Internet failed

I am reminded of the Cold War today, with next week marking the 30th anniversary of the Chernobyl disaster. In the decades leading up to that event, during the 1950s and 1960s, we were in a race with the Soviets to produce nuclear weapons, launch manned space vehicles, and create other new technologies. We were also competing to develop the beginnings of the underlying technology for what would become today's Internet.

One effort succeeded thanks to well-managed state subsidies and collaborative research that worked closely with a central planning authority. The other failed largely because of unregulated competition that was stymied by a variety of self-interests. Ironically, we acted like the socialists and the Soviets acted like the capitalists.

While the origins of the American Internet are well documented, until now there has been little published research into those early Soviet efforts. A new book from Benjamin Peters, a professor at the University of Tulsa, called How Not to Network a Nation seeks to rectify this. While a fairly dry read, it nevertheless is fascinating to see how the historical context unfolded and how the Soviets missed out on being at the forefront of Internet developments, despite early leads in rocketry and computer science.

It wasn't for lack of effort. From the 1950s onward, a small group of Soviet scientists tried to develop a national computer network. They came close on three separate occasions, but failed each time. Meanwhile, the progenitor of the Internet, ARPANET, was established in late 1969 in the US, and that became the basis for the technology we use every day.

Ultimately the Soviet-style “command economy” proved inflexible and imploded. Instead of delivering a utopian vision for the common man, it gave us quirky cars like the Lada and the Mir space station, which looked like something built out of spare parts.

The Soviets had trouble mainly because of a disconnect between their civilian and military economies. The military didn't understand how to marshal and manage resources for civilian projects. And when its superstar scientists proposed civilian projects, the leadership faltered in deciding on them.

Interestingly, those Soviet efforts at constructing the Internet could have become groundbreaking, had they moved forward. One was the precursor to cloud computing, another was an early digital computer. Both of these efforts were ultimately squashed by their bureaucracies, and you know how the story goes from there. What is more remarkable is that this early computer was Europe’s first, built in an old monastery that didn’t even have indoor plumbing.

Almost a year after ARPANET was created, the Soviets held a meeting to approve their own national computing network. Certainly, having the US ahead of them increased their interest. But they tabled the idea from Viktor Glushkov and it died in committee. It was bad timing: two of the leaders of the Politburo were absent the day the proposal was considered.

Another leading light was Anatoly Kitov, who proposed in 1959 that civilian economists use computers to solve economic problems. For his efforts, he was dismissed from the army and put through a show trial. Yet during the 1950s the Soviet military had long-distance computer networks and in the 1960s they had local area networks. What they didn’t have were standard protocols and interoperable computers. It wasn’t the technology, but the people, that stopped their development of these projects.

What Peters shows is that the lessons from the failed Soviet Internet (he affectionately calls it the ‘InterNyet’) have more to do with the underlying actors and the intended social consequences than with any lack of combined technical skill. Every step along the route he charts in his book shows some failure of one or more organizations holding back the Soviet Internet from flourishing the way the Internet did here in the States. Memos got lost in the mail, decisions were deferred, committees fought over jurisdiction, and so forth. These mundane reasons prevented the Soviet Internet from going anywhere.

You can pre-order the book from Amazon here.

The evolution of today’s enterprise applications

Enterprises are changing the way they deliver their services, build their enterprise IT architectures and select and deploy their computing systems. These changes are needed, not just to stay current with technology, but also to enable businesses to innovate and grow and surpass their competitors.

In the old days, corporate IT departments built networks and data centers that supported computing monocultures of servers, desktops and routers, all of which were owned, specified, and maintained by the company. Those days are over, and how you deploy your technologies is now critical, in what one writer calls “the post-cloud future.” Now we have companies that deliver their IT infrastructure completely from the cloud and don’t own much of anything; IT has become more of a renter than a real estate baron. The raised-floor data center has given way to just a pipe connecting a corporation to the Internet. At the same time, the typical endpoint computing device has gone from a desktop or laptop computer to a tablet or smartphone, often purchased by the end user, who expects his or her IT department to support this choice. The actual device itself has become almost irrelevant, whatever its operating system and form factor.

At the same time, the typical enterprise application has evolved from something that was tested and assembled by an IT department to something that can readily be downloaded and installed at will. This frees IT departments from having to invest time in a “nanny state” approach of tracking which users are running which applications on which endpoints. Instead, those staffers can work on improving the company’s apps and benefiting the business directly. The days when users had to wait on their IT departments to finish a requirements analysis study or go through a lengthy approvals process are firmly in the past. Today, users want their apps here and now. Forget about months: minutes count!

There are big implications for today’s IT departments. To make this new era of on-demand IT work, businesses have to change the way they deliver IT services. They need to make use of some if not all of the following elements:

  • Applications now have Web front ends and can be accessed anywhere with a smartphone and a browser. This also means acknowledging that the workday is now 24×7, and that users will work with whatever device, whenever and wherever they feel most productive.
  • Applications have intuitive interfaces: no manuals or training should be necessary. Users don’t want to wait on their IT department for their apps to be activated, on-boarded, installed, or supported.
  • Network latency matters a lot. Users need the fastest possible response times and are going to be running their apps across the globe. IT has to design their Internet access accordingly.
  • Security is built into each app, rather than by defining and protecting a network perimeter.
  • IT staffs will have to evolve away from installing servers and toward managing integrations, provisioning services and negotiating vendor relationships. They will have to examine business processes through a wider lens and understand how their collection of apps will play in this new arena.


Network World: Five cloud costing tools reviewed

Certainly, using a cloud provider can be cheaper than purchasing your own hardware, or instrumental in moving a capital expense into an operating one. And there are impressive multi-core hyperscale servers that are now available to anyone for a reasonable monthly fee. But while it is great that cloud providers base their fees on what resources you actually consume, the various elements of your bill are daunting and complex, to say the least.
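To see why, consider a back-of-the-envelope sketch of how even a simple bill composes. All of the unit prices below are made-up placeholders rather than any provider's actual rates; real bills add tiered pricing, reserved-instance discounts, per-request fees, support plans and more.

```python
# Back-of-the-envelope monthly cloud bill. Every rate here is a made-up placeholder,
# not any provider's actual pricing -- the point is how quickly line items add up.
HOURS_PER_MONTH = 730

compute = 0.10 * HOURS_PER_MONTH * 4   # 4 virtual servers at $0.10/hour  -> $292.00
storage = 500 * 0.05                   # 500 GB at $0.05/GB-month         -> $25.00
egress  = 2000 * 0.09                  # 2 TB transferred out at $0.09/GB -> $180.00

total = compute + storage + egress
print(f"Compute ${compute:,.2f} + Storage ${storage:,.2f} + "
      f"Egress ${egress:,.2f} = ${total:,.2f}/month")
# Even this toy bill has three line items; a real one can have dozens,
# which is where the comparison services reviewed below come in.
```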

Separating pricing fact from fiction isn’t easy. For this article, we looked at five shopping comparison services: Cloudorado, CloudHarmony’s CloudSquare, CloudSpectator, Datapipe and RightScale’s PlanForCloud.com. Some of them cover a lot of providers; others focus on only a few.

You can read the full review in Network World today here.

Why you need to review your stats regularly

I admit it; I have fallen out of the habit of reviewing my various stats on my websites and other content-oriented places. For many years I dutifully kept track of how my posts were doing, who was commenting, where backlinks were coming from, and so forth.

For some reason, I stopped doing this in the past year. Maybe it was just being lazy, maybe because I had gotten very busy with a lot of very interesting assignments. Maybe it was just old age: I have been writing stuff for more than 25 years, after all.

Well, all of those (and others) aren’t valid excuses. You need to check your stats, and check them regularly. There are lots of interesting things hidden in them that you might not realize, and some of these things can help you deliver better content, target new audiences, or figure out what you are doing right (and do it more often) or wrong (and avoid or improve it).

WordPress’ Jetpack delivers an annual email summary of your blog and its posts: this is a very useful reminder to dive in deeper and see what is going on with your blog (or blogs, in my case). Slideshare.net also has some great analytics, and the service is a wonderful place to post the PowerPoints of my presentations. Had I been reviewing these analytics regularly, I would have found out:

  • Influence can be found in odd places. A post that I wrote for SoftwareAdvice.com about real-time retail store tracking was picked up by a blogger for the point-of-sale system Vend.com, which brought a bunch of visitors to my site back in the spring when I was quoted. It could have been an opportunity to talk more about the subject.
  • Don’t knock the long tail. I am still the leading expert on a very obscure Windows error message: if you Google “Windows Media Player error c00d11b1” you will see my post in the first ten or so results. The post has received more than 380,000 views in the more than eight years since I wrote it, and it is still getting comments on my blog and links in the Microsoft forums too. Why is this important? All this traffic on a very specific subject can help raise your Google ranking, and also provide an entry point into your content ecosystem if you manage it properly.
  • My influence beyond North American borders is somewhat quirky. The second most common country of origin for visitors to my Slideshare.net account is Ukraine, with about half as many views as the US over the past year. Again attesting to the very long tail, a good chunk of these views came from a presentation that I posted five years ago on how to set up your first blog and business email. (That kind of makes sense.) For my blog, other popular countries of origin for my visitors were India and Brazil. Don’t forget the rest of the world when you are posting your content; widen your perspective to engage more of these readers.
  • Twitter and Facebook were both important traffic drivers for my blog over the past year. This emphasizes how critical your own social media accounts are and how you need to cross-link posts among them. Combined, the two were equal to the traffic brought in by Google organic search, which is another important source of referrals. Don’t just post entries on your own blog: I have begun cross-posting my content on LinkedIn Pulse and Medium, and it is getting a fair share of views there too. The analytics for those sites could be better, though: for example, Medium only lets you look at month-long intervals at a time, and Pulse will only send you static results in regular email summaries.

There are lots of Twitter analytics tools, and some are quite pricey. One that I like, and that has a free version, is TwitterCounter, which tracks who followed and unfollowed you over time. For example, I got excited this week to see that the actor Taye Diggs followed me (he has been following my wife’s Tweets for some time), but our local mayor dropped me (oh well). You can see the kind of graph it produces below, which to me indicates a fairly steady stream of new followers replacing the drop-offs, with slow overall growth:

[TwitterCounter graph: follower count over time]

Happy new year and may your stats encourage you to deliver better content in 2015!

Everyone is in the software business

You may not know it, but you are in the software business, no matter what your actual business may appear to be. It doesn’t matter what you produce, whether you are a “bricks and mortar” retailer or a “guys in trucks” distributor: software is where you are going to end up.

Why is everyone in the software business? Simply because software has become the lifeblood of so many decisions about what a business makes, how it is sold, and how customers are kept happy. All the interesting business operations are happening inside your company’s software. Software is where you can find out if your customers are going elsewhere, if your profits are coming from new markets, and if your employees are helping or hurting your overall reputation.

As an example, the airline Virgin America has billions of dollars invested in planes and the people that fly them, but their brand lives and thrives based on their mobile and web experience. That experience is all because of the airline’s booking software, and understanding what is happening with that software will make the difference between success and failure for the airline.

Take as another example a national food service distributor. Their business is getting food into trucks, and then getting those trucks to restaurants and other institutional caterers and retail kitchens. The company has had an ecommerce business for the past decade, and a pretty significant one at that. But lately their customers have been shifting their emphasis, from phoning in weekly orders to their sales representatives to doing more of their ordering online. The distributor needed to scale up their online business, and also become more data-driven. Rather than letting their truck drivers or regional offices make decisions about distribution, they wanted a single view of their business, using the changes in their orders and other data to fine-tune their deliveries. This food distributor is now firmly in the software business.

What is driving everyone to software? Several things.

  • The cloud. The days when you had to build your own servers and data centers are over. Post Holdings, with revenues in the billions, is probably the largest company to run entirely in the cloud.
  • Everyone wants an online storefront. Just like the food distributor, even the most basic industries are finding out there is value in selling their stuff online.
  • Big Data is getting more familiar. You can now find Hadoop clusters in many traditional Fortune 500 companies. The IT staffers at the giant retailer Sears eventually spun off a side business in helping others get started with Big Data.
  • Customer experience is king. One way that businesses can differentiate themselves is by paying attention to their customers. This isn’t anything new: Nordstrom department stores have been doing it for decades. What is new is the range of software tools to help figure this stuff out.

I will have more to say about this topic; right now I am working on a white paper for a client that will dive into it more deeply. Check back here in the fall, when I can post a link to it.

New Relic blog: jQuery Foundation’s Dave Methvin Shares his Rules of the Road for Speeding Up Your Website

It’s practically a universal truth: just about every website is too slow. There are many, many reasons for this unfortunate fact, of course, but Dave Methvin, president of the jQuery Foundation, has some practical advice on how to get the biggest speed improvements with the least amount of effort. The article is based on an in-person meeting as well as a talk he gave at a programming conference earlier this year.

You can find my article on the New Relic blog here.