“Do I really look like a guy with a plan? You know what I am? I’m a dog chasing cars. I wouldn’t know what to do with one if I caught it. You know, I just do things. The mob has plans, the cops have plans, Gordon’s got plans. You know, they’re schemers. Schemers trying to control their little worlds. I’m not a schemer. I try to show the schemers how pathetic their attempts to control things really are.”
—The Joker, The Dark Knight
(By way of explanation/disclaimer/credibility enhancer/armchair-quarterbacking, let me say that I was chairman of my town’s cable TV regulatory commission for 3 or 4 years and had the extreme pleasure of negotiating 2 franchise agreements with Comcast.)
This quick blog note is in response to the news that Comcast is charging L3 (Level 3 Communications) to deliver movies and other content over Comcast’s data network, content that would normally be available to Comcast cable subscribers.
IANAL, but it sure seems like Comcast has jumped the shark by asking that their data service be considered the same as their CATV service as far as fees to deliver movies. If so, I’d say the FCC should declare the two Comcast networks identical and let localities tax and franchise the crap out of Comcast’s data pipes the same way they can regulate and tax the Comcast CATV network. IMO, Comcast just gave up common carrier status for their data network, and if I were NYC or any other big city, I’d be knocking on their door with a new franchise agreement that taxed them for as much as they are charging L3.
Just my $.02. But since I have Verizon’s FiOS service for video and data, I only marginally care, out of nothing more significant than latent spite and malice left over from many hours of wasted time hassling with their legal team over franchise terms. Good luck, L3. Don’t let ’em get away with it!
Dave Winer infected me with the node.js meme the other day. I’m sure it was innocent on his part, but the results have been a bit like John Pinette at a Chinese buffet. So much to try, so little time.
node.js has been around for a while. I remember looking at it almost a year ago and thinking that I liked my own Java version of poor man’s middleware, built out of Rhino, Derby, and my own home-grown object database, a lot better than the weird mishmash of pieces that seemed to be dumped into node.js.
And all the pieces are now there to build just about any sort of Web app you can imagine. Modules exist to tie together just about any popular database, web framework, CMS, or web service you can think of. And it’s all designed to be maximally scalable and take proper advantage of multi-core processors. It’s like someone dumped out a huge pile of all the Lego pieces in the world and sat me down in the middle of it.
Unfortunately, just like with real Legos, I now have to figure out what to make out of them.
I’ve just finished a weekend-long battle with Apple’s RAID software. In the end, it saved my bacon. But the story is longer than that and holds a couple of warnings about backup strategies. First some background.
I have an old PowerBook running OS X Server that plays the role of media server. It had 3 TB of drive space connected to it. 1 TB was set aside as a network-shared Time Machine back-up drive and the other 2 TB were in a mirrored RAID array. Because movie files, audio, photos, ebooks, etc. take up a lot of space, the decision was made to keep them on a RAID, rather than trying to keep them backed up with Time Machine or some sort of removable media.
Somewhere along the way, one of the two drives in the RAID array failed, and here comes the problem. Apple’s OS X Server product and its software RAID give you no alarm or warning that a RAID array is degraded or failing. The only way I noticed the problem was because movies weren’t streaming properly off the server and I had to use the Disk Utility app to see why one drive wasn’t mounting.
It was at that point that I happened to click on the RAID tab and noticed a big red warning and the word “degraded” next to the RAID volume. Had I not bothered to investigate, it is entirely possible that the second drive in the RAID array could have failed or been corrupted by the first, completely defeating the purpose of the RAID in the first place.
So a few lessons learned:
1. Don’t put cheesy, low-end Western Digital hard drives in a RAID array. You’ll just end up having to constantly rebuild the array.
2. Don’t trust OS X Server to give you any sort of warning about your failing RAID array. Check early and often using Disk Utility.
3. Regardless of 1 and 2, a RAID array is a great way to avoid more expensive or time-consuming back-up options like Time Machine or tons of removable media.
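In the spirit of lesson 2, here’s a sketch of the kind of watchdog I wish I’d had: a few lines of JavaScript (runnable under node.js, to stay on theme) that scan the output of `diskutil checkRAID` (the command-line cousin of Disk Utility’s RAID tab on older OS X releases) for any RAID set that isn’t reporting “Online”. The field names in the sample output are an assumption from memory, so verify against your own machine’s output before trusting the parser.

```javascript
// Return the names of RAID sets whose reported status isn't "Online".
// Input is assumed to look roughly like:
//   Name: Media
//   Status: Degraded
function degradedArrays(checkRaidOutput) {
  const bad = [];
  let currentName = null;
  for (const line of checkRaidOutput.split('\n')) {
    const name = line.match(/^Name:\s+(.*)$/);
    if (name) currentName = name[1].trim();
    const status = line.match(/^Status:\s+(.*)$/);
    if (status && status[1].trim() !== 'Online' && currentName) {
      bad.push(currentName);
    }
  }
  return bad;
}

// Hypothetical usage: run from cron/launchd and mail yourself the result.
// const { execSync } = require('child_process');
// console.log(degradedArrays(execSync('diskutil checkRAID').toString()));
```

Wire that to an email or a Growl notification and you’ve bought back the warning Apple never gives you.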
p.s. This is partially an exercise in trying out the WordPress -> Twitter functionality. Apologies if anyone finds this post boring.
I’ve been working with Dave Winer’s “feedhose” system for the past several days and have run into a problem that will likely face everyone trying to leverage this technology.
Since the RSS spec is fairly lenient, things like title and description tags can have all sorts of mark-up in them. (Don’t get me started on the issue of mark-up in RSS titles; it’s a major pet peeve of mine!) Anyway, it’s obviously possible to stick JSON syntax into the content of an RSS feed.
This makes interpreting the JSON data coming from Dave’s RSS-to-JSON conversion process a bit complicated. So here are the questions at hand:
The answers will help the feedhose stack properly and securely pass RSS up to clients for display, so please give freely of your painful past experience.
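To make the hazard concrete, here’s a tiny node.js sketch of the failure mode: if the RSS-to-JSON converter builds its output by string concatenation, an item whose title contains quotes (or whose description contains JSON syntax) yields invalid JSON, while a real serializer escapes everything correctly. The field names here are hypothetical, not feedhose’s actual schema.

```javascript
// An RSS item with hostile-but-legal content: quotes, entities, markup,
// and even JSON syntax inside the description.
const item = {
  title: 'He said "hello" & left',
  description: '{"looks":"like JSON"} <b>markup</b> too'
};

// Naive conversion: the embedded quotes terminate the JSON string early,
// producing syntactically invalid output.
const naive = '{"title": "' + item.title + '"}';

// Proper serialization: quotes and braces are escaped as string data, so
// the content round-trips intact.
const safe = JSON.stringify(item);
const roundTrip = JSON.parse(safe);
console.log(roundTrip.description === item.description); // true
```

The moral: treat RSS content as opaque string data all the way through the pipeline and let the serializer on each hop do the escaping.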
In response to Dave Winer’s post about bootstrapping a federated 140 character message system, I doodled up a spec for a hypothetical service called “F140”. I have a basic implementation of it working and am interested in hearing comments and feedback. The implementation is VERY simple and barely scratches the surface of the issues related to this sort of thing. But in the spirit of bootstrapping, please consider the F140 Proposal.
Please note, also, that this page is extracted from the running system, so ignore the links at the bottom of the document.
I decided to get an iPad. I wanted it specifically to use as a platform for work. I wanted to see if it could hold its own against my MacBook Pro for the usual daily tasks around the office as well as use and evaluate it as a platform for potential customer applications.
So, I took it to work today and left my MacBook at home. And guess what? With the exception of one critical type of task, there wasn’t a single thing I needed to do that the iPad didn’t handle fine. I scheduled two meetings, answered all my email, previewed two presentations, worked on some content in our Wiki, made a couple of phone calls (Skype), and checked the traffic on the way home.
That involved using only one app that isn’t part of the basic, out-of-the-box iPad, and that was Keynote from the iWork suite. It happily worked with the Exchange server in the office, the MediaWiki install, and even hooked up OK to the projector for briefings.
What it couldn’t do, even with the full set of iWork apps (Pages, Numbers, Keynote), was anything resembling a decent outline. No concept of outlining exists anywhere that I could find, at least not in the sense of what MS Word or PowerPoint can do. And it’s not a function I can do without on a regular basis.
Outlining in a Wiki is possible, but difficult to share (and preview, and reorganize, and restyle, etc.) and requires a live network connection of some sort. But the stock iPad plus iWork apps just can’t. Not in any useful way.
The device is pitched to students and teachers as a great note taking tool. I say BS. It’s not, unless what you are writing is a disorganized pile of text.
I know there are several 3rd party apps for outlining, but they suffer from a distinct lack of integration with iTunes and the iWork online site for file sharing. And I really don’t think this is a feature to be omitted from the iWork apps.
But in the absence of this feature, I’m still going to have to drag my laptop to meetings unless someone has a better idea.
The fat ping addition to the rssCloud infrastructure is a great new feature. After pondering the myriad application opportunities it opens up, it occurs to me that there would be value in expanding beyond the implicit content types supported in RSS. As it stands now, the current CloudPipe examples pass through the same RSS content fields present in the originating feed. That means clients will only expect to see plain text or text with embedded (x)HTML mark-up tags and entities.
The ability to embed rich media types directly in the fat ping message would open up a lot of new application types. Image feeds, pushed audio (think instant voice mail), and other sorts of mixed media (multipart MIME, HTML with accompanying media, for example) would all be possible if clients knew how to interpret the type of media before trying to render it. Adopting an optional attribute that specified a MIME type and possible encoding format for content-containing entities would be a HUGE addition to this extension without requiring much, if any, additional work by client authors.
An example might look like:
<description type="image/jpeg" encoding="base64">...base64 encoded jpeg spew here...</description>
The PubSubHubbub spec supports embedding non-text content types in fat pings as a side effect of the Atom spec’s “type” attribute on content entities. However, it doesn’t explicitly support specifying an encoding scheme.
As with Atom, RSS assumes specific types (e.g., text with embedded mark-up) in the absence of any specifier. So the addition of a “type” attribute is purely optional and would provide clients looking for it with rendering hints as well as an ability to avoid trying to display content types they don’t understand. The up-side is that we get the ability to push all sorts of media and not just text. I think this would be an awesome addition going forward and it should be completely backward compatible with any RSS clients that parse properly formed XML.
Adding both content type and encoding format makes the payload of a fat ping essentially identical to the body of an HTTP response in terms of information available to the client. And every modern OS out there has all the tools needed to interpret and render that sort of message. It also provides a small, but critical addition not present in the Atom/PSHB stack with respect to encoding schemes.
So, my modest proposal is to add optional “type” and “encoding” attributes to the <title> and <description> entities as described above.
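To show how little work the proposal asks of client authors, here’s a hypothetical node.js consumer of the two attributes: when they’re absent it falls back to RSS’s implicit text-with-markup behavior, and when a base64 encoding is declared it decodes the payload into raw bytes for whatever renders the named MIME type. All the names here are illustrative, not part of the proposal itself.

```javascript
// Interpret a parsed <description> (or <title>) element under the
// proposed optional "type" and "encoding" attributes.
function decodeDescription({ content, type, encoding }) {
  // RSS's implicit default: no attributes means text with embedded
  // markup, passed through untouched.
  const mediaType = type || 'text/html';
  if (encoding === 'base64') {
    // Binary payloads (e.g. image/jpeg) come back as a Buffer the
    // client can hand to whatever renders that MIME type.
    return { mediaType, body: Buffer.from(content, 'base64') };
  }
  return { mediaType, body: content };
}

// Hypothetical usage, mirroring the example element above:
const jpeg = decodeDescription({
  content: Buffer.from('...jpeg bytes...').toString('base64'),
  type: 'image/jpeg',
  encoding: 'base64'
});
console.log(jpeg.mediaType); // "image/jpeg"
```

A client that doesn’t recognize the declared type can simply skip rendering, which is exactly the graceful-degradation behavior described above.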
Well, not exactly, but if I’d wanted to win the DARPA Network Challenge, here’s what *I* would have done.
First of all, I’d have settled for splitting the $40,000 prize. I’d have enlisted 2 other people, one on the other coast and one somewhere in the middle, and offered them $5000 apiece to play along. Each of us would have gotten up this morning and lofted 3 or 4 big red weather balloons of our own around San Francisco, Austin, and the DC area.
Then we’d have enlisted a couple of our high-flow Twitter buddies for another $2500 apiece to toss out the following offer:
The first person who sends me legit coordinates for a particular red DARPA balloon will be guaranteed $2500 of our winnings.
That means we’d be out another $25,000 for all 10 coordinates (obviously we aren’t paying for our decoys; they aren’t DARPA’s). Add the $10,000 for my co-conspirators and the $5,000 for the Twitter buddies, and the whole $40,000 is spoken for. A few more posts around the net in our spare time, and bam! All 10 coordinates show up and everyone’s a winner.
I’d be surprised if anyone tried all the elements of this strategy, but it’s sure what I’d do. Fake out the competition and incentivize people who might not otherwise stand a chance at the $40,000.
So, anyone seen any red balloons today? $2500 of my winnings if you have!
Given the amount of interest people seem to be showing in the Twitter proxy concept, I’m going to take a shot at making one. If you’d like to help, here’s what I am thinking:
I learned a lot about the pros and cons of using Google App Engine while making my Java implementation of an rssCloud server, rssNimbus. It seems like a perfect platform for hosting a thin set of Twitter APIs that can form the basis of a hackable open source effort at giving Twitter a brain.
I’m going to start a SourceForge or Google Code project to host the Java code. As for collaborating on the proxy “language”, I’m open to suggestions. Google Docs? Wave? Mailing list?
Anyone who is interested in participating please drop me an email at “chuck -at- shotton dot com” or post a comment here.