A Canvas for Thought

October 14, 2007

Social Networking: What’s Coming After Facebook

Filed under: ideas — vednis @ 9:30 pm

Facebook is a thinly veiled economic machine that turns your online identity into a monetizable commodity. We sell our digital souls for the privilege of playing in a garden walled by banner ads, on paths carefully planned by product managers in Palo Alto. We may not look beyond the walls, and we are encouraged to bring our friends in to play. The same pattern has been repeated time and again since the beginning of the Internet.

It doesn’t have to be this way. The technology exists for us to control our online personal information, to dictate to whom and how much our identities will be revealed, while still participating in the network at large. You should have control of your online identity and personal information. It should be your right as a digital citizen. The next generation of social networking sites can provide that control, using simple tools that we have used for years.

Start with a personal website. Next, place all of your personal information into a database, including a list of links to your friends’ sites. Each friend is given a set of privileges that restricts their access to your information, perhaps assigned using groups such as ‘colleagues’ or ‘family’.
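
To make that concrete, here is a minimal sketch of how the privilege groups might gate access to profile fields. The field names, group names, and friend URLs are made up for illustration; this is not a proposed schema.

# Minimal sketch: friend groups gating access to profile fields (illustrative only).
PROFILE = {
    "name":     {"value": "Mars",             "visible_to": {"public"}},
    "email":    {"value": "mars@example.org", "visible_to": {"friends", "family"}},
    "birthday": {"value": "1979-06-01",       "visible_to": {"family"}},
}

FRIENDS = {
    # friend's site URL -> the groups you have placed that friend in
    "https://alice.example.net": {"friends", "colleagues"},
    "https://mom.example.net":   {"friends", "family"},
}

def visible_profile(requester_url):
    """Return only the fields the requesting friend is allowed to see."""
    groups = FRIENDS.get(requester_url, set()) | {"public"}
    return {field: entry["value"]
            for field, entry in PROFILE.items()
            if entry["visible_to"] & groups}

print(visible_profile("https://alice.example.net"))  # name and email only
print(visible_profile("https://mom.example.net"))    # name, email, and birthday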

You and each of your friends have a security key associated with your identity. That key uniquely identifies you, and it allows for secure, encrypted communication that can be read by you, and only you. (The system for creating and managing those keys already exists: it is called Public Key Infrastructure, and it is available as free software.)
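
As a rough illustration only (the post argues for PKI in general, not for any particular library), here is what key-based identity might look like using the third-party Python ‘cryptography’ package; the request string and the surrounding flow are invented.

# Sketch of key-based identity using the third-party 'cryptography' package
# (pip install cryptography). Illustrative only; any PKI tooling would do.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Each person generates a key pair once; friends store the public half.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# You sign each request for a friend's data with your private key...
request = b"GET /profile?fields=email,birthday"
signature = private_key.sign(request)

# ...and the friend's site verifies it against the public key it has on file.
try:
    public_key.verify(signature, request)
    print("verified: show this friend their allowed fields")
except InvalidSignature:
    print("unknown or forged request: show public fields only")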

Now for the piece that makes this software ‘like Facebook’. You have a personal homepage on your website, one that may look like Facebook, MySpace, or LinkedIn: any social network you desire. When you visit your personal site, your web browser will call all of your friends’ sites, ask for some of their personal information, and display that information on your page in the format you want. Alternatively, you may visit their sites directly to see their information in the format they desire; the choice is up to you. They know who is asking for the information because of the security key, and they are free to show you as much or as little information as they want.

The technical implementation appears relatively straightforward. The information passed can be based on microformats such as hCard and hCalendar. Data can be retrieved from your contacts’ sites and rendered on your screen using JavaScript. Public Key Encryption through OpenID ensures that your contacts’ information is only given out to trusted parties. The language used to write your friends’ sites or your own does not matter, so long as they speak the same data-exchange language. The network would be technology-independent and open to participation for all, combining commercial and Open Source solutions to create a larger structure.
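
Here is a rough sketch of the data-exchange side, written in Python rather than the in-browser JavaScript described above, and with the security-key check left out: fetch a friend’s page and pull a few hCard class names out of it. A real microformat parser would be considerably more careful.

# Sketch: fetch a friend's page and extract a few hCard fields.
# Standard library only; a real microformat parser is more thorough,
# and the security-key check described above is omitted here.
from html.parser import HTMLParser
from urllib.request import urlopen

HCARD_CLASSES = {"fn", "email", "tel", "url"}

class HCardExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._pending = None
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        classes = set((dict(attrs).get("class") or "").split())
        matched = classes & HCARD_CLASSES
        if matched:
            self._pending = matched.pop()

    def handle_data(self, data):
        if self._pending and data.strip():
            self.fields.setdefault(self._pending, data.strip())
            self._pending = None

def fetch_hcard(friend_url):
    html = urlopen(friend_url).read().decode("utf-8", errors="replace")
    extractor = HCardExtractor()
    extractor.feed(html)
    return extractor.fields

# fetch_hcard("https://alice.example.net/")  # e.g. {'fn': 'Alice', 'email': '...'}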

There are still opportunities to make money in the new social network landscape. Facebook can still exist, but the walls will have disappeared. Advertising-supported hosting of personal profiles, as Facebook offers today, is still possible, but it is not required in order to participate in your friends’ social networks. Opportunities exist for searching the large network of people beyond your first-degree contacts. LinkedIn would become such a service, allowing you to submit a profile to their database in return for searching across the tens of thousands of people in your third-degree network. Selling your personal data on a controlled basis becomes feasible: you may opt to give your detailed personal and demographic information (age, income, etc.) to an information broker in exchange for a monetary reward, or for one-to-one advertising that is actually relevant.

I believe that a network of personal information sites connected under the control of its users would constitute a new phase in the Internet’s development, opening doors to applications and utility as yet unseen. I hope it is only a matter of ‘when’ it will happen, and not a question of ‘if’.

Update: It looks like a few others are thinking in the same space. Good!


October 10, 2007

My Life Must Change

Filed under: perspectives — vednis @ 12:29 pm

Today begins a new chapter in my life. It is the first day in my professional career that I am out of a company’s employ. I was laid off. Our parting was on good terms, but the reality of the situation before me is deep, and harsh.

My life must change. The thought invokes thrill, fear, raw exhilaration, worry. It fully calls upon the support of my strength, family, faith, and reason. It tests me.

My skills are rarely seen – Python, Linux, Open Source, Agile development – and they catch the eye of others, recruiters especially. But the companies that use these skills are as unique as I am. The search is difficult, the opportunities rare; every resume and cover letter becomes a finely crafted shot bound for an elusive mark.

Others have offered to open doors, but to cross through them I must give up some of what makes me unique: I must forsake Python, and Linux, and find myself a place in the brave world of Microsoft’s .NET. My friends have faith in me, in my skills as a Software Engineer, and I thank them greatly for it, but to give up the path I have forged these past few years…

This part of Canada, Southern Ontario, is built on big business: manufacturing, finance, and commerce. Only a faint glimmer of the Silicon Valley startup scene shines here. Globally, perhaps 60% of programming is done with Java or .NET: here, 90% feels more accurate. Vancouver, on Canada’s West coast, is different. It feeds off the fervor to the South. But to move out West is to leave my family, my wife’s family, the place where I grew up, and for what? A computer language? A dream? An ideal?

The weight of it, my family’s well-being, rests squarely on my shoulders. My wife is at home, and cares for our children; it is a decision we made together, one we truly believe in, and I will fight for it until the bitter end. I am responsible for what comes next.

My life must change. And it is never easy.

September 23, 2007

Craigslist Software Jobs: Canada vs. San Francisco

Filed under: perspectives,wild speculation — vednis @ 11:57 am

I recently checked out the Canadian Craigslist scene after reading some posts by Guy Kawasaki.  I’ve heard that Craigslist is big in the San Francisco Bay area, but how is it catching on in Canada’s largest cities?  I devised a small survey to compare the regions, and wow, what a difference.

I compared Canada’s largest city, Toronto, and the surrounding area (known as the GTA), to the San Francisco Bay area.  I also threw in Montreal, Vancouver, and their surrounding areas.  I looked at some of my software specialities, including the Python programming language and Linux, and at some general Web2.0 activity indicators.

As searched on Sept. 23rd, 2007:

                                       S.F. Bay     GTA          Vancouver    Montreal
Population (2006)                      7 million    5.5 million  2.1 million  3.6 million
Python in Jobs                         386          27           23           16
Linux in Jobs                          1157         247          244          75
Ruby in Jobs                           259          31           55           6
Python in Gigs                         14           0            0            0
Ruby in Gigs                           41           0            0            0
AJAX in Jobs                           577          222          121          41
Developer in Jobs                      1206         772          456          141
Software Engineer in Jobs              1458         98           99           26
Programmer in Jobs                     269          234          169          80
Startup in Software Jobs               234          10           4            4
Startup in Internet Engineering Jobs   179          4            1            4

heri noted in the comments that the Montréal numbers are irrelevant, because most job postings will be in French.

Some interesting observations:

  • Why does everyone in S.F. want a Software Engineer, but in Toronto they only want Developers?
  • I have heard that the Linux scene in Toronto is pretty dead.  Answer: yep.
  • I have heard that Vancouver is a bigger tech hub, per capita, than Toronto: perhaps, going by the Cutting Edge Indicators (Ruby, Open Source/Linux, startups, and overall Craigslist usage).

Do keep in mind that this is just a survey, and unscientific in every way.  But do feel free to have fun with the numbers, and draw your own conclusions.


September 16, 2007

Rails, Y-Combinator, and the E-Myth Revisited

Filed under: perspectives — vednis @ 5:45 pm

I have recently been reading a book called The E-Myth Revisited.  The author, Michael Gerber, lays out the foundation for building a successful small business using a franchise model.  I decided to measure two successful projects from the Web2.0 world against the franchise model: Ruby on Rails, and Paul Graham’s startup-generating machine, Y-Combinator.  The results are interesting.

First, some background.  Gerber defines a successful franchise as a Turn Key Business Model.  You learn how the business model works, get the keys to the business, fire it up, and it succeeds; every time, it succeeds.  This model is followed by two of the most lucrative franchises in Canada: McDonald’s and Tim Horton’s.  Starting either restaurant means that you start a successful small business, guaranteed.

Put another way, a successful franchise is a system, powered by people, that is guaranteed to run properly, so long as the system guidelines are followed.  Tim Horton’s is a system for running a profitable doughnut shop.  Rails is a system for repeatedly building robust, flexible, featureful web applications.  And Y-Combinator is a system for turning wizardly hackers into Web2.0 startup entrepreneurs.

Gerber states six rules that a business model must follow to succeed at the franchise game.  We can see some interesting aspects of Y-Combinator’s and Ruby on Rails’ success if we look at how each follows Gerber’s small business franchise rules.  (Note that these rules must be taken together: they are an irreducible system, a whole of interrelated parts.)

1. The model will provide consistent value to your customers, employees, suppliers, and lenders, beyond what they expect.

Rails fills this role with flying colours.  Diving into Ruby, Rails, and the Pragmatic bookshelf will have you building those super-robust Web2.0 web applications very, very quickly.  And that is before you have even touched the larger Rails community, which is teeming with tutorials, plugins, and stories that will add untold value to an already successful application.

Y-Combinator is a dream come true for a hacker-turned-founder wanting to start a startup.  They deliver exceptional value by giving startup founders everything they need: legal advice and services, bookkeeping, market awareness, and other smart startup founders to bounce ideas off of.  The founders may be unbelievably creative, and they may turn out code like you wouldn’t believe, but they are still at great risk of running afoul of Gerber’s Fatal Assumption:  “if you understand the technical work of a business, you understand a business that does that technical work”.  That is, wizardly hacking != knowing how to run a business that employs wizardly hackers.  Y-Combinator is a system that teaches you, the wizardly hacker, how to run a business.

2. The model will be operated by people with the lowest possible level of skill.

Now, I should clarify exactly what “lowest possible” means.  The lowest possible level of skill may be “brain surgeon”.  The model must be operable at the lowest possible skill level because highly skilled people are expensive.  In comparing two similarly successful franchise models for brain surgeons, the one that requires less skill to run will be more successful simply because it has a larger eligible talent pool to draw from.  It would be easier to set up a franchise that requires “one brain surgeon and two assistants”, rather than one that requires “three brain surgeons”.

Rails makes doing difficult things simple because many tasks involving complex concepts have been neatly abstracted away.  Need a database table to store and maintain a hierarchical tree structure?  You need not read about nested sets, adjacency lists, SQL syntax, or time/space performance tradeoffs, and, more importantly, you need not spend time debugging the whole mess.  You simply add the line ‘acts_as_tree’ to your model, and you’re done!
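
For a sense of what that one line hides, here is a hand-rolled sketch of the same idea: an adjacency-list tree in plain SQL, shown here in Python with sqlite3. The table and data are made up, and the Rails plugin of course does far more than this.

# What 'acts_as_tree' roughly abstracts away: an adjacency-list tree in SQL.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE categories (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT)")
db.executemany(
    "INSERT INTO categories (id, parent_id, name) VALUES (?, ?, ?)",
    [(1, None, "root"), (2, 1, "books"), (3, 1, "music"), (4, 2, "fiction")],
)

def children(node_id):
    """Immediate children of a node."""
    return db.execute(
        "SELECT id, name FROM categories WHERE parent_id = ?", (node_id,)
    ).fetchall()

def ancestors(node_id):
    """Walk parent pointers up to the root."""
    chain = []
    while True:
        row = db.execute(
            "SELECT parent_id FROM categories WHERE id = ?", (node_id,)
        ).fetchone()
        if row is None or row[0] is None:
            break
        chain.append(row[0])
        node_id = row[0]
    return chain

print(children(1))   # [(2, 'books'), (3, 'music')]
print(ancestors(4))  # [2, 1]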

Y-Combinator makes startups accessible to ordinary wizardly hackers.  Not hackers like Miguel de Icaza, or Paul Graham himself, but normal wizardly hackers with a bright idea that can be turned into code in three months.

3. The model will stand out as a place of impeccable order.

Here we see Rails shine through.  If you understand “The Rails Way”, and “The Ruby Way”, then many of the design decisions encountered while building a typical web application have been made for you.  There is an obvious place for everything, every piece of functionality or code.  Application deployment is clean, orderly, and portable across web hosts.  The risks and rewards of adhering to The Way are clearly laid out; they are flexible, but departing from The Way will involve trading away some of the advantages that the platform brings.  (The Pragmatic books do an excellent job of communicating this point.)

Now, applying this rule to Y-Combinator is difficult for me, since I know nothing about how the course internals work.  But I would assume it is a case of first things first: get your idea written down, and get your legal paperwork out of the way.  Perhaps some of the “impeccable order” is evident in Y-Combinator’s stringent selection process: they prefer to fund startups that fit a certain profile, which increases their chance of success.

4. All work in the model will be documented in operations manuals.

The operations manuals for Rails are obvious: the Pragmatic books, the various cookbooks, and so forth.  Follow these, learn The Way, and success with your Rails projects is guaranteed.

Once again, I don’t know if Y-Combinator has anything like a manual.  But I will bet that many people would love to get their hands on the PowerPoint slide decks that the company shows to each round of startup founders.

5. The model will provide a uniformly predictable service to the customer.

A company that understands Rails development will produce web applications that are flexible and robust.  However, Rails does not guarantee a decent user interface, competent project management, or clear client communication (look to SCRUM for that).

I would assume that Y-Combinator successfully delivers Web2.0 startups, because they are still going.

6.  The model will utilize a uniform colour, dress, and facilities.

OK, you got me on this one.  I can’t apply this rule to Rails or Y-Combinator without bandying about evidence of near-ubiquitous Mac, Linux, and TextMate usage in the web and startup communities.

So there you have it, an attempt to apply a new-found model to my world.  It will be interesting to see if Gerber’s ideas can be applied to other business models in the technology space: is Y-Combinator the only sure-fire way to build a successful startup incubator?  How well does Y-Combinator fill out the market strategy, customer awareness, and visionary components of a successful business?  Would it be possible to provide a Rails Turn Key Business franchise, a system for building world-class Rails shops, one that provides business assistance in the same way that Y-Combinator helps startups?

The E-Myth Revisited is a good book: I’d recommend it to anyone thinking about going into business for themselves.

July 18, 2007

Facebook: the ultimate P2P darknet enabler?

Filed under: ideas,wild speculation — vednis @ 6:16 am

Could Facebook be used as the catalyst for a new generation of Peer-to-Peer darknet applications?

Briefly, a darknet is a private virtual network where users only connect to people they trust.  This is very similar to the networks that Facebook builds. Trust is the key.  You connect to close friends and relatives, giving them access to personal content that your larger network as a whole is not privy to.

Facebook could become an enabler for these networks, in that it provides a common point in the network through which you may connect with those trusted people.  Not directly, but via Facebook’s new applications interface, or via existing network tools that Facebook supports directly, such as MSN, Gmail, etc.

One such darknet application may be Peer-to-Peer shared backups.  Imagine making an agreement with your relatives that you would each devote 2GB of hard-drive space to keeping the family photo pool backed up.  Some clever Open Source software could keep the photo pool maintained, distributed among all of your computers.  The sharing tool could use a Facebook application for peer discovery.
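
Here is a rough sketch of one piece of that idea: deterministically assigning each photo’s replicas to peers, so that every machine agrees on who holds what without extra coordination. The peer names and replica count are invented, and the Facebook peer-discovery step is not shown.

# Sketch: assign each photo's replicas to family machines by hashing its name,
# so every peer computes the same placement. Peer list is illustrative only.
import hashlib

PEERS = ["mars-laptop", "mom-desktop", "brother-pc", "cousin-mini"]
REPLICAS = 2  # each photo is stored on two machines

def replica_peers(photo_name):
    """Pick REPLICAS peers for a photo, the same way on every machine."""
    digest = int(hashlib.sha1(photo_name.encode("utf-8")).hexdigest(), 16)
    start = digest % len(PEERS)
    return [PEERS[(start + i) % len(PEERS)] for i in range(REPLICAS)]

for photo in ["beach-2007.jpg", "birthday-03.jpg", "cottage.jpg"]:
    print(photo, "->", replica_peers(photo))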

You could even route new content over existing tools.  I wonder: if you installed the iLike application, could you use it to publish new content for your friends, and hook an Open Source content-sharing tool into the iLike interface to handle the transfer?  iLike publishes what’s new, Facebook publishes your content share points, and the Open Source tool handles the data.

Using Facebook to publish content discovery and sharing points opens the door to federated services, allowing you to get your network out of the hands of commercial parties.  If I could publish the address of a personal server on Facebook (a server that I own, running my own services), then I could start building networks and sharing with others outside of Facebook.  And I would once again have control of my identity within those networks – I wouldn’t have to rely on the Facebook privacy controls, or anything like that.

Just some ideas.

July 13, 2007

Setting a custom Ruby GEM_HOME on Ubuntu Feisty

Filed under: Uncategorized — vednis @ 1:23 pm

Here is a quick and painless solution for setting your own Ruby GEM_HOME in Ubuntu Feisty.

First, I’ll assume that you have installed the ruby and rubygems packages, and set your new GEM_HOME:


$ sudo apt-get install ruby rubygems
...
$ mkdir -p /home/mars/lib/ruby/gems
$ export GEM_HOME=/home/mars/lib/ruby/gems

All looks well, until we try to install something with the ‘gem’ command:


mars@sol:~/tmp$ gem install rake
/usr/lib/ruby/1.8/rubygems/custom_require.rb:27:in `gem_original_require': no such file to load -- sources (LoadError)
    from /usr/lib/ruby/1.8/rubygems/custom_require.rb:27:in `require'
    from /usr/lib/ruby/1.8/rubygems/remote_installer.rb:462:in `sources'
    from /usr/lib/ruby/1.8/rubygems/remote_installer.rb:472:in `source_index_hash'
    from /usr/lib/ruby/1.8/rubygems/remote_installer.rb:436:in `install'
    from /usr/lib/ruby/1.8/rubygems/gem_commands.rb:263:in `execute'
    from /usr/lib/ruby/1.8/rubygems/gem_commands.rb:225:in `each'
    from /usr/lib/ruby/1.8/rubygems/gem_commands.rb:225:in `execute'
    from /usr/lib/ruby/1.8/rubygems/command.rb:69:in `invoke'
    from /usr/lib/ruby/1.8/rubygems/cmd_manager.rb:117:in `process_args'
    from /usr/lib/ruby/1.8/rubygems/cmd_manager.rb:88:in `run'
    from /usr/lib/ruby/1.8/rubygems/gem_runner.rb:28:in `run'
    from /usr/bin/gem:23
mars@sol:~/tmp$

Oops.  We are missing a file called ‘sources.rb’.  That file, curiously enough, is contained in the ‘sources’ gem.

So, we try this:


mars@sol:~/tmp$ gem install sources
/usr/lib/ruby/1.8/rubygems/custom_require.rb:27:in `gem_original_require': no such file to load -- sources (LoadError)
...

Same error. Drat!

Thankfully, there is a simple solution.  The sources gem was installed in Feisty’s default gem cache:


mars@sol:~/tmp$ locate sources.rb
/var/lib/gems/1.8/gems/sources-0.0.1/lib/sources.rb

mars@sol:~/tmp$ ls /var/lib/gems/1.8/cache/
sources-0.0.1.gem

We can install the original gem into our new GEM_HOME.  Just make sure that you pass the ‘--local’ switch to the gem command, so that it doesn’t check for remote sources!


mars@sol:~/tmp$ gem install --local /var/lib/gems/1.8/cache/sources-0.0.1.gem
Successfully installed sources, version 0.0.1

Now everything works as expected; we can install gems without using sudo:


mars@sol:~/tmp$ gem install rake
Bulk updating Gem source index for: http://gems.rubyforge.org
Successfully installed rake-0.7.3

P.S. Don’t forget to set your PATH!

$ export PATH=$GEM_HOME/bin:$PATH

July 12, 2007

Possibly the best software I have ever used

Filed under: cool,ideas — vednis @ 9:07 pm

My wife has been wanting to relearn French for a while, and she wanted to try Rosetta Stone.  After seeing and playing around with the Flash demo, all I can say is, WOW.  This is probably the best-designed software I have ever used.

The interface is so simple that a four-year-old could easily use it.  The content used to teach the lessons is spectacular.  And, best of all, the software places no barriers in front of the user.  The lessons have a flow, but you are not required to follow it – you can move forward or back, change the difficulty, time yourself, score yourself, but only at your own request.

I can’t stress that last point enough: You are in complete control of the software.

This is the way learning should be.  Structured by the learner, for the learner.  The goals you set for yourself while using the software are personal, based on your comfort level and motivation.  The consequences are profound: you are motivated to set your goals higher, because there is no fear of failure, no penalty.  Being motivated means you actually learn.  Teaching a language by rote memorization doesn’t even compare!

Ok, time for some wild speculation: the technique used to present the language is conversational, repetitive, immediately applicable, and predictable.  The language constructs follow each other logically, weaving in and out, building on what came before.

So the question is: could we apply the Rosetta techniques to learning computer languages? :)

Some of the bases are already covered: we learn “hello world”, and move on from there, picking up the patterns along the way: here’s a procedure, this is a data structure, this is binding and scope.  But this is picked up slowly, by reading about it, often a chapter (or an example) at a time.  Could we speed things up, improve retention, and quickly get a feel for the flavour of the language?

I have already come across some interesting techniques for learning a new language, example programs worth porting, and such.  But I still think we could do better.

I’ll post more on this later.  Stay tuned!

June 13, 2007

hresume – Microformat for publishing resumes

Filed under: Uncategorized — vednis @ 11:43 am

I looked at the idea of a Resume Markup Language in my previous post. That post stated my belief that a community-driven standard would quickly gain traction, and it turns out that one already has. I discovered the hresume microformat, a means for tagging web page HTML elements with resume-specific qualifiers.

Well, that saves a lot of work. :)

June 12, 2007

Resume Markup Language

Filed under: ideas — vednis @ 12:56 pm

I was in a meeting earlier today where the owner of a company spoke about the problems they had with hiring for a new position. One big problem was the variety of job applicant information coming in the door: cover letters, portfolios, and packaging all varied widely. Small companies see a big problem when dealing with 200 or more resumes like that.

The company’s solution, and, I might add, a standard one, was to offer a web form for applicants to fill out. But the solution takes more away from the job applicant than it gains for the employer. It changes the tone of the conversation.

The applicant is putting their best foot forward – their resume is their golden ticket, and they have invested everything into personalizing it, to impress. But the employer has a hundred tickets to deal with, each ticket’s value being reduced by the number of other tickets it competes against. So the employer changes the conversation; they limit the information exchange to what fits on a blank web form. An analogy would be a person buying a new mid-sized car, having a large number of makes and models to choose from, and many salesmen vying. But the first thing the car buyer asks the salesman is “please, only tell me the color, and the price.” Truncating the conversation not only demoralizes the seller, but key aspects of the offer may be missed: deals, leasing, seller reputation, and so forth.

Another conversation would allow the seller to present all that they have to offer, in a common language. The buyer could filter the information to items of interest. In the case of the job hunter, you want to present your resume, portfolio, cover letter, everything. And the employer wants to see all 100 resumes through a common lens, with the option to see the original offers in their unfettered form.

A structured resume markup language would solve the problem nicely. The document would hold all aspects of the job application process under its domain. You, the job hunter, may still submit everything, but the employer can cherry-pick aspects of the offer as it suits them, filtering on keywords, and archiving information in a machine-searchable format alongside the original document.

One idea would be “subjective flags”; one may mean ‘I put effort into the visual design of this item, please look at it’. It’s an attempt to preserve the plurality of a resume: it speaks to your skills in writing, information design, and aesthetics. Your traditional, hand-crafted resume becomes a member of your portfolio, flagged subjectively.
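
To make the idea a little more tangible, here is one possible shape for such a document, sketched as plain data in Python, with the subjective flags attached and the kind of keyword filter an employer might run. The field names and flag vocabulary are invented, not a proposed spec.

# One possible shape for a structured resume. Field names are illustrative.
RESUME = {
    "name": "Jane Applicant",
    "skills": ["python", "linux", "agile"],
    "experience": [
        {"title": "Developer", "org": "Acme", "years": 3, "keywords": ["python", "web"]},
    ],
    "attachments": [
        {"file": "resume.pdf", "subjective_flags": ["hand-crafted visual design"]},
        {"file": "portfolio.html", "subjective_flags": []},
    ],
}

def matches(resume, required_keywords):
    """Employer-side filter: does the application mention every required keyword?"""
    mentioned = set(resume["skills"])
    for job in resume["experience"]:
        mentioned.update(job["keywords"])
    return required_keywords <= mentioned

print(matches(RESUME, {"python", "linux"}))  # True
print(matches(RESUME, {"python", ".net"}))   # False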

Formats become less of an issue as well. Different formats are faces of the same information, some crafted, some plain. HTML resumes become easier to process. PDF resumes come along beside the markup form. Extraction from office suites is possible, as is format-independent document versioning – the structure is already there, but no one has called it out.

A quick Google search for this idea turned up a little thought along these lines from the HR-XML crowd, but that spec looks a little heavy for our practical needs (though the resume.dtd they offer may be useful). A community de-facto standard would gain traction much more quickly, enabling Open Source software to take root.

We are reducing the friction in conversations between employers and prospective employees; the magnitude of that reduction will indicate this idea’s value.

Interesting stuff.

June 8, 2007

Smart-tagging email

Filed under: ideas — vednis @ 12:48 pm

Here is a random idea: why can’t my email program, using Bayesian filtering and hints from me, make a reasonable guess as to an inbound email’s subject? For any topic in my inbox, not just Spam? Why can’t it pre-assign some tags for me based on its guesses?

The UI concept: the user is presented with a bar full of text tags along the top of the email message pane. Middle-clicking on a tag removes it, making it fast and easy to remove bad guesses. Right-clicking on a tag opens a menu containing sub-category tags: this makes it easy to refine the message category.

The user can also add tags by highlighting a word in the message, right-clicking on it, and navigating a hierarchy of tags based on our common “root” categories (similar to our top-level folders, or email labels). The user navigates the tag hierarchy to an appropriate sub-category, and adds the selected text as a new tag in said category. Call this ‘explicit tagging’.

Alternatively, right-clicking on a word, and navigating to an existing tag provides that word as a hint for the filtering system. Call this ‘implicit tagging’.

Every time a tag is added or removed the filtering software takes that as a hint for future mail processing, helping it to guess better next time.
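
Here is a bare-bones sketch of that feedback loop, in Python: count word/tag co-occurrences, score incoming mail against them, and let every added or removed tag adjust the counts. A real filter would need proper tokenization, smoothing, and persistence; the example messages and tags are made up.

# Sketch of the tagging feedback loop: naive word/tag counts, nothing more.
from collections import defaultdict

word_tag_counts = defaultdict(lambda: defaultdict(int))  # word -> tag -> count
tag_counts = defaultdict(int)

def learn(message, tag, weight=1):
    """Called when the user adds a tag (weight=+1) or removes one (weight=-1)."""
    tag_counts[tag] += weight
    for word in message.lower().split():
        word_tag_counts[word][tag] += weight

def suggest_tags(message, limit=3):
    """Score tags by how strongly this message's words point to them."""
    scores = defaultdict(float)
    for word in message.lower().split():
        for tag, count in word_tag_counts[word].items():
            if count > 0 and tag_counts[tag] > 0:
                scores[tag] += count / tag_counts[tag]
    return [tag for tag, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:limit]]

learn("release notes for the rails deployment", "work")
learn("birthday dinner at mom and dad's place", "family")
print(suggest_tags("deployment broke after the release"))  # ['work']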

As far as I can tell this goes beyond the Filter/Label/Folder model of email used by 99% of the mail clients out there, and brings in a fast and easy way to add, remove, and change context-sensitive message meta-data. What’s more, the system learns as you go, taking grunt work out of your hands.

In a way it is a real shame that Gmail is becoming the personal nerve center for interacting with the web. This would be *extremely difficult* to build into Gmail because, with Gmail, you don’t own your data! A Mozilla Thunderbird or Outlook plugin, on the other hand…

