A Canvas for Thought

October 14, 2007

Social Networking: What’s Coming After Facebook

Filed under: ideas — vednis @ 9:30 pm

Facebook is a thinly veiled economic machine that turns your online identity into a monetizable commodity. We sell our digital souls for the privilege of playing in a garden walled by banner ads, on paths carefully planned by product managers in Palo Alto. We may not look beyond the walls, and we are encouraged to bring our friends in to play. The same pattern has been repeated time and again since the beginning of the Internet.

It doesn’t have to be this way. The technology exists for us to control our online personal information, to dictate to whom and how much our identities will be revealed, while still participating in the network at large. You should have control of your online identity and personal information. It should be your right as a digital citizen. The next generation of social networking sites can provide that control, using simple tools that we have used for years.

Start with a personal website. Next, place all of your personal information into a database, including a list of links to friends’ sites. Each friend is given a set of privileges restricting their access to your information, perhaps assigned using groups such as ‘colleagues’ or ‘family’.
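As a sketch, that privilege model is just a few dictionaries. Everything below is hypothetical (the names, fields, and group labels are invented), purely to show the shape of the idea:

```python
# A minimal sketch of group-based privileges: personal data plus a
# per-group whitelist deciding what each friend may see.

PROFILE = {
    "name": "Alice Example",
    "email": "alice@example.org",
    "phone": "555-0100",
    "birthday": "1980-01-01",
}

# Which profile fields each group is allowed to read.
GROUP_FIELDS = {
    "family": {"name", "email", "phone", "birthday"},
    "colleagues": {"name", "email"},
    "public": {"name"},
}

# Friends are mapped to groups by the site owner.
FRIEND_GROUPS = {
    "https://bob.example.net/": "family",
    "https://carol.example.com/": "colleagues",
}

def visible_profile(friend_url):
    """Return only the fields the given friend is entitled to see."""
    group = FRIEND_GROUPS.get(friend_url, "public")
    allowed = GROUP_FIELDS[group]
    return {k: v for k, v in PROFILE.items() if k in allowed}
```

A stranger falls back to the ‘public’ group and sees only a name; family sees everything.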

You and each of your friends have a security key associated with your identity. That key uniquely identifies you, and allows for secure, encrypted communication that can be read by you, and only you. (The system for creating and managing those keys is already in place; it is called Public Key Infrastructure, and it is available as free software.)

Now for the piece that makes this software ‘like Facebook’. You have a personal homepage on your website, one that may look like Facebook, MySpace, LinkedIn; any social network you desire. When you visit your personal site your web browser will call all of your friends’ sites, ask for some of their personal information, and display that information on your page in the format you want. Alternatively, you may visit their sites directly to see their information in the format they desire; the choice is up to you. They know who is asking for the information because of the security key, and they are free to show you as much or as little information as they want.

The technical implementation appears relatively straightforward. The information passed can be based on microformats such as hCard and hCalendar. Data can be retrieved from your contacts’ sites and rendered on your screen using JavaScript. Public-key encryption through OpenID ensures that your contacts’ information is only given out to trusted parties. The language used to write your friends’ or your own site does not matter, so long as they speak the same data-exchange language. The network would be technology-independent and open to participation for all, combining commercial and Open Source solutions to create a larger structure.
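To make the exchange concrete, here is a rough sketch of the request flow. A real deployment would use OpenID or public-key signatures as described above; an HMAC shared secret stands in for that handshake here, and all names and data are invented:

```python
# Sketch of the identity-checked data exchange: the requester signs a
# request, the friend's site verifies who is asking, then returns only
# what that requester is entitled to see (an hCard-style record here).
import hmac
import hashlib

KEYS = {"https://bob.example.net/": b"bob-secret"}  # hypothetical key store

def sign_request(requester, resource, key):
    """Sign 'who is asking, and for what' with the requester's key."""
    msg = (requester + "|" + resource).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def handle_request(requester, resource, signature):
    """Verify the signature, then return that requester's view of the data."""
    key = KEYS.get(requester)
    if key is None:
        return {"error": "unknown requester"}
    expected = sign_request(requester, resource, key)
    if not hmac.compare_digest(expected, signature):
        return {"error": "bad signature"}
    # In the real system this would be an hCard rendered per-privilege.
    return {"fn": "Alice Example", "url": "https://alice.example.org/"}
```

The point is only the shape: the signature tells the site who is asking, and the site decides what to reveal.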

There are still opportunities to make money in the new social network landscape. Facebook can still exist, but the walls will have disappeared. Advertising-supported hosting of personal profiles, as Facebook does, is still possible, but not required in order to participate in your friends’ social networks. Opportunities exist for searching the large network of people beyond your first-degree contacts. LinkedIn would become such a service, allowing you to submit a profile to their database in return for searching across the tens of thousands of people in your third-degree network. Selling your personal data on a controlled basis becomes feasible: you may opt to give your detailed personal and demographic information (age, income, etc.) to an information broker in exchange for a monetary reward, or for one-to-one advertising that is actually relevant.

I believe that a network of personal information sites connected under the control of its users would constitute a new phase in the Internet’s development, opening doors to applications and utility as yet unseen. I hope it is only a matter of ‘when’ it will happen, and not a question of ‘if’.

Update: It looks like a few others are thinking in the same space. Good!


July 18, 2007

Facebook: the ultimate P2P darknet enabler?

Filed under: ideas,wild speculation — vednis @ 6:16 am

Could Facebook be used as the catalyst for a new generation of Peer-to-Peer darknet applications?

Briefly, a darknet is a private virtual network where users only connect to people they trust.  This is very similar to the networks that Facebook builds. Trust is the key.  You connect to close friends and relatives, giving them access to personal content that is not shared with your larger network as a whole.

Facebook could become an enabler for these networks, in that it provides a common point in the network through which you may connect with those trusted people.  Not directly, but via Facebook’s new applications interface, or via existing network tools that Facebook supports directly, such as MSN, Gmail, etc.

One such darknet application may be Peer-to-Peer shared backups.  Imagine making an agreement with your relatives, that you would each devote 2GB of hard-drive space to keeping the family photo pool backed up.  Some clever Open Source software could keep the photo pool maintained, distributed among all of your computers.  The sharing tool could use a Facebook application for peer discovery.
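The placement logic for such a tool could start as simply as this sketch, which replicates each photo onto two of the pledged machines. Peer names, sizes, and the round-robin strategy are all made up for illustration:

```python
# Sketch of the shared-backup idea: each relative pledges 2 GB, and
# photos are spread round-robin so every file lives on at least two
# machines. A real tool would add encryption, sync, and peer discovery
# (the Facebook-application part of the idea above).

PLEDGE = 2 * 1024**3  # 2 GB per peer, as in the example above

def place_photos(photos, peers, copies=2):
    """Assign each (name, size) photo to `copies` distinct peers,
    round-robin, respecting pledged space. Returns {peer: [names]}."""
    used = {p: 0 for p in peers}
    placement = {p: [] for p in peers}
    i = 0
    for name, size in photos:
        placed = 0
        tried = 0
        while placed < copies and tried < len(peers):
            peer = peers[i % len(peers)]
            i += 1
            tried += 1
            if used[peer] + size <= PLEDGE and name not in placement[peer]:
                used[peer] += size
                placement[peer].append(name)
                placed += 1
        if placed < copies:
            raise RuntimeError("not enough pledged space for " + name)
    return placement
```

With three peers and two copies per photo, every photo survives any single machine dying.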

You could even route new content over existing tools.  I wonder: if you could install the iLike application, could you use it to publish new content for your friends, and hook an Open Source content-sharing tool into the iLike interface to handle the transfer?  iLike publishes what’s new, Facebook publishes your content share points, and the Open Source tool handles the data.

Using Facebook to publish content discovery and sharing points opens the door to federated services, allowing you to get your network out of the hands of commercial parties.  If I could publish the address of a personal server on Facebook (a server that I own, running my own services), then I could start building networks and sharing with others outside of Facebook.  And I would once again have control of my identity within those networks – I wouldn’t have to rely on the Facebook privacy controls, or anything like that.

Just some ideas.

July 12, 2007

Possibly the best software I have ever used

Filed under: cool,ideas — vednis @ 9:07 pm

My wife has been wanting to relearn French for a while, and she wanted to try Rosetta Stone.  After seeing and playing around with the Flash demo, all I can say is: WOW.  This is probably the best-designed software I have ever used.

The interface is so simple, a four-year-old could easily use it.  The content used to teach the lessons is spectacular.  And, best of all, the software places no barriers in front of the user.  The lessons have a flow, but you are not required to follow it – you can move forward or back, change the difficulty, time yourself, score yourself, but only at your own request.

I can’t stress that last point enough: You are in complete control of the software.

This is the way learning should be.  Structured by the learner, for the learner.  The goals you set for yourself while using the software are personal, based on your comfort level and motivation.  The consequences are profound: you are motivated to set your goals higher, because there is no fear of failure, no penalty for a wrong answer.  Being motivated means you actually learn.  Teaching a language by rote memorization doesn’t even compare!

Ok, time for some wild speculation: the technique used to present the language is conversational, repetitive, immediately applicable, and predictable.  The language constructs follow each other logically, weaving in and out, building on what came before.

So the question is: could we apply the Rosetta techniques to learning computer languages? :)

Some of the bases are already covered: we learn “hello world”, and move on from there, picking up the patterns along the way: here’s a procedure, this is a data structure, this is binding and scope.  But this is picked up slowly, by reading about it, often a chapter (or an example) at a time.  Could we speed things up, improve retention, quickly getting a feel for the flavour of the language?

I have already come across some interesting techniques for learning a new language, example programs worth porting, and such.  But I still think we could do better.

I’ll post more on this later.  Stay tuned!

June 12, 2007

Resume Markup Language

Filed under: ideas — vednis @ 12:56 pm

I was in a meeting earlier today where the owner of a company spoke about the problems they had with hiring for a new position. One big problem was the variety of job applicant information coming in the door: cover letters, portfolios, and packaging all varied widely. Small companies see a big problem when dealing with 200 or more resumes like that.

The company’s solution, and, I might add, a standard one, was to offer a web form for applicants to fill out. But the solution takes more away from the job applicant than it gains for the employer. It changes the tone of the conversation.

The applicant is putting their best foot forward – their resume is their golden ticket, and they have invested everything into personalizing it, to impress. But the employer has a hundred tickets to deal with, each ticket’s value being reduced by the number of other tickets it competes against. So the employer changes the conversation; they limit the information exchange to what fits on a blank web form. An analogy would be a person buying a new mid-sized car, with a large number of makes and models to choose from, and many salesmen vying. But the first thing the car buyer asks the salesman is “please, only tell me the color, and the price.” Truncating the conversation not only demoralizes the seller, but also risks missing key aspects of the offer: deals, leasing, seller reputation, and so forth.

Another conversation would allow the seller to present all that they have to offer, in a common language. The buyer could filter the information to items of interest. In the case of the job hunter, you want to present your resume, portfolio, cover letter, everything. And the employer wants to see all 100 resumes through a common lens, with the option to see the original offers in their unfettered form.

A structured resume markup language would solve the problem nicely. The document would hold all aspects of the job application process under its domain. You, the job hunter, may still submit everything, but the employer can cherry-pick aspects of the offer as it suits them, filtering on keywords, and archiving information in a machine-searchable format alongside the original document.

One idea would be “subjective flags”; one might mean ‘I put effort into the visual design of this item, please look at it’. It’s an attempt to preserve the plurality of a resume: it speaks to your skills in writing, information design, and aesthetics. Your traditional, hand-crafted resume becomes a member of your portfolio, flagged subjectively.
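To illustrate, here is a toy of what such a markup, and an employer-side filter over it, might look like. The element names and the subjective attribute are invented for this sketch; a real vocabulary would need community agreement:

```python
# A toy resume document and a keyword filter over it. The vocabulary
# here is made up; it only demonstrates the cherry-picking idea.
import xml.etree.ElementTree as ET

SAMPLE = """
<resume>
  <contact><name>Pat Example</name><email>pat@example.org</email></contact>
  <skills>
    <skill>Python</skill>
    <skill>information design</skill>
  </skills>
  <portfolio>
    <item href="pat-resume.pdf" subjective="visual-design"/>
  </portfolio>
</resume>
"""

def skills_matching(xml_text, keywords):
    """Return the skills in the document that match any keyword."""
    root = ET.fromstring(xml_text)
    wanted = {k.lower() for k in keywords}
    return [s.text for s in root.iter("skill")
            if s.text and s.text.lower() in wanted]
```

The employer filters on keywords; the hand-crafted PDF rides along in the portfolio, flagged for a human to look at.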

Formats become less of an issue as well. Different formats are faces of the same information, some crafted, some plain. HTML resumes become easier to process. PDF resumes come along beside the markup form. Extraction from office suites is possible, as is format-independent document versioning – the structure is already there, but no one has called it out.

A quick Google search for this idea turned up a little thought along these lines from the HR-XML crowd, but that spec looks a little heavy for our practical needs (but the resume.dtd they offer may be useful). A community de-facto standard would gain traction much more quickly, enabling Open Source software to take root.

We are reducing the friction in conversations between employers and prospective employees; the magnitude of that reduction will indicate this idea’s value.

Interesting stuff.

June 8, 2007

Smart-tagging email

Filed under: ideas — vednis @ 12:48 pm

Here is a random idea: why can’t my email program, using Bayesian filtering and hints from me, make a reasonable guess as to an inbound email’s subject? For any topic in my inbox, not just Spam? Why can’t it pre-assign some tags for me based on its guesses?

The UI concept: the user is presented with a bar full of text tags along the top of the email message pane. Middle-clicking on a tag removes it, making it fast and easy to remove bad guesses. Right-clicking on a tag opens a menu containing sub-category tags: this makes it easy to refine the message category.

The user can also add tags by highlighting a word in the message, right-clicking on it, and navigating a hierarchy of tags based on our common “root” categories (similar to our top-level folders, or email labels). The user navigates the tag hierarchy to an appropriate sub-category, and adds the selected text as a new tag in said category. Call this ‘explicit tagging’.

Alternatively, right-clicking on a word, and navigating to an existing tag provides that word as a hint for the filtering system. Call this ‘implicit tagging’.

Every time a tag is added or removed the filtering software takes that as a hint for future mail processing, helping it to guess better next time.
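The guessing machinery could be as simple as a per-tag word-count model nudged by those hints. This is a naive Bayes toy with invented tags, not a production filter:

```python
# A bare-bones sketch of the learning loop: every explicit or implicit
# tag action feeds hint(), and guess_tags() scores new mail against
# what has been learned so far.
import math
from collections import defaultdict

counts = defaultdict(lambda: defaultdict(int))  # tag -> word -> count
totals = defaultdict(int)                       # tag -> total word weight

def hint(tag, text, weight=1):
    """User added (weight=+1) or removed (weight=-1) a tag on a message."""
    for word in text.lower().split():
        counts[tag][word] += weight
        totals[tag] += weight

def guess_tags(text, n=2):
    """Score each known tag against a message; return the n best guesses."""
    words = text.lower().split()
    scores = {}
    for tag in counts:
        if totals[tag] <= 0:
            continue
        score = 0.0
        for w in words:
            # add-one smoothing so unseen words don't zero out a tag
            seen = max(counts[tag][w], 0)
            score += math.log((seen + 1) / (totals[tag] + len(words)))
        scores[tag] = score
    return sorted(scores, key=scores.get, reverse=True)[:n]
```

Middle-clicking a bad tag becomes a negative hint; adding one becomes a positive hint, and the guesses improve with use.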

As far as I can tell this goes beyond the Filter/Label/Folder model of email used by 99% of the mail clients out there, and brings in a fast and easy way to add, remove, and change context-sensitive message meta-data. What’s more, the system learns as you go, taking grunt work out of your hands.

In a way it is a real shame that Gmail is becoming the personal nerve center for interacting with the web. This would be *extremely difficult* to build into Gmail, because, with Gmail, you don’t own your data! A Mozilla Thunderbird or Outlook plugin, on the other hand….

May 2, 2007

A New Way to Feed The World

Filed under: ideas — vednis @ 2:26 pm

Spring has come, and my wife and I have turned our attention to gardening. Explorations of companion gardening eventually led to John Jeavons’ work with biointensive farming, the dreams of a US Navy scientific genius, John Craven, and, just maybe, a new way to feed the world.

There is a nice interview with Jeavons where he puts the results of his research into perspective:

“It takes about 15,000 to 30,000 square feet of land to feed one person the average U.S. diet,” he says. “I’ve figured out how to get it down to 4,000 square feet. How? I focus on growing soil, not crops.”

The technique is interesting, with great potential. Jeavons has a book, now in its 7th edition, detailing his methods and discoveries. But something about Jeavons’ work stirred the dust in my memories: this isn’t the first ecologically sound intensive farming technique I have read about.

Ah yes, now I remember: an ex-US Navy scientific genius, John Craven, has been using cold ocean water to repeatedly shock the roots of plants, causing cycles of dormancy followed by frantic growth. This leads to greatly increased crop yields.

He talks about his technique in an interview with Wired:

“Craven’s system exploits the dramatic temperature difference between ocean water below 3,000 feet – perpetually just above freezing – and the much warmer water and air above it. … The cold water also creates a temperature difference between root and fruit that Craven believes speeds growth. And by turning the flow on and off, Craven has found he can further accelerate the plants’ growth cycle by forcing them in and out of dormancy – he can get three crops of grapes a year and pineapples in eight months instead of the usual 18.”

“We’ll make freshwater for nothing, 3,000 to 15,000 pounds of grapes per acre per year, three times what the best vineyard in California can do.”

Fascinating things could happen if you combine Craven’s techniques with Jeavons’. Could you double the output per hectare of biointensive methods? That would be a fifteen-to-one land improvement over traditional farming, with the added bonus of being completely organic and sustainable.
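The arithmetic behind that fifteen-to-one figure, for the record (taking the upper end of Jeavons’ quoted range):

```python
# Quick arithmetic behind the fifteen-to-one claim above.
conventional = 30000  # sq ft to feed one person, average U.S. diet (upper end)
biointensive = 4000   # sq ft with Jeavons' methods
doubled = biointensive / 2  # if Craven-style root chilling doubled output

improvement = conventional / doubled
print(improvement)  # 15.0
```

At the lower end of the conventional range (15,000 sq ft), the improvement would be seven-and-a-half-to-one; still striking.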

Ah, not so fast. Craven points to one problem:

“What the world doesn’t understand,” says Craven, … “is that what we don’t have enough of is cold, not heat.”

This is true; Craven’s techniques rely on readily available cold, to bring down costs, and to keep the energy clean. The root-chilling technique would require the creation of quite a bit of cold, especially if it were to scale in hot climates.

But what if we lived in Northern latitudes, where the temperature is below freezing for a good part of the year? We could harness the cold climate, storing the cold as ice. After all, the Persians were storing ice in 400 BC! We just need to store enough ice to run our root-chilling system through the short Northern growing season.

So, could it work? Yes, I believe it could. It took Jeavons thirty years to establish his techniques, and it took Craven seven. Perhaps, with a little scientific rigour, and some social networking, we could do it in less.

March 30, 2007

Some thoughts on the web as a filesystem

Filed under: ideas — vednis @ 12:03 pm

This is a draft idea I have been working on. I am posting it to provide context for some other ideas I had recently.

Update: it looks like Jon Udell and the guys at Freebase are already dreaming along the same lines.

Some interesting ideas result from REST placing restrictions on you.

These constraints make web objects similar to file-like objects. In fact, the entire URL space is a little file-like. Things rest in only one place (canonical). Many names can point to the same thing (hard links). The same object can be copied in multiple spaces.

Maybe that is why you can re-invent old Unix tools as web services?

Now, what happens if we try to merge web-based tools with an extremely file-oriented operating system?

You might end up with commands like this:

$ mount http://amazon.com
$ grep 'agile & software' /mnt/w/amazon/books > agile-books
$ cat agile-books | cut -f3 | uniq -c
# Result: the number of books about Agile software
# development, counted by publication date.

Or this:

$ mount http://google.com/gmail
$ mount https://acanvas.wordpress.com/
$ ls /mnt/w/gmail/inbox
$ cp foo.eml $home/drafts/foo-reply.txt
$ edit $home/drafts/foo-reply.txt
$ cat $home/drafts/foo-reply.txt > /mnt/w/acanvas/posts/new

Or my favorite, this:

$ mount https://acanvas.wordpress.com/
$ cat $home/drafts/post.txt | wiki2html > /mnt/w/acanvas/posts/new

I know, I’m not the first person to think of this, and there are problems like “What does it mean to run ‘cp’ or ‘mv’ on a URI, or its contents?”. But it is a problem worth solving – one could ‘program the web’, in the most literal sense.
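To see the correspondence concretely, here is a thin sketch mapping file-style verbs onto HTTP ones. A real version would be a FUSE filesystem; the transport object here is a stand-in so that nothing touches the network, and all names are invented:

```python
# A thin sketch of the mount idea: file-style verbs mapped onto HTTP.

class WebFile:
    """Treat one URI as a file-like object (reads = GET, writes = PUT)."""

    def __init__(self, uri, transport):
        # `transport` is anything with .get(uri) and .put(uri, body),
        # so the sketch stays testable without network access.
        self.uri = uri
        self.transport = transport

    def read(self):
        return self.transport.get(self.uri)   # what 'cat' would do

    def write(self, body):
        self.transport.put(self.uri, body)    # what '>' would do

def cp(src, dst):
    """'cp' between two web files: GET the source, PUT the destination."""
    dst.write(src.read())
```

Under this mapping, `cp` is a GET followed by a PUT; `mv` would add a DELETE of the source, which is exactly the kind of semantic question the paragraph above raises.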

March 13, 2007

Searching past distractions

Filed under: ideas — vednis @ 2:13 pm

For a while now I have wanted a “visual search engine”. A search engine that will answer when I ask ‘What was that site? The one with the gross burgundy bar down the right-hand side?’. And yesterday I came upon a workable answer to this problem!

It’s a bit of a whirlwind, so hold on:
It starts with an attention recorder, a piece of software that keeps track of the web sites I view. Normally, the recorder would store the site addresses in an attention vault of some sort. But what if I stored images from snap.com along with the addresses? You know, site preview images, those little thought bubbles that pop up when you hold the mouse over a link (this weblog uses them).

That small site preview image from Snap.com is just large enough to recognize large blocks of colour and contrast. We can use simple techniques from desktop optical character recognition software to look for patterns in the Snap images. The user would draw a simple picture using a trivial drawing program (simpler than Microsoft Paint); a coloured box, maybe a line. The drawing is broken into patterns, and the search software looks for those patterns in the attention vault images. The user gets back a list of sites they have visited that look similar to the picture they drew.
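The matching step itself could start as something this small: thumbnails reduced to tiny grids of colour cells, compared cell-by-cell against the user’s drawing. The 2x2 grids below are stand-ins for real downscaled preview images:

```python
# A toy of the matching step: each visited site's preview is reduced to
# a small grid of RGB cells, and the user's drawing is compared against
# every grid in the attention vault.

def distance(grid_a, grid_b):
    """Sum of per-cell colour distances between two same-sized grids."""
    total = 0
    for row_a, row_b in zip(grid_a, grid_b):
        for (r1, g1, b1), (r2, g2, b2) in zip(row_a, row_b):
            total += abs(r1 - r2) + abs(g1 - g2) + abs(b1 - b2)
    return total

def visual_search(drawing, vault):
    """Rank visited sites by how closely they resemble the drawing."""
    return sorted(vault, key=lambda url: distance(drawing, vault[url]))
```

Draw a white page with a burgundy bar on the right, and the site whose preview has a burgundy right-hand column comes back first.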

Pretty cool. This technique lets the user perform vague searches like “the page was green down left”, or “it had a circle in the middle”, or “there was a distinct blue and orange theme”. Mix in some of the artistic and design skill necessary for good data representation, and you have a real application.

So there you go. Snap.com, my attention vault, and OCR technology shine a new light on my past distractions.
