How Should Group Brainstorming Work on the Web?
The web is full of collaborative tools and services, but a popular group brainstorming platform has yet to materialize. The question is “Why?”
Admittedly, the web in general has replicated many of the features of classic real-world group brainstorming. While none of the popular social media platforms were designed to conduct formal brainstorming sessions, their ease of access and familiarity have made them a popular choice for ad hoc collaboration. Comment fields and voting buttons (points, “Likes”, “Hearts”, up-and-down arrows, etc.) perform many of the functions attributed to brainstorming, if only on a rudimentary level. But for anyone who has experienced a formal group brainstorming session in real life, this makeshift approach leaves much to be desired.
Lesser known are the group brainstorming applications currently available for private use. None are open online for the general public; access is based on a contracted legal agreement with per-user license fees. The target audience for these offerings is large organizations with complex internal decision-making needs. In cases where the application is hosted as part of a consulting engagement, guidance by a trained facilitator is a central requirement of the service. A compelling argument for why these brainstorming solutions haven’t gained wide acceptance or use is that the licensing and usage model severely limits the potential for ubiquitous ad hoc collaboration.
The answer, then, is that no one has yet taken the next step of reimagining group brainstorming to take full advantage of the latest features and functions of the web. Here are 7 game-changing attributes that a group brainstorming platform on the web should possess…
1. Eliminate the “Tragedy of the Comments”.
The standard methodology for group collaboration today is the comment string. As useful as this is for general interaction on the web, it is a terrible method for facilitating large-scale collaborative dialogue. In the early days of the Internet, the comment string served as the online representation of one-on-one conversation. Today, comment strings are used even when the number of participants makes coherent dialogue impossible. A better solution would treat the natural tendency of group dialogue to fragment as a core element of collaboration. Instead of producing one monolithic string of comments, group brainstorming should employ compulsory tangential discussions during a dialogue to maximize coverage of the conceptual terrain for the topic at hand, regardless of the subject matter or number of users.
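One way to picture this, purely as a sketch: model the dialogue as a tree of tangents rather than a single flat comment string. The structure and names below are illustrative assumptions, not a specification of any existing platform.

```typescript
// Minimal sketch: a discussion node that can spawn tangents instead of
// growing one flat comment string. Names are illustrative, not a spec.
interface DiscussionNode {
  id: string;
  topic: string;
  comments: string[];          // comments scoped to this tangent only
  tangents: DiscussionNode[];  // sub-discussions branched off this node
}

// Branch a new tangent off an existing node, e.g. once its comment
// string grows past a chosen fragmentation threshold.
function branchTangent(parent: DiscussionNode, topic: string): DiscussionNode {
  const tangent: DiscussionNode = {
    id: `${parent.id}.${parent.tangents.length + 1}`,
    topic,
    comments: [],
    tangents: [],
  };
  parent.tangents.push(tangent);
  return tangent;
}

// Usage: the root topic splits into focused sub-dialogues as it grows.
const root: DiscussionNode = {
  id: "1",
  topic: "Reduce onboarding friction",
  comments: [],
  tangents: [],
};
const pricing = branchTangent(root, "Pricing objections");
pricing.comments.push("Trial length may matter more than price.");
```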
2. Eliminate bias from a user’s prior reputation in the Evaluation Process.
Interaction among participants during a brainstorming session should be anonymous, because biases such as a user’s reputation or the size of a participant’s social network can easily skew the evaluation process. In most other collaborative systems, the value attributed to submitted content relies heavily on social networking. The problem with that reliance is what social theorists call reputation or information “cascades,” more commonly referred to as groupthink.
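As an illustration only (not a prescription for any particular identity system), anonymity within a session can be as simple as mapping real accounts to throwaway per-session pseudonyms. The class and field names below are hypothetical.

```typescript
// Minimal sketch: map real user identities to per-session pseudonyms so
// evaluators see "Participant 7", never a reputation-bearing profile.
class AnonymousSession {
  private aliases = new Map<string, string>();
  private counter = 0;

  // Returns a stable pseudonym for this user within this session only.
  aliasFor(userId: string): string {
    if (!this.aliases.has(userId)) {
      this.counter += 1;
      this.aliases.set(userId, `Participant ${this.counter}`);
    }
    return this.aliases.get(userId)!;
  }
}

// Usage: the same user keeps one alias per session, but aliases never
// carry over between sessions, so no reputation can accumulate.
const session = new AnonymousSession();
console.log(session.aliasFor("alice@example.com")); // "Participant 1"
console.log(session.aliasFor("alice@example.com")); // "Participant 1"
console.log(session.aliasFor("bob@example.com"));   // "Participant 2"
```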
3. Sampled Comparative Evaluation: No centralized evaluators, no one has to read every comment, and all comments are reviewed.
Allowing users to evaluate any piece of submitted content they want, with no requirement to compare it against other submissions, is the standard mode of most systems today. But this creates an inherent flaw: since users are not formally comparing one piece of content against another, what do their scores really mean?
If the collectively perceived value of submitted content is purely relative to the value of other pieces of content, then the real value of content that isn’t based on comparative judgement is unclear at best. At worst, it doesn’t mean anything.
Should a user be able to review only one piece of submitted content (e.g. blindly giving a score at the request of an acquaintance), or should they be required to compare it to at least one other piece of content before making an evaluation? On systems like these, the highest-valued content could very well be nothing more than accidental (or manipulated) groupthink. Like a feedback loop between a microphone and a nearby speaker, any content could become “the most valued” for no other reason than the momentum of its popularity at the beginning of a collaboration.
The best current example of sampled comparative voting (made famous by the notorious web site “Hot or Not”) is the Pairwise comparison methodology. Here’s how it works: users are presented with two pieces of content (typically a line of text or a picture) and asked to decide which one they like better. The method has an impressive amount of analytical grounding. Yet of the known Pairwise-based systems out there, none have a facility for submitting commentary for consideration by content originators and/or subsequent evaluators. With almost no feedback loop, they are prone to the “rank reversal” problem.
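For the mechanics of the sampling and ranking step, here is a minimal sketch: each evaluator sees two randomly drawn submissions and picks one, and an Elo-style update (one of several rating schemes that could be used here; the constants are assumptions, not anything specified in this article) turns those head-to-head choices into a relative ranking.

```typescript
// Minimal sketch of sampled pairwise evaluation with an Elo-style rating.
interface Submission {
  id: string;
  text: string;
  rating: number; // higher = collectively preferred
}

const K = 32; // update step size (assumed value)

// Draw two distinct submissions at random for one evaluator.
function samplePair(pool: Submission[]): [Submission, Submission] {
  const i = Math.floor(Math.random() * pool.length);
  let j = Math.floor(Math.random() * (pool.length - 1));
  if (j >= i) j += 1;
  return [pool[i], pool[j]];
}

// Record that the evaluator preferred `winner` over `loser`.
function recordChoice(winner: Submission, loser: Submission): void {
  const expected = 1 / (1 + 10 ** ((loser.rating - winner.rating) / 400));
  winner.rating += K * (1 - expected);
  loser.rating -= K * (1 - expected);
}

// Usage: after enough sampled comparisons, sorting by rating yields the
// group ranking without any single person reading every submission.
const pool: Submission[] = [
  { id: "a", text: "Offer a free tier", rating: 1000 },
  { id: "b", text: "Bundle with onboarding", rating: 1000 },
  { id: "c", text: "Cut the signup form to one field", rating: 1000 },
];
const [left, right] = samplePair(pool);
recordChoice(left, right);
pool.sort((x, y) => y.rating - x.rating);
```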
4. Merging incentives for competition and cooperation to form “Coopetition” among the participants who also serve as evaluators.
Game theory can and should be employed to loop the motivations of cooperation and competition into each other. For example, giving a glowing review to the proposal that ultimately receives the highest score raises the final ranking of the evaluating participant in a project. Even the decision to punish or purge abusive users is made by the participants who stand to benefit most from passing sound judgment.
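As a hedged illustration of just the first rule above (the bonus formula and data shapes are assumptions, not part of the article), evaluators who rated the eventual winner favorably could earn a boost to their own standing:

```typescript
// Minimal sketch of one "coopetition" rule: evaluators who rated the
// eventually top-ranked proposal highly earn a bonus to their standing.
interface Evaluation {
  evaluatorId: string;
  proposalId: string;
  score: number; // e.g. 1-10
}

// Award each evaluator a bonus proportional to how favorably they rated
// the proposal that won the session.
function coopetitionBonus(
  evaluations: Evaluation[],
  winningProposalId: string
): Map<string, number> {
  const bonus = new Map<string, number>();
  for (const e of evaluations) {
    if (e.proposalId === winningProposalId) {
      bonus.set(e.evaluatorId, (bonus.get(e.evaluatorId) ?? 0) + e.score);
    }
  }
  return bonus;
}

// Usage: cooperating (reviewing others' work generously and accurately)
// feeds directly back into each participant's competitive ranking.
const bonuses = coopetitionBonus(
  [
    { evaluatorId: "p1", proposalId: "idea-42", score: 9 },
    { evaluatorId: "p2", proposalId: "idea-42", score: 4 },
    { evaluatorId: "p3", proposalId: "idea-7", score: 8 },
  ],
  "idea-42"
);
// bonuses => Map { "p1" => 9, "p2" => 4 }
```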
5. Users should edit their own ideas throughout brainstorming sessions instead of adding yet another pile of unread entries.
When upgrading ideas is a core feature, user content never goes stale. For historical reference, versioning can be employed to maintain logical continuity for future reviewers.
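A minimal sketch of what that could look like, assuming a simple append-only version history (the class and field names are illustrative):

```typescript
// Minimal sketch: an idea keeps a single "current" text that its author
// keeps upgrading, while prior versions remain for historical reference.
interface IdeaVersion {
  version: number;
  text: string;
  editedAt: Date;
}

class Idea {
  private history: IdeaVersion[] = [];

  constructor(public readonly authorAlias: string, initialText: string) {
    this.history.push({ version: 1, text: initialText, editedAt: new Date() });
  }

  // Replace the current text in place instead of posting a new entry.
  revise(text: string): void {
    this.history.push({
      version: this.history.length + 1,
      text,
      editedAt: new Date(),
    });
  }

  get current(): IdeaVersion {
    return this.history[this.history.length - 1];
  }

  get versions(): readonly IdeaVersion[] {
    return this.history;
  }
}

// Usage: reviewers always evaluate the latest revision, but can trace
// how the idea evolved.
const idea = new Idea("Participant 3", "Offer a free tier");
idea.revise("Offer a 30-day free tier with no credit card required");
console.log(idea.current.version); // 2
```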
And here’s where the latest capabilities of payment rails stretch the possibilities even further…
6. Equate scores with asset distribution (money, credit, access, crypto assets, etc.).
Imagine ad hoc brainstorming for money: any subject matter, any fee amount, and an unlimited number of participants. The money or value committed to a session is held in escrow and distributed as rewards.
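To make the escrow step concrete (a plain arithmetic sketch, not an integration with any real payment rail; the pro rata split is an assumed policy), the pooled funds could be divided in proportion to final scores:

```typescript
// Minimal sketch: the session's pooled funds are held back until scoring
// ends, then distributed in proportion to final scores.
interface FinalScore {
  participantId: string;
  score: number; // final session score, assumed non-negative
}

// Split the escrowed amount pro rata by score.
function distributeEscrow(
  escrowTotal: number,
  scores: FinalScore[]
): Map<string, number> {
  const totalScore = scores.reduce((sum, s) => sum + s.score, 0);
  const payouts = new Map<string, number>();
  for (const s of scores) {
    const share = totalScore > 0 ? (s.score / totalScore) * escrowTotal : 0;
    payouts.set(s.participantId, Math.round(share * 100) / 100);
  }
  return payouts;
}

// Usage: a $500 pool split across three scored participants.
const payouts = distributeEscrow(500, [
  { participantId: "p1", score: 120 },
  { participantId: "p2", score: 60 },
  { participantId: "p3", score: 20 },
]);
// payouts => Map { "p1" => 300, "p2" => 150, "p3" => 50 }
```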
7. Make group brainstorming a community-managed social utility, not a standard privatized commercial business entity.
Ideally, an online group brainstorming system should be built as a co-op that equitably generates value for all participants and stakeholders. Entrepreneurial users would then be free of the risk of building their ventures within the confines of someone else’s business model.