What is MoK

Molecules of Knowledge (MoK for short) is a model for knowledge self-organisation, conceived to pursue two main goals:

  • autonomously aggregate data to build more "complex" heaps of information — possibly conveying novel knowledge, previously unknown or hidden
  • autonomously spread such information toward potentially interested knowledge prosumers — rather than wait to be searched for

Thus, MoK promotes the idea that data is alive, that information is a living thing continuously and spontaneously interacting with other information as well as with its prosumers, evolving itself accordingly.

How MoK works

In order to do so, MoK is designed around three main sources of inspiration:

  • biochemistry, providing metaphors for its basic abstractions
  • biochemical coordination, as its computational and coordination model
  • behavioral implicit communication, driving knowledge evolution

As far as the basic MoK abstractions are concerned, in MoK knowledge atoms are generated by knowledge sources in shared spaces – compartments – where they self-aggregate into knowledge molecules and autonomously move toward knowledge prosumers – catalysts – whose actions (whether epistemic or not) are reified as enzymes.
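The abstractions above can be sketched as plain data structures. This is a minimal illustration, not the model's actual API: all class and field names (e.g. `concentration`, `species`) are assumptions chosen to mirror the biochemical metaphor.

```python
from dataclasses import dataclass

# Illustrative sketch of the MoK abstractions; names are assumptions,
# not the model's actual API.

@dataclass(frozen=True)
class Atom:
    """Atomic piece of information emitted by a knowledge source."""
    source: str
    content: str

@dataclass
class Molecule:
    """Aggregate of atoms reifying a knowledge-related pattern."""
    atoms: tuple
    concentration: int = 1  # biochemical metaphor: relevance as concentration

@dataclass
class Enzyme:
    """Reification of a catalyst's (user's) action on a molecule."""
    catalyst: str
    target: Molecule

class Compartment:
    """Shared space hosting the atoms, molecules, and enzymes."""
    def __init__(self, name):
        self.name = name
        self.species = []  # the atoms/molecules/enzymes currently in the space
```

Under this reading, a catalyst never manipulates molecules directly: its actions only enter the compartment as enzymes, which the coordination laws then interpret.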

As far as the MoK computational and coordination model is concerned, MoK features biochemical tuple-space-like repositories – compartments – for the creation, aggregation, diffusion and consumption of knowledge atoms and molecules. As such, compartments are in charge of (i) locally evolving knowledge and (ii) distributing knowledge across networked compartments, according to the (biochemically inspired) coordination laws installed—reactions.
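In biochemically-inspired coordination, a compartment typically fires its installed reactions stochastically, with a likelihood proportional to the "concentration" of each reaction's reactants, in the spirit of Gillespie-style simulation. The sketch below shows only this selection step; the reaction encoding and rate law are illustrative assumptions, not MoK's actual coordination laws.

```python
import random

# Hedged sketch: pick which reaction a compartment fires next, with
# probability proportional to rate * reactant concentration (a simple
# mass-action-style propensity). Encoding is an assumption for illustration.

def select_reaction(reactions, concentrations, rng=random.random):
    """Return one reaction chosen proportionally to its propensity,
    or None if no reaction can currently fire."""
    propensities = [r["rate"] * concentrations.get(r["reactant"], 0)
                    for r in reactions]
    total = sum(propensities)
    if total == 0:
        return None  # nothing to react upon
    threshold = rng() * total
    acc = 0.0
    for reaction, p in zip(reactions, propensities):
        acc += p
        if acc >= threshold:
            return reaction
    return reactions[-1]  # guard against floating-point rounding
```

Because propensities track concentrations, knowledge that is more "present" in a compartment is evolved (aggregated, diffused, decayed) more often, which is exactly what makes the evolution spontaneous rather than query-driven.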

As far as the user interaction model is concerned, MoK borrows concepts from the cognitive theory of behavioral implicit communication to enable anticipatory coordination driven by users' epistemic actions. Briefly, any action undertaken by users is interpreted by MoK so as to mind-read users' intentions and react accordingly.


Summing up, a MoK system should be seen as a network of shared information repositories, in which some source entities continuously and spontaneously put data chunks.

Such data may then (i) aggregate so as to reify some (potentially) relevant "knowledge-related patterns" – e.g. linking two news stories talking about the same person, written by the same author, read by the same prosumer, or both related to a third news story – and (ii) diffuse among these networked shared spaces toward the (potentially) interested users—e.g. papers about MAS should strive to reach MAS researchers' repositories.

Users can interact with the system through epistemic actions – e.g. reading a post, contributing to a wiki, highlighting words in an article – which are tracked and exploited by the MoK system to influence knowledge evolution transparently to the user—e.g., a user highlighting a given word may imply that the user is highly interested in the related topic, so MoK can react by, e.g., raising the rank of related topics in search results.
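The highlighting example can be made concrete with a toy feedback loop: an epistemic action boosts the "concentration" of the molecules it touches, and ranking simply orders molecules by concentration. The boost amount and the substring-based matching rule are assumptions for illustration only, not how MoK actually interprets actions.

```python
# Toy sketch of the feedback loop: an epistemic action (highlighting a
# word) reinforces related molecules, which then rank higher. Matching
# rule and boost value are illustrative assumptions.

def apply_epistemic_action(molecules, highlighted_word, boost=2):
    """molecules: dict mapping molecule text -> concentration (relevance)."""
    for text in molecules:
        if highlighted_word.lower() in text.lower():
            molecules[text] += boost  # reinforce related knowledge

def rank(molecules):
    """Return molecule texts ordered by decreasing concentration."""
    return sorted(molecules, key=molecules.get, reverse=True)
```

Note that the user never issues a query here: the ranking shifts as a side effect of an action performed for the user's own purposes, which is the essence of behavioral implicit communication.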

Motivation & Context

Knowledge-intensive environments and socio-technical systems combine business processes, technologies, and people's skills to store, handle, and make accessible – in one word, manage – very large repositories of information—e.g. wiki portals, online press, enterprise social networks.

They pose peculiar challenges from the infrastructural standpoint:

  • data size—from GBs to TBs
  • scale—from organization-wide to world-wide
  • dynamism—new information produced/consumed at a fast pace—e.g. tweets
  • diversity—both in information representation and usage destination
  • openness—new users can enter/leave the system at any time
  • unpredictability—since they involve humans, whose behaviour is rarely fully predictable

These challenges are usually faced with brute-force approaches relying on (i) ever-increasing (hopefully, endless) computational power and (ii) storage—"big data" techniques, non-relational large-scale DBs, the "data-in-the-cloud" paradigm, and other buzzwords.

This won't scale forever—e.g. what about the end of Moore's law?

One possible research line departs from the following question: why do we stick to viewing data as passive, "dead" things to run algorithms upon, in the traditional I/O paradigm?

This is where Molecules of Knowledge comes in =)

