Hello Everyone!


Technical Milestones

Over the past several months, we have focused on scaling the network while maintaining node stability, ensuring long-term scalability and a seamless onboarding of our 100 nodes/node operator groups at the end of September.



  • Cluster configuration: One of the most vital elements of a strong mainnet launch. We are currently testing and running cold-storage backups to streamline our node operator onboarding process.
  • Explicit executor/thread pool management: We have explicitly separated our executors to handle different types of tasks, such as I/O, bounded computation, and unbounded computation. This ensures that potentially blocking processes are assigned to their own executors and thread pools, preventing stalled processes and CPU spikes.
  • Inside scoop! See our network and performance in action. Alongside this update, here is a short walkthrough of our stability testing across configurations, demonstrating success criteria using our metrics dashboards: https://youtu.be/R-3udCBVksU
  • Implementation of our reputation model: Our model, MEME, is an ensemble of EigenTrust, node influence calculation via a self-avoiding walk, and GURU, an improvement on Byzantine fault tolerance through p2p reputation-based conflict resolution. We improve on traditional cryptographic security by calculating node influence with a space-finding algorithm from fractal geometry and incorporating topological information to strengthen network security and prevent double-spending. See more below on our core pillars.
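The executor separation described above can be illustrated with a minimal Python sketch (our actual implementation runs on the JVM; the pool sizes, names, and the `fetch_peer_state`/`validate_block` tasks below are purely illustrative assumptions):

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch: one dedicated executor per task class, so blocking
# I/O can never starve bounded or unbounded computation.
io_pool = ThreadPoolExecutor(max_workers=32, thread_name_prefix="io")
bounded_pool = ThreadPoolExecutor(max_workers=4, thread_name_prefix="bounded")
unbounded_pool = ThreadPoolExecutor(max_workers=2, thread_name_prefix="unbounded")

def fetch_peer_state(peer: str) -> str:
    # Potentially blocking network call; submitted only to the I/O pool.
    return f"state-of-{peer}"

def validate_block(block: bytes) -> bool:
    # CPU-bound, bounded-time work; isolated on its own pool.
    return len(block) > 0

io_future = io_pool.submit(fetch_peer_state, "node-1")
cpu_future = bounded_pool.submit(validate_block, b"\x00" * 64)
print(io_future.result(), cpu_future.result())  # → state-of-node-1 True
```

The point of the split is that a stalled I/O call only ties up threads in `io_pool`, leaving the compute pools free to make progress.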


Technical Communication

Over the next several months, we are going to emphasize the three core technical pillars of Constellation, which articulate the advances our technology has made in allowing data streams to be processed by our DAG. Together, the three pillars enable Constellation’s elastic infrastructure. Elastic infrastructure ensures resilience for large batch processes and streams, so that individual node failures don’t crash an entire state channel cluster. Interoperability, the ability to compose and join different data sets, is then defined in terms of an API algebra and coalgebra (this stems from our cohomology theory for blockchain/DAG protocols). This ultimately paves the way for a queryable, interoperable distributed database.


At Constellation, we created a new blockchain that performs consensus in parallel, is horizontally scalable, and thus processes data concurrently. Elasticity in distributed systems means that node clusters are resilient and can grow or shrink, reorganizing when failures occur. The only way for real-world big data to integrate with blockchain technology is to build blockchains with similar architectural features: concurrent consensus and networks that reorganize to optimize themselves. Current P2P networks lack elastic frameworks, and existing sharding mechanisms marginally improve scalability yet don’t protect against uptime-affecting outages. Constellation’s elastic framework works by indexing data while partitioning it around the network to optimize throughput.
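One common way to realize this kind of rebalancing partition scheme is consistent hashing, where keys move minimally as nodes join or leave. The toy ring below is an illustrative sketch of the idea, not Constellation’s actual partitioning algorithm:

```python
import bisect
import hashlib

def _h(key: str) -> int:
    # Stable hash onto a large integer ring.
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Toy consistent-hash ring: only a node's own keys reassign on failure."""

    def __init__(self, nodes, vnodes=64):
        self._ring = []  # sorted list of (hash, node) virtual-node entries
        for n in nodes:
            self.add(n, vnodes)

    def add(self, node, vnodes=64):
        for i in range(vnodes):
            bisect.insort(self._ring, (_h(f"{node}#{i}"), node))

    def remove(self, node):
        # Filtering preserves sort order; remaining keys stay in place.
        self._ring = [(h, n) for h, n in self._ring if n != node]

    def owner(self, key):
        # First virtual node clockwise of the key's hash owns the key.
        h = _h(key)
        idx = bisect.bisect(self._ring, (h, "")) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
before = ring.owner("tx-123")
ring.remove("node-b")        # cluster shrinks; only node-b's keys reassign
after = ring.owner("tx-123")
```

Adding a node back is symmetric: `ring.add("node-b")` reclaims roughly its old share of keys without disturbing the rest.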


Pillar 1: Cohomology Theory: enabling concurrent, horizontally scalable blockchains/DAGs

  • Existing homology theory revolves around verifying blockchains. Wyatt’s Cohomology Theory (Constellation’s published theory) reveals a way to merge data according to developer-defined homology groups.
  • These groups give the rules for how certain data sets get mixed: the algebra defines your data, while the coalgebra defines how you license and encrypt it.
  • Example: a SQL query across two streams of data, such as Twitter data and financial data.
  • This is how we create a queryable network.
  • Developers will be able to hit an API and consume the data feed of existing state channels, with no need to redeploy them.
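To make the stream-join example above concrete, here is an illustrative sketch of joining a (hypothetical) Twitter sentiment stream with a financial price stream on a shared ticker key, the way a SQL join across two state channels would behave. The data and the `join_streams` helper are made up for illustration, not an actual Constellation API:

```python
# Hypothetical batches from two independent data streams.
twitter_stream = [
    {"ticker": "ACME", "sentiment": 0.8},
    {"ticker": "GLOBEX", "sentiment": -0.2},
]
price_stream = [
    {"ticker": "ACME", "price": 101.5},
    {"ticker": "GLOBEX", "price": 42.0},
]

def join_streams(left, right, key):
    """Inner join of two record streams on a shared key field."""
    index = {row[key]: row for row in right}
    return [{**l, **index[l[key]]} for l in left if l[key] in index]

joined = join_streams(twitter_stream, price_stream, "ticker")
# e.g. [{'ticker': 'ACME', 'sentiment': 0.8, 'price': 101.5}, ...]
```

In SQL terms this is `SELECT * FROM twitter JOIN prices USING (ticker)`, applied across two otherwise independent feeds.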

Pillar 2: Economic Model: Creating incentives for the network while creating units of value across state channels

  • Constellation creates a unit of value, $DAG, through transactions per second, i.e. the rate of positive contributions, measured via (Shannon) entropy per second (bits/sec). Ironically, this is a tokenized utility measured in terms of bits (literally a bit-coin).
  • Other units of value are created on other state channels. This sets the foundation for a digital commodities exchange as we introduce quantifiable ways to measure various units of data. 
  • Incentives are distributed to validators to ensure optimal node performance. This provides quantifiable, monetizable performance and ensures that validation criteria for state channels are upheld.
  • In relation to elasticity, Constellation quantifies throughput and node/participant contribution.
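A minimal sketch of what “contributions measured in bits” means in practice, using the standard Shannon entropy formula (the `shannon_entropy_bits` helper is illustrative; how the protocol actually weights observations is defined by our economic model):

```python
import math
from collections import Counter

def shannon_entropy_bits(events):
    """Shannon entropy, in bits, of an observed event distribution."""
    counts = Counter(events)
    total = len(events)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A uniform stream over 4 distinct outcomes carries 2 bits per event,
# while a constant stream carries 0 bits: only novel information counts.
print(shannon_entropy_bits(["a", "b", "c", "d"]))  # → 2.0
```

Dividing such a measure by elapsed time yields the bits-per-second rate referenced above.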

Pillar 3: MEME: Partitioning schemes that guarantee elasticity and uptime

  • MEME is an online machine learning model used to ensure optimal network orchestration, allowing for an elastic p2p network. The goal of any blockchain is to ensure stateful consistency using economic incentives, with probabilistic guarantees of accuracy enforced by cryptographic security. Constellation makes sure data stays consistent (if a node validates the wrong data, it is rejected) and that the network can rebalance in real time while its data remains accessible.
  • Gossip: We use gossip protocols to orchestrate our partitions and sub-topology.
  • EigenTrust: Globally converged observations ensure consistent validator rewards.
  • GURU: A scoring function for conflict handling in a decentralized network; an improvement on Byzantine fault tolerance.
  • Node centrality metrics: Calculating node influence via a technique called a self-avoiding walk, the result of which is a fractal generated by the transitive trust each node has in other nodes. This can be seen as a ‘useful proof of work’: instead of acting as a rate-limiting mechanism (like IOTA’s PoW on transactions), the sub-process calculates a fractal representation of every node’s transitive trust.
  • Proof of Reputable Observations (PRO): Our twist on MEME, with configurations determined by our governance model.
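The EigenTrust component of MEME can be sketched as a simple power iteration: each node’s global trust is the converged weighted sum of the trust placed in it by its peers. The sketch below is a simplified version (it assumes no pre-trusted peers, and the local trust matrix is made up for illustration):

```python
def eigentrust(local_trust, iters=50):
    """Simplified EigenTrust: iterate t <- C^T t on the row-normalized
    local trust matrix C until global trust scores converge."""
    n = len(local_trust)
    # Row-normalize local trust into a stochastic matrix C.
    C = []
    for row in local_trust:
        s = sum(row)
        C.append([v / s for v in row] if s else [1.0 / n] * n)
    t = [1.0 / n] * n  # start from uniform trust
    for _ in range(iters):
        t = [sum(C[j][i] * t[j] for j in range(n)) for i in range(n)]
    return t

# Node 2 receives the most local trust from its peers,
# so it converges to the highest global score.
scores = eigentrust([
    [0, 1, 3],
    [1, 0, 2],
    [2, 1, 0],
])
print(max(range(3), key=lambda i: scores[i]))  # → 2
```

Because each update only redistributes the existing trust mass, the scores always sum to 1, which is what makes globally converged observations usable for consistent validator rewards.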



  • Cohomology Paper – This paper has been accepted for formal publication by The Modeling and Analysis of Complex Systems and Processes conference hosted in March 2019. As we mentioned above, cohomology is one of the core pillars of Constellation’s protocol and our network. The paper will be released any day now.
  • A new white paper on our reputation model is underway and will be released in tandem with our upcoming talk at the Hyperledger LA/USC Viterbi meetup on September 9th.
  • We are also in talks with a major university to engage in a formal verification study of our protocol and approach to reputation consensus.


Technical Meetups

  • Our Hyperledger meetup in LA at USC is scheduled for September 9th. We will be presenting on MEME, our reputation model, and elastic infrastructure for P2P networks. If you can’t make it, we will try to capture video content.

As always, our project management is public and hosted along with our codebase on GitHub, so please feel free to get in touch and contribute.