Momentum Connect

An enterprise-scale platform to ingest data from a wide variety of sources and automate data engineering.

Momentum Connect Architecture

In any data-driven project, engineers and data scientists spend as much as 80% of their time on data wrangling. Momentum Connect automates this process to improve the productivity of all stakeholders. You can speed up data wrangling by ingesting, cleaning, blending, and transforming a wide variety of data formats from external systems at high speed and scale.

Momentum Connect consists of the following four components:

Ingester

Ingester provides a set of connectors to pull data from a large number of systems. For example, you can connect and ingest data at a large scale from:

  • RDBMS: Oracle, MySQL, Postgres, MS SQL, and more
  • NoSQL: MongoDB, MarkLogic, Cassandra, and CouchDB
  • Cloud Storage: Dropbox, Google Cloud Storage, and S3
  • Streaming and IoT devices
  • Web URLs and RESTful APIs
  • Medical imaging systems

Ingester can ingest a wide variety of data formats:

  • Unstructured text files
  • Delimited files, such as CSV and TSV files
  • Structured files, such as XML and JSON
  • Images and videos
  • Sensor and satellite data
  • Binary files such as Word, Excel, and PDFs

Momentum provides a pluggable architecture for developing new connectors and attaching them to Ingester, so you can ingest data from systems that are not yet supported.
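
The pluggable connector model can be pictured as a small interface that custom connectors implement. The `Connector` base class, its `connect()`/`read()` hooks, and the toy subclass below are illustrative assumptions, not Momentum's actual plugin API:

```python
# Hypothetical sketch of a pluggable Ingester connector. The base-class
# name and method signatures are assumptions for illustration only.
from abc import ABC, abstractmethod
from typing import Iterator

class Connector(ABC):
    """Interface a custom Ingester connector might implement."""

    @abstractmethod
    def connect(self) -> None:
        """Open a session with the external system."""

    @abstractmethod
    def read(self) -> Iterator[dict]:
        """Yield records pulled from the source."""

class InMemoryConnector(Connector):
    """Toy connector that serves records from a local list."""

    def __init__(self, records: list):
        self.records = records
        self.connected = False

    def connect(self) -> None:
        self.connected = True

    def read(self) -> Iterator[dict]:
        if not self.connected:
            raise RuntimeError("connect() must be called first")
        yield from self.records

connector = InMemoryConnector([{"id": 1}, {"id": 2}])
connector.connect()
print(list(connector.read()))  # [{'id': 1}, {'id': 2}]
```

A real connector would replace the in-memory list with, say, a database cursor or an HTTP client, while keeping the same two hooks.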

Transformer

Accurate data is essential, so data ingested from a source may require cleaning or correction. You may also need to blend data from different sources, or transform it to be usable, meaningful, and trustworthy. Most machine learning algorithms require data to be in specific formats.

The Transformer provides a UI-driven approach to data wrangling. Here is what you can do with the Transformer:

  • Transformer provides an ANSI SQL-compliant query engine for working with your data.
  • Using the power of SQL, you can blend data from different sources, even when they are not related in the way an RDBMS requires.
  • You can write multi-step queries on a single UI page to perform all data cleaning and transformation work.
  • The series of transformation queries executes on a distributed cluster, which provides speed and scale.
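
To make the multi-step, SQL-based blending concrete, here is a standalone sketch using Python's built-in sqlite3, which stands in for Momentum's distributed query engine; the table names and sample data are invented for the example:

```python
# Illustration of multi-step SQL blending across unrelated sources.
# sqlite3 is a stand-in for the Transformer's query engine.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE crm_users (email TEXT, name TEXT);
    CREATE TABLE web_events (email TEXT, clicks INTEGER);
    INSERT INTO crm_users VALUES ('a@x.com', 'Ada'), ('b@x.com', 'Bo');
    INSERT INTO web_events VALUES ('a@x.com', 3), ('a@x.com', 2);

    -- Step 1: clean and aggregate one source.
    CREATE TABLE clicks_per_user AS
        SELECT email, SUM(clicks) AS total_clicks
        FROM web_events GROUP BY email;

    -- Step 2: blend it with a second source that shares no
    -- foreign-key relationship, only a common column.
    CREATE TABLE blended AS
        SELECT u.name, COALESCE(c.total_clicks, 0) AS total_clicks
        FROM crm_users u LEFT JOIN clicks_per_user c USING (email);
""")
print(conn.execute("SELECT * FROM blended ORDER BY name").fetchall())
# [('Ada', 5), ('Bo', 0)]
```

Each step builds on the previous one, mirroring how multi-step queries on a single UI page would chain transformations.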

Emitter

All data processed within Momentum is stored in a distributed file system, which allows you to create enterprise-scale data warehouses. However, you may need to transmit data from Momentum to external systems. Emitter is designed to do exactly that.

  • Using an emitter, you can transmit data from Momentum to virtually any external system.
  • Data can be emitted to external systems in a variety of formats.
  • Momentum also provides a pluggable architecture to develop a custom emitter.
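
A custom emitter can be pictured the same way as a custom connector: a small interface with a delivery hook. The `Emitter` base class and `emit()` signature below are assumptions for illustration, not Momentum's actual plugin API:

```python
# Hypothetical sketch of a pluggable emitter. The class and method
# names are invented for this example.
import io
import json
from abc import ABC, abstractmethod

class Emitter(ABC):
    """Interface a custom emitter might implement."""

    @abstractmethod
    def emit(self, record: dict) -> None:
        """Deliver one record to the external system."""

class JsonLinesEmitter(Emitter):
    """Toy emitter that writes records as JSON lines to a file-like sink."""

    def __init__(self, sink):
        self.sink = sink

    def emit(self, record: dict) -> None:
        self.sink.write(json.dumps(record) + "\n")

buf = io.StringIO()
emitter = JsonLinesEmitter(buf)
emitter.emit({"id": 1, "name": "Ada"})
print(buf.getvalue())  # {"id": 1, "name": "Ada"}
```

A production emitter would swap the in-memory buffer for a message queue, database connection, or HTTP endpoint.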

Pipeline

Automate data ingestion and transformation using Pipeline. Here is what you can do with Pipeline:

  • Using an intuitive UI and drag-and-drop tools, you can build complex data automation pipelines.
  • You can run on-demand or scheduled data automation that may include one or more ingesters, transformers, and emitters.
  • You can chain multiple pipelines together.
  • The pipeline runs on top of a scalable distributed cluster to provide speed and scale to process terabytes and petabytes of data.
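
Momentum builds these pipelines through a drag-and-drop UI rather than code, but the chaining behavior described above can be sketched in a few lines; the `Pipeline` class and stage functions here are illustrative assumptions only:

```python
# Minimal sketch of chaining pipeline stages (ingest -> transform -> emit).
# Names and structure are invented for illustration.
from typing import Callable, Iterable, List

Stage = Callable[[Iterable[dict]], Iterable[dict]]

class Pipeline:
    def __init__(self, *stages: Stage):
        self.stages = list(stages)

    def then(self, other: "Pipeline") -> "Pipeline":
        """Chain another pipeline after this one."""
        return Pipeline(*self.stages, *other.stages)

    def run(self, records: Iterable[dict]) -> List[dict]:
        for stage in self.stages:
            records = stage(records)
        return list(records)

def drop_nulls(records):
    """Cleaning stage: discard records with a missing value."""
    return (r for r in records if r.get("value") is not None)

def double(records):
    """Transformation stage: double each record's value."""
    return ({**r, "value": r["value"] * 2} for r in records)

clean = Pipeline(drop_nulls)
transform = Pipeline(double)
result = clean.then(transform).run(
    [{"value": 1}, {"value": None}, {"value": 3}]
)
print(result)  # [{'value': 2}, {'value': 6}]
```

The `then()` call is the code analogue of chaining two pipelines together in the UI.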

Ready To Embrace The Future

If you are working on a data engineering or AI solution, trying to explore a use case, or building a proof-of-concept, please contact us for a one-on-one discussion.