So you want to move your data into another environment, but it's stuck in Hadoop. Or perhaps you have already moved some of your Hadoop Sequence Files into S3. Now what? You're not looking to make any changes to the data; you just want to mobilize it outside of the Hadoop ecosystem. Reading the data through Hive is slow, and it requires the data to live in Hadoop. So how do you get to the data now that it's in S3 without spinning up Hadoop? What if there were a way of reading the sequence files and outputting them to standardized Avro, JSON, or Parquet files? Intricity's read SEQ offers this simple but powerful flexibility.
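To see why sequence files are awkward to use outside Hadoop, it helps to look at their binary layout. Below is a minimal Python sketch (not Intricity's implementation) that parses the leading fields of a SequenceFile header, assuming an uncompressed, version-6 file whose class names are shorter than 128 bytes so each length fits in a single vint byte:

```python
def parse_seq_header(buf: bytes) -> dict:
    """Parse the fixed leading fields of a Hadoop SequenceFile header.

    Sketch only: assumes an uncompressed, version-6 file with
    class names shorter than 128 bytes (single-byte vint length).
    """
    if buf[:3] != b"SEQ":
        raise ValueError("not a SequenceFile: missing SEQ magic bytes")
    version = buf[3]
    pos = 4

    def read_text(pos: int):
        # Text.writeString: vint length, then UTF-8 bytes.
        # For lengths under 128 the vint is a single byte.
        length = buf[pos]
        pos += 1
        return buf[pos:pos + length].decode("utf-8"), pos + length

    key_class, pos = read_text(pos)
    value_class, pos = read_text(pos)
    compressed = bool(buf[pos]); pos += 1
    block_compressed = bool(buf[pos]); pos += 1
    return {
        "version": version,
        "key_class": key_class,
        "value_class": value_class,
        "compressed": compressed,
        "block_compressed": block_compressed,
    }

# Build a synthetic header to demonstrate the layout.
key = b"org.apache.hadoop.io.Text"
val = b"org.apache.hadoop.io.BytesWritable"
header = (b"SEQ\x06"
          + bytes([len(key)]) + key
          + bytes([len(val)]) + val
          + b"\x00\x00")  # not compressed, not block-compressed

print(parse_seq_header(header)["key_class"])  # org.apache.hadoop.io.Text
```

Even this toy parser shows the catch: the key and value classes are Java Writable types, so actually decoding the records normally drags in Hadoop's Java serialization stack, which is exactly the dependency a conversion service removes.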

Get a Quote and Save Weeks of Work!

Fill out the form below and we'll provide you with a cost per Gigabyte (GB).



Instead of spending weeks designing workarounds, you can leverage Intricity's read SEQ, a hosted web service that reads your Hadoop sequence files and converts them to your preferred format.

Imagine being able to treat Hadoop data as a Parquet, Avro, or JSON file straight from the sequence files! read SEQ simplifies the replication and mobility of Hadoop data. With your Hadoop environment in an Amazon S3 bucket, Intricity is ready to help you get your data in the format of your choice. Fill out the registration form on this page to start the process and get a quote.

Related Pages: