storage-logger

0.2.3 • Public • Published

A Node.js module responsible for uploading incoming logs to storage (S3) in a desired format. For example, in the format that a Redshift database will expect.

The module will upload two versions of each log to storage:

  1. the formatted version of the log.
  2. the original version of the log.

Logs are written to disk before they are uploaded to storage. A new log file is created on every new date (configurable), and whenever the log size exceeds the limit (configurable).
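The rotation rule described above can be sketched as follows (a hypothetical helper to illustrate the behavior, not the module's actual code; field names are assumptions):

```javascript
// Hypothetical sketch: start a new log file when the date changes or the
// current file exceeds sizeLimit. Not the module's actual implementation.
function shouldRotate(currentFile, now, sizeLimit) {
  const today = now.toISOString().slice(0, 10); // matches a yyyy-MM-dd datePattern
  return currentFile.date !== today || currentFile.sizeBytes > sizeLimit;
}

const file = { date: '2016-01-01', sizeBytes: 4200 };
shouldRotate(file, new Date('2016-01-01T10:00:00Z'), 5000); // same day, under limit: no rotation
shouldRotate(file, new Date('2016-01-02T10:00:00Z'), 5000); // date changed: rotate
```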

After the logs are uploaded, the files are moved into an archive folder (unless you configure it differently). If the logs fail to upload, the files are moved into an errors-archive folder (unless you configure it differently).

The module comes with the following API:

  • write(dataObjectOrJson)
  • store(configurationFileOrObject)
  • configure(configurationFileOrObject)
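write() accepts either a data object or a JSON string. A minimal sketch of the input normalization such a signature implies (a hypothetical helper, not the module's actual code):

```javascript
// Hypothetical helper: normalize write()'s dataObjectOrJson argument.
// The module's real internals may differ.
function normalizeInput(dataObjectOrJson) {
  if (typeof dataObjectOrJson === 'string') {
    return JSON.parse(dataObjectOrJson); // JSON string form
  }
  return dataObjectOrJson; // plain object form
}

// Both call forms yield the same event object:
normalizeInput('{"input_key1": "value"}');
normalizeInput({ input_key1: 'value' });
```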

The module can be configured with a JSON file like the following (the bracketed notes describe each key; the file itself must be valid JSON):

```
{
  "datafile"       : "/path/to/data.log",         [optional, defaults to ../storage-logger/data.log]
  "origfile"       : "/path/to/orig.log",         [optional, defaults to ../storage-logger/orig.log]
  "archiveDir"     : "/path/to/archive",          [optional, defaults to ../storage-logger-archive]
  "errorsDir"      : "/path/to/archive-errors",   [optional, defaults to ../storage-logger-archive-errors]
  "archive"        : true,                        [optional, defaults to true]
  "archive_errors" : true,                        [optional, defaults to true]
  "sizeLimit"      : "5000",                      [optional]
  "datePattern"    : "yyyy-MM-dd",                [optional, defaults to yyyy-MM-dd]
  "delimiter"      : ",",                         [optional, defaults to ',']

  "event_schema": {
    "input_key1" : { "name": "output_key1", "type": "string" },
    "input_key2" : { "name": "output_key2", "type": "timestamp", "format": "YYYY-MM-DD HH:mm:ss", "required": true },
    "input_key3" : { "name": "output_key3", "type": "bool" },
    "input_key4" : { "name": "output_key4", "type": "number" }
  },

  "aws": {
    "access_key" : "XXXXXXXXXXXXXXXXXX",
    "secret_key" : "XXXXXXXXXXXXXXXXXX",
    "bucket"     : "path.to.s3.bucket"
  }
}
```

As you can see, the event_schema has a very detailed structure. For each input key there is a field object that describes:

  1. name: the output key (required).
  2. type (optional): helps the module ensure the log entry is valid.
  3. required (optional): if a required field is missing or invalid, the log will not be written and the data will be returned to the sender (otherwise null is returned).
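The mapping that event_schema implies can be sketched like this (a hypothetical illustration of renaming, required-field validation, and delimiter joining; not the module's actual source):

```javascript
// Hypothetical sketch of the event_schema transformation: rename input keys
// to output keys, enforce "required", and join values with the delimiter.
const schema = {
  input_key1: { name: 'output_key1', type: 'string' },
  input_key2: { name: 'output_key2', type: 'timestamp', required: true }
};

function formatEvent(event, schema, delimiter) {
  const values = [];
  for (const inputKey of Object.keys(schema)) {
    const field = schema[inputKey];
    const value = event[inputKey];
    if (value === undefined || value === null) {
      if (field.required) return { error: event }; // invalid: data goes back to the sender
      values.push(''); // optional fields may be empty
      continue;
    }
    values.push(String(value));
  }
  return { line: values.join(delimiter) };
}

formatEvent({ input_key1: 'a', input_key2: '2016-01-01 12:00:00' }, schema, ',');
// valid: yields a delimited line
formatEvent({ input_key1: 'a' }, schema, ',');
// required field missing: yields an error with the original data
```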

The configuration has been designed to fail fast, so if you make an error in the config file it will fail immediately. (That being said... I hope there are NO bugs :) )

Install

npm i storage-logger

License

none

Collaborators

  • ofer.velich