I have an odd need: I have multiple input CSV files. I need to read each one, enrich each record with a few extra columns derived from the file name, and finally persist the records to a database. How do I get the file name in the Writer or Mapper?
Basically, some computation based on the file name is needed to create the extra columns. For example, if the file name is FILENAME_datetime_userid_productcode.csv, then the datetime and product code need to be used for further calculation, and the userid needs to be used for linking a few other incoming files.
I am planning to 1) use MultiFlatFileRecordReader to read the incoming files, 2) use JdbcRecordMapper to convert each record into a POJO, and then use JdbcRecordWriter. But I am not sure how to read the incoming file name in the mapper. Can you please suggest an approach?
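For what it's worth, the file-name parsing itself is independent of which reader is used. Here is a minimal sketch of splitting a name like FILENAME_datetime_userid_productcode.csv into its components (all names here are hypothetical helpers, not Easy Batch APIs):

```java
// Hypothetical helper (illustrative only, not part of Easy Batch):
// splits a file name of the form FILENAME_datetime_userid_productcode.csv
// into the pieces needed to enrich each record.
class FileNameParts {
    final String dateTime;
    final String userId;
    final String productCode;

    FileNameParts(String dateTime, String userId, String productCode) {
        this.dateTime = dateTime;
        this.userId = userId;
        this.productCode = productCode;
    }

    static FileNameParts parse(String fileName) {
        // strip the ".csv" extension, then split on underscores
        String base = fileName.substring(0, fileName.lastIndexOf('.'));
        String[] tokens = base.split("_");
        if (tokens.length < 4) {
            throw new IllegalArgumentException("Unexpected file name: " + fileName);
        }
        // tokens[0] is the FILENAME prefix; the rest carry the metadata
        return new FileNameParts(tokens[1], tokens[2], tokens[3]);
    }
}
```

A custom mapper could call such a helper once per file and attach the parsed values to every record before writing.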
Easy Batch v6.0 was released a few days ago: https://github.com/j-easy/easy-batch/releases/tag/easy-batch-6.0.0. This is probably the best release since the project began. It is based on Java 8 and provides some nice new features like batch scanning (j-easy/easy-batch#363). I would like to thank all contributors who helped make this release possible!
Easy Batch is now trusted by some serious companies like Splunk (see https://docs.splunk.com/Documentation/DBX/3.2.0/ReleaseNotes/easybatch) and others (https://speakerdeck.com/benas/easy-batch?slide=41). This is very rewarding, and I'm proud to see it happening. Easy Batch needs more users, so if you have used it successfully, do not hesitate to tweet about it, star it on GitHub, write a blog post about it, and spread the word!
Record<P> is generic: https://github.com/j-easy/easy-batch/blob/master/easy-batch-core/src/main/java/org/jeasy/batch/core/record/Record.java#L56. As far as the record writer is concerned, it has to write records (regardless of their payload), so it does not have to be generic. Why do you need a cast? If you share an example, I can help.
@repolevedavaj While reviewing the code after your feedback regarding generic APIs, I noticed that almost all APIs are not generic and hence not type safe. I updated all APIs to be generic (including RecordWriter, see the last 8 commits here), so there is no need to cast the record's payload in your writer anymore. I deployed v7.0.0-SNAPSHOT with the updated APIs. Can you give it a try and share your feedback compared to your initial experience with v6?
With v7, it is required to specify the input/output types at the job builder level to enforce type safety across the job definition:
```java
Job job = new JobBuilder<String, Tweet>()
        .batchSize(2)
        .reader(new FlatFileRecordReader(tweets))
        .mapper(new DelimitedRecordMapper<>(Tweet.class, fields))
        .writer(new JdbcRecordWriter<>(dataSource, query,
                new BeanPropertiesPreparedStatementProvider(Tweet.class, fields)))
        .build();
```
In this example, we explicitly say we are reading Strings and writing Tweets. Any attempt to pass a RecordWriter<Integer>, for example, would not compile (which is not the case with v6).
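The type-safety idea can be shown with a toy analogue in plain Java generics (this is NOT the Easy Batch API, just an illustration): once the builder-like type is generic in the input and output types, the compiler rejects any component whose type does not line up.

```java
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;

// Toy analogue of the v7 change (not the real Easy Batch API):
// a job generic in <I, O> only accepts a mapper from I to O and a
// writer of O, so mismatched components fail at compile time.
class ToyJob<I, O> {
    private final Function<I, O> mapper; // maps an input record to an output record
    private final Consumer<O> writer;    // writes output records

    ToyJob(Function<I, O> mapper, Consumer<O> writer) {
        this.mapper = mapper;
        this.writer = writer;
    }

    void run(List<I> input) {
        for (I record : input) {
            writer.accept(mapper.apply(record)); // types checked by the compiler
        }
    }
}
```

For instance, a ToyJob<String, Integer> built with String::length and a Consumer<String> writer would not compile, which mirrors the RecordWriter<Integer> example above.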
Looking forward to your feedback. Thanks in advance!