Baba took the whole clan to Fenway to see the Red Sox play the Chicago White Sox. It was an afternoon game so that the kids could come, too.
Baba generously bought us stadium food as well – no small sum for hot dogs, pretzels, pizza, and Del’s Lemonade.
The Sox stunk until the bottom of the 5th inning, at which point they came back from a 4-0 White Sox lead to win 8-7.
Delta fell asleep by the 8th inning, which is surprising given the amount of noise every time the Sox got a run. Kappa, who’s about a year and a half, stayed awake and in mostly good spirits through the entire game. Beta was well behaved, and Alpha genuinely enjoyed herself.
We left as the 10th inning was starting so that we could avoid some of the crowds with the kids and headed down the street to get dinner at Wahlburgers.
I was greeted by a few cryptic things in NiFi this morning during my morning check-in.
A PutSQL processor was reporting an error:
"ERROR: PutSQL[id=$UUID>]failed to process due to java.lang.IndexOutOfBoundsException: Index: 1, Size: 1; rolling back session: java.lang.IndexOutOfBoundsException: Index: 1, Size: 1"
There were no recent errors counted in the LogAttribute counter we set up to record errors.
The Tasks/Time count in the PutSQL processor was through the roof, despite the errors and lack of successes.
Needless to say, the processor was all bound up and a number of tasks were queued. Not a good start to my day.
I checked the data provenance and didn’t see anything remarkable about the backed-up data. The error message suggests (to me) that the first statement parameter is at fault, and that parameter happened to be a date (which has been problematic for me in NiFi with a MySQL backend). But neither that value nor the rest of the values were remarkable or illegal for the fields they’re going into.
It wasn’t until I spent some time looking over the source data that I saw the problem: there is a duplicate key in the data. This error is NiFi’s way of complaining about it.
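Spotting the problem came down to scanning the source data for repeated keys. A minimal sketch of that check, with made-up sample rows and a made-up (date, id) key, since the real table and fields aren’t shown here:

```python
from collections import Counter

# Hypothetical sample of source rows; in our case the effective key was
# something like a (date, id) pair that the table treated as the primary key.
rows = [
    ("2021-06-01", 101, "reading A"),
    ("2021-06-01", 102, "reading B"),
    ("2021-06-01", 101, "reading C"),  # same (date, id) key as the first row
]

# Count each key and keep only the ones that appear more than once.
key_counts = Counter((date, rec_id) for date, rec_id, _ in rows)
duplicates = {k: n for k, n in key_counts.items() if n > 1}
print(duplicates)  # {('2021-06-01', 101): 2}
```

Any non-empty result here means the destination table's primary key will reject some of the incoming rows.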
In our case the underlying table doesn’t have good keys, or a good structure in general, and I’m planning to replace it soon anyway. In the meantime, updating the primary key to allow the “duplicate” data (because it IS valid data, despite the table design) has solved the issue.
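The fix amounts to widening a too-narrow primary key into a composite one, so rows that are valid but share one column are no longer rejected. A minimal sketch of that idea using SQLite as a stand-in for the real MySQL table (table and column names are invented; the real schema isn’t shown here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Too-narrow key: the date alone is the primary key, so a second valid
# reading on the same date is rejected as a duplicate.
conn.execute("CREATE TABLE readings_old (reading_date TEXT PRIMARY KEY, value REAL)")
conn.execute("INSERT INTO readings_old VALUES ('2021-06-01', 1.5)")
try:
    conn.execute("INSERT INTO readings_old VALUES ('2021-06-01', 2.5)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)

# Widened key: a composite primary key on (date, sensor) accepts both rows,
# because together the columns are actually unique.
conn.execute(
    "CREATE TABLE readings_new ("
    "reading_date TEXT, sensor_id INTEGER, value REAL, "
    "PRIMARY KEY (reading_date, sensor_id))"
)
conn.execute("INSERT INTO readings_new VALUES ('2021-06-01', 1, 1.5)")
conn.execute("INSERT INTO readings_new VALUES ('2021-06-01', 2, 2.5)")
print(conn.execute("SELECT COUNT(*) FROM readings_new").fetchone()[0])  # 2
```

On the real MySQL table this would be an ALTER TABLE to drop and re-add the primary key, but the principle is the same.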