Error tokenizing data. C error: out of memory

Hello, I ran into a C error and I'm not really sure how to remedy it.

I started initial testing of my project with a CSV file containing 2 columns, each holding an array.
I validated that the dataset CSV file is clean and contains no empty or null values.

Things work fine with 7 million lines in the training set (500 MB), but as soon as I go over that, training throws an out-of-memory error.
The test server is an AMD EPYC with 88 GB of RAM, used in CPU mode.

I tried loading the data via the file manager; nothing makes any difference. This appears to be an issue with batch size.

I want to train on datasets that will be 80-100 GB, so hitting issues at 1 GB is problematic.

This is definitely caused by the large CSV file size; even with 256 GB of RAM I see this issue. I'm not sure whether that RAM is allocated to the container, though.
Large files are typically sliced into smaller parts as the data is fed, and I thought the batching option in the data configuration did just that, but that does not seem to be the case.
There are many references to the chunksize option being the culprit when loading the data. Where can this setting be changed?
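For reference, a minimal sketch of what chunked loading looks like, assuming the framework loads the CSV with pandas under the hood (the function name `iter_csv_chunks` and the chunk size value are my own illustration, not part of any specific tool's API):

```python
import pandas as pd

def iter_csv_chunks(path, chunksize=100_000):
    """Yield the CSV in pieces of `chunksize` rows instead of
    materializing the whole file as a single DataFrame.
    chunksize=100_000 is an arbitrary example value; tune it to fit RAM."""
    with pd.read_csv(path, chunksize=chunksize) as reader:
        for chunk in reader:
            yield chunk
```

With `chunksize` set, `pd.read_csv` returns an iterator of DataFrames rather than one DataFrame, so peak memory stays roughly proportional to the chunk size instead of the file size.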

So I am confused as to what memory it is running out of when loading a 1.5 GB CSV file with 256 GB of RAM available on the server, and the GPUs having 8 GB each.