IMPLEMENTED: Add support for exporting in Matlab -v7.3 MAT File Format to support files >2GB.
Quote from brto on 08/02/2023, 16:20
Add support for exporting in Matlab -v7.3 MAT File Format to support files >2GB.
In RTSAFileTool.exe and RTSA PRO File Blocks.
See the following discussion for background.
Quote from DevSF on 14/03/2023, 11:34
The Matlab file format >= 7.3 is a proprietary file format based on the HDF5 file format.
Rather than supporting only the Matlab file format, we implemented an HDF5 data file export, which is also supported by many other third-party tools.
Matlab should handle the HDF5 format quite well, as stated here:
https://www.mathworks.com/help/matlab/hdf5-files.html
Any feedback is welcome.
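As a rough sketch of what "supported by many other third-party tools" means in practice, an exported HDF5 file can also be inspected from Python with h5py. The dataset and attribute names in a real export are not documented in this thread, so this is only a generic inspection snippet with a placeholder file name:

```python
import h5py

# Generic HDF5 inspection sketch -- the dataset/attribute names in an actual
# RTSA export are not documented here; this just lists whatever the file contains.
with h5py.File("export.h5", "r") as f:    # "export.h5" is a placeholder path
    def show(name, obj):
        print(name, dict(obj.attrs))      # path plus attributes of each object
    f.visititems(show)
```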
Quote from brto on 14/03/2023, 12:13
Excellent. Sounds great.
Does this mean that you have implemented it and it is available in a nightly build?
I would be happy to test it out when it is ready.
Also, is there an option to output both the IQ data and a sample timestamp in the HDF5 file? I think some of the output options contain just the IQ data, a start time and a sample rate, with the assumption that the data samples are continuous in time, so you don't need an explicit time column.
For some runs we see some packet dropouts, so the IQ data is not continuous in time, and having a timestamp per sample is useful: especially for aligning multiple channels which might have different dropouts. It would need 64-bit timestamps to give at least 1 ns resolution. That would make the files bigger though.
Cheers,
Brian
Quote from DevSF on 15/03/2023, 10:22
Hi Brian,
yes, it is already implemented and available in the nightly build. If you want to test it, feel free to give us feedback.
The HDF5 data export contains a start and an end time as 64-bit floating point values (seconds since epoch) and also the sample rate as "Span" in the attributes of the HDF5 dataset. To get a timestamp per sample, you still need to interpolate down to single samples.
The IQ samples should be continuous in time if you had no issues while processing/recording the IQ data with the File Writer block. Otherwise, there might be an issue, or you are working with multiple record files and changed the measurement parameters in between. Our data export only exports the first measurement in the recorded .rtsa file.
One timestamp per sample would be a lot of overhead (~size*2) and would still be a time interpolation for a single IQ sample. Internally we have blocks (time slices) of IQ data samples; each block has metadata like the start and end time of the IQ data slice etc. For simplicity, in the HDF5 data export this metadata is interpreted and written only once for the whole data export in the attributes of the HDF5 dataset.
The IQ data is stored in the HDF5 file as uncompressed contiguous interleaved float values, which should give good performance for read/write operations. To add the metadata for each single time slice, it would make more sense to add a separate dataset for the metadata with indexes into the data samples. That metadata would then still need to be mapped to the data by the end user.
We haven't implemented a separate HDF5 File Writer block, but if there is enough customer interest we would do so. Maybe as a licensed block for a small fee. In that case, I guess we would need to add data compression in the HDF5 file because of the speed limitations of hard drives. But this block would certainly save the metadata for each time slice separately.
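To make the per-sample timestamp interpolation concrete, here is a minimal Python/h5py sketch. It assumes one dataset of interleaved I/Q float samples with the start and end time stored as attributes of that dataset; the names DATASET, ATTR_START and ATTR_END below are assumptions, not the actual names written by the export, and the end-time convention (time of the last sample vs. end of capture) also needs checking against a real file:

```python
import h5py
import numpy as np

# Assumed names -- adjust to whatever the actual RTSA HDF5 export uses.
DATASET = "IQ"            # interleaved I/Q float samples
ATTR_START = "StartTime"  # 64-bit float, seconds since epoch
ATTR_END = "EndTime"      # 64-bit float, seconds since epoch

with h5py.File("export.h5", "r") as f:    # placeholder path
    ds = f[DATASET]
    raw = ds[...]                         # interleaved: I0, Q0, I1, Q1, ...
    iq = raw[0::2] + 1j * raw[1::2]       # combine into complex samples

    start = float(ds.attrs[ATTR_START])
    end = float(ds.attrs[ATTR_END])

# Linear interpolation between start and end time, one value per complex sample;
# this is only valid if the samples really are continuous in time (no dropouts).
t = np.linspace(start, end, num=iq.size, endpoint=False)
print(f"{iq.size} samples spanning {end - start:.6f} s")
```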
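Purely as an illustration of the separate-metadata idea above, and not an existing feature: per-time-slice metadata could be written as its own compound dataset carrying a first-sample index plus slice start/end times, which a reader would then map onto the IQ samples. The layout, names and values below are made up for the sketch:

```python
import h5py
import numpy as np

# Hypothetical per-slice metadata layout -- nothing like this is written by the
# current export; it only illustrates "a separate dataset with indexes".
slice_dtype = np.dtype([("first_sample", np.int64),   # index of the slice's first complex sample
                        ("start_time", np.float64),   # slice start, seconds since epoch
                        ("end_time", np.float64)])    # slice end, seconds since epoch

# Two example slices of 100000 complex samples each, with a gap between them.
slices = np.array([(0,      1678871400.0, 1678871400.1),
                   (100000, 1678871400.2, 1678871400.3)], dtype=slice_dtype)

with h5py.File("export_with_slices.h5", "w") as f:
    # Placeholder interleaved I/Q data: 200000 complex samples = 400000 floats.
    f.create_dataset("IQ", data=np.zeros(2 * 200000, dtype=np.float32))
    f.create_dataset("Slices", data=slices)   # per-slice metadata with sample indexes
```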
Quote from brto on 28/03/2023, 19:13
Apologies for not getting back earlier, but that all sounds wonderful.
If you have a link to a nightly build that would be great, and I can test it out.
Just out of curiosity, do you have any idea of dates for the next formal release for Windows and Linux? If that is soonish I can wait for that.
Warm regards,
Brian
Quote from DevSF on 31/03/2023, 12:52
No problem,
please send an email to [email protected] and request our nightly build.
Currently I am not sure when we will create the next release version.