Working with FireDAC BatchMove

Who actually knows or uses the FireDAC BatchMove component? Or LocalSQL? For most developers, these components are relatively unknown, even though they are very useful. In two blogs, I want to explain the added value of these components and show how they work. This blog is about the BatchMove component.

The added value

BatchMove allows you to easily move data between different data sources and data structures. Think of loading data from a CSV file into a DataSet, exporting the contents of a DataSet to a JSON file, or writing the result of a database query to another database.

A batch move therefore always consists of a Reader and a Writer. There are Readers for text files, SQL queries and DataSets, and there are Writers for text files, SQL and DataSets, as well as for JSON. You can combine these Readers and Writers in any way you like to move data between different sources, and that is much easier and faster than writing such code by hand.
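For reference, the Reader and Writer classes live in separate FireDAC units. A minimal uses clause for the CSV-to-DataSet scenario in this blog could look like this (unit names as in recent FireDAC versions; you only need the units you actually use):

uses
  FireDAC.Comp.BatchMove,          // TFDBatchMove itself
  FireDAC.Comp.BatchMove.Text,     // TFDBatchMoveTextReader / TFDBatchMoveTextWriter
  FireDAC.Comp.BatchMove.DataSet,  // TFDBatchMoveDataSetReader / TFDBatchMoveDataSetWriter
  FireDAC.Comp.BatchMove.SQL,      // TFDBatchMoveSQLReader / TFDBatchMoveSQLWriter
  FireDAC.Comp.BatchMove.JSON;     // TFDBatchMoveJSONWriter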

The basic design

Below you can see the setup of my application. On the left, there is a connection to the database and the QueryCustomers query, which reads the data from the database. On the right, you see the CsvDataset, a FireDAC in-memory table (TFDMemTable). The CsvMove component (TFDBatchMove) loads the data from the CSV into this dataset. To do so, the CsvMove is linked to the CsvReader (TFDBatchMoveTextReader) and the CsvDatasetWriter (TFDBatchMoveDataSetWriter).

The first two columns of the grid show the customer data from the database; the three columns after that come from the CSV. This data is combined via the CombinedLocalSQL (TFDLocalSQL) component.

The configuration for reading

The first step is to read the CSV. For this, we configure the CsvReader component. The screenshots below show that there are several properties to specify the file and its format. Via the Fields property, you specify which fields (or columns) are present in the file, and with the WithFieldNames option you indicate whether the file also contains the field names as column titles.

For each field, you can specify its data type and (if applicable) its size and precision. In the screenshots below, you can see how the fields are defined in the CsvReader and how they actually appear in the CSV. The column titles in the file differ from the defined field names, but the CSV Reader interprets this correctly.
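If you prefer to configure the reader in code instead of the Object Inspector, a minimal sketch could look like this (the file name, separator and field names are made up for this example):

CsvReader.FileName := 'customers.csv';       // path to the CSV file (hypothetical)
CsvReader.DataDef.Separator := ';';          // field separator used in the file
CsvReader.DataDef.WithFieldNames := True;    // the first line contains column titles
// Define the fields (columns) that are present in the file
with CsvReader.DataDef.Fields.Add do
  FieldName := 'CustomerId';
with CsvReader.DataDef.Fields.Add do
  FieldName := 'City';
// Each field definition also has a data type and, where applicable, size and precision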

The configuration for writing

For writing the CSV data, I chose a DataSet as the target, so that I can display the data in a grid. This Writer is one of the simplest; there is little to configure. The other Writers have options for formatting or for queries (in the case of the SQL Writer). The DataSet Writer has the options below.

Setting the Direct option to True means that the standard Append, Edit and Post methods of the dataset are not used when writing the data, so no dataset events are fired either; FireDAC's internal functions are used instead. This therefore only applies to a FireDAC-based dataset. For other datasets, the setting of this option does not matter.

Something similar applies to the Optimise option: it also only affects FireDAC datasets and has no effect on other types. When optimisation is on, FireDAC sets various fetch and update options that speed up writing to the dataset. Note that if the dataset is connected to data-aware components, such as a TDBGrid, turning this option on may cause problems.
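In code, the same writer settings could be applied roughly like this (CsvDataset is the TFDMemTable from the setup above):

CsvDatasetWriter.DataSet := CsvDataset;   // the target dataset, a TFDMemTable in this example
CsvDatasetWriter.Direct := True;          // bypass Append/Edit/Post and dataset events (FireDAC datasets only)
CsvDatasetWriter.Optimise := False;       // left off here because the dataset is shown in a grid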

Reading in and writing the data

To finally bring the Reader and Writer together and get the data into the CsvDataset, the TFDBatchMove component is needed; on the form it is called CsvMove. This is where you link the Reader and the Writer. In the BatchMove you can also specify the mapping between the fields of the Reader and those of the Writer. In addition, there is the Analyze option to have the source file analysed, and other properties that influence the behaviour, such as Mode (do you want to add, update or delete) and Options (empty the destination first, use transactions, and so on).
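In code, the wiring could look roughly like this (the mapping field names are hypothetical; if no Mappings are defined, BatchMove maps the fields automatically):

CsvMove.Reader := CsvReader;            // where the data comes from
CsvMove.Writer := CsvDatasetWriter;     // where the data goes to
CsvMove.Mode := dmAlwaysInsert;         // insert records; update and delete modes also exist
// Optional: map a Reader field to a Writer field with a different name
with CsvMove.Mappings.Add do
begin
  SourceFieldName := 'CustomerId';      // hypothetical field name in the CSV
  DestinationFieldName := 'Id';         // hypothetical field name in the dataset
end;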

Executing the actual move is then simply a matter of calling the Execute method in code:

CsvMove.Execute;


In summary

Moving data between CSV and JSON files, databases and datasets is made very easy by the TFDBatchMove component. Since reading from and writing to a dataset is supported, it also offers many possibilities for other solutions that can make use of a dataset: think of moving data between two different databases, or between a FireDAC dataset and an ADO dataset, for example. BatchMove is a nice FireDAC feature that deserves to be used more.

Written by Kees de Kraker
Director
