2025.3.2
Global Search
We implemented global search functionality that allows you to easily navigate Fynapse screens and available actions. The search is fuzzy, i.e. it matches results even for queries that do not perfectly match the corresponding data, instead identifying results that are similar to the search query in terms of spelling, meaning, etc.
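Fuzzy matching of this kind can be sketched in a few lines. The example below uses Python's standard-library difflib purely for illustration; Fynapse's actual matching algorithm is not documented here, and the screen list is hypothetical.

```python
from difflib import SequenceMatcher

# Hypothetical list of searchable screen names (not Fynapse's real index).
SCREENS = [
    "Accounting Rules Navigator",
    "Extract logs",
    "Finance Data Service",
    "Error Management",
]

def fuzzy_search(query, candidates, cutoff=0.4):
    """Rank candidates by similarity to the query, best match first."""
    scored = [
        (SequenceMatcher(None, query.lower(), c.lower()).ratio(), c)
        for c in candidates
    ]
    return [c for score, c in sorted(scored, reverse=True) if score >= cutoff]

# A misspelled query still finds the intended screen:
print(fuzzy_search("accntng rules", SCREENS))
```

Even the misspelled query "accntng rules" ranks "Accounting Rules Navigator" first, which is the behavior the release note describes.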
Flow Enhancements
New Flow Functions in Script Step
We added five new Flow functions:
- get_current_business_date - returns the current Business Date, i.e. the Fynapse system date adjusted for the time zone specified in the Subledger Node Configurations.
- get_current_system_date - returns the current System Date, i.e. the actual date and time of the machine you are using to work with Fynapse.
- logger.x - three functions that allow you to create custom messages that will be logged to the Error Management log; append a different suffix depending on the type of message you want to create:
  - logger.info - defines an information type message
  - logger.warning - defines a warning type message
  - logger.error - defines an error type message
For the logger.x functions, we recommend carefully planning which messages you want to implement, as too many messages may cause an information overload that is not easy to analyze.
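The distinction between Business Date and System Date can be illustrated with a short sketch. This is Python, purely for illustration: the function names mirror the Flow functions above, but the implementation is an assumption, not Fynapse's actual code.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def get_current_system_date(now=None):
    """System Date: the actual date/time of the machine (UTC here for determinism)."""
    return now or datetime.now(timezone.utc)

def get_current_business_date(node_tz, now=None):
    """Business Date: the system date adjusted to the Subledger Node's time zone."""
    return get_current_system_date(now).astimezone(ZoneInfo(node_tz))

# A UTC timestamp late on 30 June is already 1 July in Tokyo,
# so the two functions can return different calendar dates:
utc_now = datetime(2025, 6, 30, 23, 0, tzinfo=timezone.utc)
print(get_current_system_date(utc_now).date())               # 2025-06-30
print(get_current_business_date("Asia/Tokyo", utc_now).date())  # 2025-07-01
```

This is why end-of-day processing should use the Business Date rather than the machine's own clock.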
Extract Processing
We enhanced the data extraction functionality. Now, when you create an extract, you can choose an Entity as the target where the data will be extracted to. The data from the target Entity can then be used as input in a Flow.
If you want to use an Entity populated by an extract as input for a Flow, remember that any data exported to the Entity before the Flow is up and running will not be processed.
This is especially important if you want to use the scheduler feature: schedule the first extract for a date when you know the Flow will be ready.
We recommend the following steps:
- Define the extract. It is key to do this first, especially if you want to create a transient Entity (see below), as transient Entities won’t otherwise be available in the Input step.
If you are using the scheduler, remember to schedule the first extract for after the Flow is up and running.
- Define your Flow.
- Run the first extract.
If you did run an extract before the Flow was ready, you can still process the already extracted data by re-running the extract on the Extract logs screen.
Likewise, if an extract is run against an incorrect Flow and all data is errored due to errors in the Flow, you can fix the Flow and re-run the extract.
Because each extract creates a set of data, re-running an extract creates duplicate records.
To handle these duplicated records, we have the extractId.
The extractId has the form ex_<extract_log_uuid>_<rerun_number> and is added to all transient Entities. For Transaction type transient Entities it ensures correct deduplication of data. For Reference type transient Entities it is ignored, as reference data is not deduplicated.
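The extractId format can be reproduced with a one-line helper (Python, for illustration only; the UUID below is a made-up example):

```python
import uuid

def make_extract_id(extract_log_uuid, rerun_number):
    """Compose an extractId of the documented form ex_<extract_log_uuid>_<rerun_number>."""
    return f"ex_{extract_log_uuid}_{rerun_number}"

log_uuid = uuid.UUID("12345678-1234-5678-1234-567812345678")
print(make_extract_id(log_uuid, 1))
# ex_12345678-1234-5678-1234-567812345678_1
```

Note that the extract log UUID stays the same across re-runs; only the rerun number changes.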
In Transaction type transient Entities, the extractId is added to the Primary Key. The Primary Key of a Transaction type transient Entity consists of:
- the Primary Key of the source Entity plus the extractId, if no aggregation is set
- the aggregation criteria plus the extractId, if aggregation is set

For user-configured Transaction Entities, you have to add the extractId to the Entity Primary Key yourself. If you don’t add the extractId, the data will be deduplicated based on the other properties set in the Primary Key, without differentiation between extract runs. For user-configured Reference Entities, we recommend not adding the extractId, as this will cause multiplication of records due to differing extractIds.
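The effect of including the extractId in the Primary Key can be sketched as follows. Plain Python dictionaries stand in for Entity records here, and the field names are hypothetical:

```python
def deduplicate(records, key_fields):
    """Keep only the last record seen for each distinct Primary Key value."""
    by_key = {}
    for rec in records:
        by_key[tuple(rec[f] for f in key_fields)] = rec
    return list(by_key.values())

# The same transaction extracted twice: once in the initial run, once in a re-run.
records = [
    {"txn_id": "T1", "amount": 100, "extractId": "ex_abc_0"},
    {"txn_id": "T1", "amount": 100, "extractId": "ex_abc_1"},
]

# Without the extractId in the key, the two runs collapse into one record:
print(len(deduplicate(records, ["txn_id"])))               # 1
# With the extractId in the key, the runs remain differentiated:
print(len(deduplicate(records, ["txn_id", "extractId"])))  # 2
```

This mirrors the note above: omitting the extractId from a user-configured Transaction Entity's Primary Key removes the differentiation between extract runs.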
The target Entity can be:
- An existing FDS Entity, which has to have an identical structure to the source Entity.
- A transient Entity, which is created when you select the Create new option in the Entity name drop-down while defining a target for a new extract. Transient Entities inherit the temporalityType of the source Entity and are created exclusively for use as input for Flows. They are visible in the grid on the Finance Data Service screen if their inherited temporalityType is Reference, but the data contained in them cannot be browsed when you open the Entity details screen.
They are downloaded in the Configuration Data JSON file and can be reuploaded.
It is possible to define validations and enrichment for transient Entities by downloading the Configuration Data JSON file, adding the validations and enrichment, and then reuploading the file.
Please remember that the added validations and enrichment have to be compatible with the existing data model.
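The download/edit/reupload round trip can be sketched like this. The JSON keys and the validation rule format below are hypothetical, since the Configuration Data schema is not shown in these notes:

```python
import json

# Stand-in for the downloaded Configuration Data JSON (hypothetical schema).
config = {"entityName": "my_transient_entity", "validations": [], "enrichments": []}
with open("configuration_data.json", "w") as f:
    json.dump(config, f)

# Edit the downloaded file: add a validation that must stay compatible
# with the existing data model (rule format is a made-up example).
with open("configuration_data.json") as f:
    cfg = json.load(f)

cfg["validations"].append({"field": "amount", "rule": "not_null"})

# Save the edited file, ready to be reuploaded.
with open("configuration_data.json", "w") as f:
    json.dump(cfg, f, indent=2)
```

The key point is that the edit happens offline in the JSON file; Fynapse only sees the result when the file is reuploaded.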
The Extract only incremental data option is set by default for all extracts apart from Balances. After the initial extract, only increments are extracted, which avoids duplication of records and an exponential increase in the volume of extracted data.
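Incremental extraction can be sketched with a toy model. The record shape and the high-watermark mechanism below are assumptions for illustration, not Fynapse internals:

```python
def run_extract(source, last_extracted_id):
    """Extract only records added since the previous run (the increment)."""
    increment = [rec for rec in source if rec["id"] > last_extracted_id]
    new_watermark = max((rec["id"] for rec in increment), default=last_extracted_id)
    return increment, new_watermark

source = [{"id": 1}, {"id": 2}, {"id": 3}]

batch1, mark = run_extract(source, 0)     # initial extract: all 3 records
source.append({"id": 4})
batch2, mark = run_extract(source, mark)  # next run: only the 1 new record
print(len(batch1), len(batch2))           # 3 1
```

Because each run picks up only what arrived since the previous watermark, repeated runs do not re-extract (and hence do not duplicate) earlier records.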
Accounting Rules Navigator Download
We changed the structure of the CSV file with accounting rules navigator configuration you can download from the Accounting Rules Navigator screen. Previously, the file was divided into four tabs. Now the file structure is flat, i.e. the entire configuration is on one tab.
Bulk Operations
We enhanced the configuration upload functionality. Now, when you update the configuration of an Entity with breaking changes, any data that becomes incompatible as a result of the change will be rejected during processing, and information about the rejected data will be logged in Error Management.
Update of Environment Naming Convention
As part of our commitment to continuously improving release processes, starting from the next release we are changing the naming convention of the environments listed in the Components affected section from the name of your environment to world regions. You will be able to see the name of your region in the Components affected section of the status e-mail informing you about the release.