Optimizely Configured Commerce lets you enable integrations with external systems. You can create job definitions to define all appropriate attributes used for the integration.
Additionally, you can import these jobs for ERP systems or product information management (PIM) systems. You can also move them from a pilot environment to a production environment. For example, you can import all field mappings and setup options for Infor SXe separately from the same integration job definitions for SAP.
You must identify the internal Windows Integration Service (WIS) in the Computers Allowed for Integration setting to avoid having every web server process each internal integration request. You can add multiple computer names. If a computer not in the list calls the integration service, the system sends an email alert to the administrator.
Go to Admin Console > Jobs > Job Definitions. The Job Definitions page displays all current definitions, which you can filter by Job Name in the search box. Creating a Job Definition involves four sections: Details, Steps, Parameters, and History.
Details
Setup information
- Job Name – Identifies the job, such as Rebuild Search Index.
- Job Type –
- Refresh – Pulls data from the integrated system and transcribes it into the Configured Commerce database.
- Submit – Takes a dataset from a preprocessor or submitted via code and makes it available to WIS. This job is typically used for customer creation, order creation, and payment submission.
- Report – Runs a data query (typically locally) and creates a report.
- Execute – Runs an SQL statement or stored procedure. Typically used for scheduling maintenance and archiving tasks.
- Connection – Sets the job connection with the connections defined in Integration Connection. Internal is defined by the system and used for imports, exports, and any jobs initiated directly by the web server instead of the WIS.
- Preprocessor – Runs before the job is submitted to WIS. Use the drop-down list to select the plug-in for pre-processing the job. Use None for Execution jobs.
- Integration Processor – Sends the task DTO packet to indicate to WIS what process to run. Use None for Execution jobs.
- Post Processor – Runs once the result is returned. You can only select processors that support the job type listed or select None. To run a stored procedure, select ExecuteStoredProcedure and set a job parameter called StoredProcedureName.
- Description – Describes the job's purpose and other helpful information to users.
- Use Delta Dataset – Compares the acquired data to the last saved "good" set (if any) and constructs an add/change/delete dataset. You should use this function only for refresh job types and if the integration process takes a full snapshot of the source data. The WIS compares the snapshot against the last saved snapshot and sends up the differences. For example, if you are processing flat files, the function could disable or delete records that should not be modified.
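Conceptually, the delta construction amounts to comparing two snapshots keyed by the dataset key. The sketch below is illustrative only, not the actual WIS implementation; all names and data are hypothetical:

```python
def build_delta(previous, current):
    """Compare two snapshots keyed by the dataset key and derive
    add/change/delete record sets (illustrative sketch only)."""
    adds = {k: v for k, v in current.items() if k not in previous}
    changes = {k: v for k, v in current.items()
               if k in previous and previous[k] != v}
    deletes = [k for k in previous if k not in current]
    return adds, changes, deletes

# Hypothetical data: P2 changed, P3 is new, P1 was removed at the source
prev = {"P1": {"price": 10}, "P2": {"price": 20}}
curr = {"P2": {"price": 25}, "P3": {"price": 30}}
adds, changes, deletes = build_delta(prev, curr)
```

Only the differences (adds, changes, deletes) would then be sent up for processing, rather than the full snapshot.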
- Debugging Enabled – Adds debug messages to all jobs under the connection. When disabled, debug messages only show for job definitions that have it enabled. Because enabling debugging causes jobs to run longer and collect more information than needed, you should only enable it when troubleshooting.
Notifications
- Notification Email List – Identifies the email addresses that should receive a notification email. Use a comma to separate multiple addresses.
- Notify Condition –
- Completion – Sends an email once the selected integration job has completed, regardless of success or failure.
- Success – Sends an email once the selected integration job has successfully completed.
- Failure – Sends an email once the selected integration job has completed in a status other than Success.
- Email Template – Sets the email template to use for the notification email, if generated. If you do not select a template, the system uses a default notification template.
- Linked Job – Sets another job to immediately run after the selected job has completed, depending on the link condition.
- Link Condition –
- SuccessOnly – Runs the linked job only if the current job was successful.
- SuccessOrWarning – Runs the linked job if the current job was successful or triggered a warning.
- SuccessWarningErrorOrFailure – Runs the linked job if the current job was successful, triggered a warning or error, or failed.
- SuccessWarningOrError – Runs the linked job if the current job was successful or triggered a warning or error.
Run Options
- Max Error Before Fail – Creates a threshold for error log messages. If this threshold is passed, the job execution stops, commits completed changes to the database, and displays "The job was failed because the maximum amount of allowed errors/warning ( {logCount} ) has been exceeded" in the History tab. Choosing 0 does not limit the number of errors encountered and runs the job to completion.
- Max Warnings Before Fail – Creates a threshold for warning log messages. If this threshold is passed, the job execution stops, commits completed changes to the database, and displays "The job was failed because the maximum amount of allowed errors/warning ( {logCount} ) has been exceeded" in the History tab. Choosing 0 does not limit the number of warnings encountered and runs the job to completion.
- Max Deactivation Percent – Skips the delete action if the number of records to be deleted or deactivated exceeds the specified percentage. This helps prevent accidental deletion of records. If the job has multiple steps, it only skips the delete action on the current step and executes subsequent steps. Choosing 0 uses 20% as a default.
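The threshold check described above can be illustrated with a short sketch. The function name is hypothetical; the 20% fallback for a setting of 0 comes from the description:

```python
def should_skip_delete(records_to_delete, total_records, max_deactivation_percent):
    """Return True when the delete action should be skipped for this step.
    A setting of 0 falls back to the documented 20% default. Sketch only."""
    threshold = max_deactivation_percent or 20  # 0 means "use 20%"
    if total_records == 0:
        return False
    percent = records_to_delete / total_records * 100
    return percent > threshold

# 300 of 1,000 records (30%) exceeds the 20% default, so the delete is skipped
```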
- Processing Batch Size – Allows each job to have a custom batch size or set a different default value for batch sizes. When the job runs, if there is a batch size > 0 in the job definition, the job uses this value instead of the default setting value. The default processing batch size is 50.
There is a limit of 2,100 parameters for a SQL statement. Batching helps work around this limit. However, increasing the number of records does not always improve performance. It depends on factors like the natural key's values and related tables. For a table with three parameters per record, like Specifications, the maximum batch size would be 700 (2,100 divided by 3). To optimize performance, test different batch sizes. Only jobs longer than five minutes benefit significantly from tuning the batch size. Avoid going above a batch size of 500 because if there is an issue with a record in the batch, the system switches to posting them individually, negating the benefits of batching.
Configured Commerce restricts all integration jobs to a maximum of 1000 records per batch in the FieldMap processor and does not honor sizes above that threshold.
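The parameter-limit arithmetic above can be expressed as a quick calculation. The 2,100-parameter limit and the 1,000-record cap come from the text; the function itself is an illustrative sketch:

```python
def max_batch_size(parameters_per_record, sql_parameter_limit=2100, hard_cap=1000):
    """Largest batch that stays under SQL Server's 2,100-parameter limit
    and the FieldMap processor's 1,000-record cap (illustrative only)."""
    return min(sql_parameter_limit // parameters_per_record, hard_cap)

# A table with 3 parameters per record allows 2100 // 3 = 700 records per batch;
# with 2 parameters per record the computed 1,050 is capped at 1,000.
```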
Recurrence
- Run as a Recurring Job – Marks the job as a recurring job. If turned on, when the record is saved, the system automatically creates a queued job at the next time that matches the start date/time and repeat cycle. You must run the job manually first to trigger the change.
- Start Date/Time – Selects the starting date and time for the recurring job to operate.
- End Date/Time – Selects the ending date and time for the recurring job to operate. Leaving it blank means the job has no end date.
- Advanced recurring mode – Lets you use CRON scheduling for more flexible job scheduling. Turning this on exposes the Cron Expression setting. If turned off, use the Repeat Every setting to set how often the job should repeat.
- Repeat Every – Sets how often the job repeats. Enter a value that corresponds to your selection of Minutes, Hours, Days, or Months. This setting does not adjust for Daylight Savings Time, as shown by the tooltip.
- Cron Expression – Adds a CRON string for the job.
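As an illustration, a standard five-field CRON expression covers minute, hour, day of month, month, and day of week. The exact CRON dialect supported is not specified here, so verify these examples against your environment:

```
# minute  hour  day-of-month  month  day-of-week
0 2 * * *         run every day at 2:00 AM
*/15 * * * *      run every 15 minutes
0 6 * * 1-5       run weekdays at 6:00 AM
```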
Steps
Job Definitions can have one or more individual steps. The Steps tab contains a list of the existing Job Definitions Steps. Click Add Job Definition Step to create a new step, or click Edit on an existing step to make changes.
Details: Setup
- Sequence – Represents the execution sequence used to run the steps. You should enter a value other than 0.
- Step Name – Identifies what is being done for informational purposes.
- Target Object – Defines the Configured Commerce target object for Refresh jobs.
- Connection Override – Displays the connections available that may be overridden from the one at the job definition level.
- Integration Processor Override – Instructs the WIS to use a different set of code to process this step if using a custom processor. This should only be necessary on multi-step jobs.
- Only Update Existing Records – Updates only existing records. When mapping fields, you can choose to only create or delete records and prevent the field mapper from updating existing data using this option.
Details: Delete Behavior
- Action – Defines what action to take when a record is not included in a full dataset or a delete record is sent up from a delta dataset.
- Delete Record – Attempts to delete the record. This could cause referential integrity issues or fail due to a database constraint.
- Ignore – Does not do anything.
- Set Field – Selects the field to set. This is often used to "deactivate" an item by setting its "Active" boolean value or DeactivateOn date but could set most fields to a specified value.
- Update – Updates record values.
- Field to Set – Lists the object fields that can be set. This field is only available for refresh jobs if you select Set Field for Action.
In the transcription/post processor, the system determines how to set a field based on its type. For a date field, set it to the current date. Otherwise, set it to the value specified in Field Value.
When setting Value To Set, make sure you use a valid value. For example, use 0 or 1 (or true/false) for Boolean fields. Leaving the field blank for a date uses the current date/time.
- Delete Children – Removes children for OrderHistory, InvoiceHistory, and Shipment entities. Selecting Yes deletes OrderHistoryLines, InvoiceHistoryLines, ShipmentPackages, and custom properties from the Order Header Refresh step. See Remove deleted child records via integration for more information.
Details: Flat File Options
The following fields depend on the type of Connection Override you selected and show or hide accordingly.
- File Masks – Sets the file mask for files that match. The integration connection defines the directory to scan for files. For example, if the files have a name such as ProductFeed_xxxx where xxxx is a date/time stamp, use ProductFeed*.* as the mask.
If several file names are valid for the step, you can enter multiple masks using a comma to separate them. You can also signify a subdirectory as part of the mask so that the system searches within a subdirectory, thus avoiding having to set up separate connections.
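File mask matching behaves like ordinary glob patterns, which you can approximate with Python's fnmatch. The file names, masks, and helper below are all hypothetical illustrations:

```python
from fnmatch import fnmatch

def matching_files(file_names, masks):
    """Match candidate file names against one or more comma-separated
    masks, e.g. 'ProductFeed*.*' (illustrative sketch only)."""
    patterns = [m.strip() for m in masks.split(",")]
    return [f for f in file_names if any(fnmatch(f, p) for p in patterns)]

# Hypothetical directory listing, including a file in a subdirectory
files = ["ProductFeed_20240101.csv", "OrderFeed_20240101.csv",
         "Inventory/StockFeed_20240101.csv"]
```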
- Missing File Action – Defines how the system should behave under different conditions. Set this value based on when the integration job runs, whether you expect any files to be present, and how to notify you if no files match the mask.
- Error – If the file is missing, the system issues an error message in the log with the number of errors. The job ultimately fails if there is no file.
- Warning – If the file is missing, the system issues a warning message in the log with the number of warnings. The job can still complete successfully if it does not encounter errors after the missing file condition.
- Ignore – This indicates that no files present is a normal and expected condition and generates an info message. The job ends as successful as long as there are no steps with other issues.
- Skip Header Row – Skips the first row if the flat file has a header showing the column names. The system processes all rows if you do not turn on this setting.
- Columns/Select Clause – Defines the columns the Refresh job should retrieve from the remote data source (the Select clause). For flat files, this is a simple list of columns, and the number of columns defined must match the data in the flat file to prevent an error. You do not need to map all the columns, but you do need to define them for ingestion.
You can use any functions supported by the connection. For example, you can use a SUBSTRING command or a subquery for a SQL Server.
- From/From Clause – Indicates the tables to use and the join conditions for Refresh jobs with a database connection.
- Where/Where Clause – Defines any data-based conditional expressions for the Refresh job (the Where clause). Use single quotes for static values, such as Status = 'A'. You can also conditionally include records from a flat file. You can use a parameterized value from the Application Settings by using braces for the name of the setting, such as transdt > getdate() - {ERP_Lookbackdays}.
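The brace convention for pulling Application Settings into a Where clause can be approximated as simple token substitution. This is an illustrative sketch, not the actual implementation:

```python
import re

def substitute_settings(where_clause, app_settings):
    """Replace {SettingName} tokens in a Where clause with application
    setting values, per the documented brace convention (sketch only)."""
    return re.sub(r"\{(\w+)\}",
                  lambda m: str(app_settings[m.group(1)]),
                  where_clause)

# The example clause from the text, with a hypothetical setting value
clause = "transdt > getdate() - {ERP_Lookbackdays}"
```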
- Parameterized Where – Defines an override Where Clause for when the job has a defined step parameter, such as a job that can refresh all customers or a single customer. The purpose is to use a separate Where Clause only when the job has specified parameters. The system uses the parameterized clause for any defined parameters. To use the parameter within the Where Clause, prefix the name of the parameter with an @.
Version 4.2 eliminated this capability, and Optimizely may remove this field in a future release.
- SAP Connection Type
- SAP BAPI Options – Displays options for using the SAP DotNetConnector, when applicable.
- BAPI – Represents the specific BAPI within SAP to call.
- Result Table Name – Returns Function (property list), Structure (like a single record with properties), or a Table. Enter the table name, the function name with {curly braces}, or the structure name with [square brackets].
BAPI can reference and define specific values from these three options using the same convention. If you enter a value, you do not need to enter it again in the list of properties, as the value becomes the default.
- Parameter Table Name – Consolidates parameters into a parameter table, if desired. Enter a name for the parameter table.
- BAPI Info – Represents all data fields to return from the BAPI call. The system sets the returned object names to the proper structure based on the Result Table Name. For example, if it is a function, the fields represent the function.
If the Target Object is Dataset, the system can return and access Function, Structure, and Table data. However, if the job is a refresh job with a specific target Configured Commerce object, you only need to provide a Result Table Name. The fields entered in the column list should all belong to that table, and there is no need to prefix them. You can also access any of the values returned by the BAPI call.
To retrieve Function data, enter the function return value name, like SHIP_TO_NUM. To retrieve Table data, prefix the property with the table name and a period, such as HEADER_DETAIL.SHIP_TO_NUM. You can reference any table that is valid in the return, and you do not have to specify the result table name. To retrieve Structure data, enter the structure name in curly braces, followed by a period and the structure property name, such as {HEADER_DETAIL}.SHIP_TO_NUM.
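The three reference styles can be summarized with a small helper that classifies a field reference by its syntax. This helper is illustrative only and not part of the product:

```python
def classify_bapi_reference(ref):
    """Classify a BAPI field reference by the documented naming convention:
    plain name = function return value, TABLE.FIELD = table data,
    {STRUCTURE}.FIELD = structure data. Illustrative sketch only."""
    if ref.startswith("{"):
        struct, _, field = ref.partition("}.")
        return ("structure", struct.lstrip("{"), field)
    if "." in ref:
        table, _, field = ref.partition(".")
        return ("table", table, field)
    return ("function", None, ref)
```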
Parameters
The Parameters tab displays a list of Job Definition Step Parameters. Click Add Job Definition Step Parameter to create a new Step Parameter, or click Edit on an existing Step Parameter to make changes.
The SystemSetting type of parameter in integration jobs (either Job or Step parameter) first attempts to resolve the setting name using the Name field. If it cannot find a match, it uses the Default Value field. If both attempts fail, it cannot set the value.
- Name – Defines the name of the parameter. It should be unique within the job definition.
- Value Type – Sets the value type for the parameter.
- Default Value – Lists the default for the entry. If the value type is Application Setting, the value comes from the application setting defined here.
- Prompt – Displays a prompt to users for report values.
Field mapping
Field mapping connects data from a source (such as an ERP) to Configured Commerce. This process is only necessary if you are using the FieldMap post processor.
If debugging is on, you can see a record count on every 100 records as the system runs integration jobs. If debugging is off, the system only displays a record count every five minutes within the field mapper for each thread to show the progress.
The Field Mapping tab displays a list of Job Step Field Mappings. Click Add Job Step Field Mapping to create a new mapping, or click Edit on an existing mapping to make changes. The Field Type determines what fields display.
- Field Type – Indicates in context the type of data to read (Refresh job) or write (Submit job).
- Field – Indicates a data field from the object (Submit job) or ERP source (Refresh job).
- Static Value – Sets the data value as a fixed value.
- Application Setting – Replaces the From Property with a list of valid application settings.
- Content – Defines the default values, static values, or import data values. This is a special field type used for Refresh jobs.
- Lookup – Looks up an object for a parent/child relationship. This field type is used when referencing another entity in the system and only displays target object entities. You must first select the To Property to select the From Property.
For example, refreshing products uses a valid lookup called Vendor. The "natural key" looks up the alternate entity (Vendor Number) and then embeds the underlying ID field into the product. Refreshing inventory (ProductWarehouse entity) looks up the product using the Product Number and the warehouse using the Warehouse Name.
- Child Collection – Assigns records without data, such as for cross-matrix selections. One example is WebSiteCurrency, which has no specific entity or target object. The target is Website, and the child collection is Currency. The data stream must have the natural key of the website (name) and the currency (currency code) to create the record.
You should use the collection with the fewest children to prevent performance issues. For example, rather than updating the category with a list of products, update the product with a list of categories.
- From Property – Defines the source of the data.
A Refresh job displays a list of the Select clause entries. Because functions and subselects in the query may alter the display, you can type the value to use.
- To Property – Defines the target of the data for transcription.
A Refresh job displays a list of the entries in the Target Object.
- Language – Provides a list of data elements defined in the query and a list of available language codes for Content field types. Leave the field empty to use the default.
- Persona – Provides a list of data elements defined in the query and a list of available personas for Content field types. Leave this field empty to use the default.
- Device – Provides a list of data elements defined in the query and a list of available device types for Content field types. Leave this field empty to use the default.
- Lookup Error Handling – Determines what the job should do if it cannot find the related entity: log an error, log a warning, or ignore it. This is only used for Refresh jobs and Lookup field types.
- Overwrite – Overwrites the data from the data feed during a Refresh job when set to True. When set to False, the Refresh can set an initial value but does not overwrite it afterward, preserving later maintenance. For example, the job can populate the Product Title field from the ERP without overwriting the data later.
Do not set a dataset key to overwrite.
This field does not have defined behavior for Submit jobs.
- Dataset Key – Determines if the data is a Dataset key. This only applies to Refresh jobs. The dataset key uniquely identifies dataset records for delta datasets and Refreshes. Set the flag to Yes for importing child collections to transcribe the data.
Parameters
This tab displays a list of the existing Job Definition Parameters. Click Add Job Definition Parameter to create a new Parameter, or click Edit on an existing Parameter to make changes.
Job Parameters enable a processor (pre/integration/post) to get additional information when running a job. Different processors require different entries to be valid and, if the parameter is not set up, generate an error message indicating that it is missing.
For example, a job with the CleanupExportSpreadsheets post processor requires a RetentionDays job parameter with a value type of number. The system uses the default value in the parameter unless running the job manually or creating the job from code, which can change the value.
The SystemSetting parameter type in integration jobs (either Job or Step parameter) first uses the Name field for the setting name before using the default value. If both attempts fail, it cannot set the value.
Click Manage Columns to see hidden fields.
- Sequence – Identifies the order for prompting or passing the parameters into the processors. The system hides this field by default and does not usually need to be filled in.
- Parameter Name – Defines the name of the parameter. It should be unique within the job definition.
- Value Type – Sets the value type for the parameter.
- Default Value – Sets the entry default. For value type of Application Setting, the value represents the setting name. For value type of RelativeDateTime, the value is an integer indicating the number of days forward or backwards from the current date, useful for running an integration that looks back a set number of days.
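The RelativeDateTime behavior amounts to a simple date offset from the run date. The function name below is hypothetical; the integer-days semantics come from the description above:

```python
from datetime import date, timedelta

def resolve_relative_datetime(day_offset, today=None):
    """Resolve a RelativeDateTime parameter: an integer number of days
    forward (positive) or backward (negative) from the current date.
    Illustrative sketch only."""
    today = today or date.today()
    return today + timedelta(days=day_offset)

# A default value of -7 resolves to one week before the run date
```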
- Prompt – Displays a prompt to users for running the job.
History
The History tab lists all integration jobs that have run, with important information such as scheduling dates, job run time, and status. Click View to see the job details, including the Parameters and Job Logs. You can use the search box to filter for a specific job. You can also select and delete any queued jobs to prevent processing. You cannot delete any other jobs.
The View icon displays the details of the job including when it was run, how long it took, if it was run in a chain, and the dataset size. You can also see what parameters the job set and view the detailed job log to determine the source of an error or warning.