Digital Performance Management 1.2.1 Release Notes
The following fixed issues and known issues and limitations are part of the Digital Performance Management (DPM) 1.2.1 release. Database schema changes and end-of-support information are also provided.
Fixed Issues
The following support cases are fixed in DPM 1.2.1:
Case Number
Description
16537141
Fixed an issue in Administration > Shifts & Calendars where the start time stored for a calendar in the database was being offset to the client time zone when the calendar was edited. The start time for a calendar is always 00:00:00 on the start date.
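The class of defect fixed here can be sketched as follows. The dates and the client offset are illustrative only, not DPM data:

```python
from datetime import datetime, timedelta

# Hypothetical illustration of the defect class fixed here: a calendar
# start time must always be stored as 00:00:00 on the start date. If the
# stored value is instead offset to a client time zone on edit (here a
# client at UTC-5), the midnight boundary shifts and the stored date drifts.
stored_start = datetime(2023, 3, 1, 0, 0, 0)         # correct: midnight on the start date

client_offset = timedelta(hours=-5)                  # a client editing from UTC-5
wrongly_offset = stored_start + client_offset        # defect: 2023-02-28 19:00:00

assert wrongly_offset.date() != stored_start.date()  # the start date itself has drifted
```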
16598030
Fixed an issue in Performance Analysis where the View Loss Insights button was enabled on Pareto charts, though the analysis is performed against the loss categories rather than against individual loss reasons. The button is now enabled only on waterfall charts.
16605642
Fixed an issue in Performance Analysis where the total production time was being calculated differently for the waterfall chart and the Loss Insights page.
16630901
Fixed an issue in Performance Analysis where there was a discrepancy between the loss time displayed in the waterfall chart and in the bottom panes.
16636601
Fixed an issue on the trend charts in Performance Analysis and Action Tracker where the total time loss for a single day received through data automation was exceeding 24 hours.
16661356
Fixed an issue in the shift instances creation scheduler for customer calendar exception use cases.
Known Issues and Limitations
The following known issues and limitations are present in the DPM 1.2.1 release:
Known Issue
Solution
Users should not directly modify the database.
Changes to the database schema must be performed through the supported services provided by the PTC.DBConnection.Database_TS Thing Shape and the PTC.DBConnection.Manager_TS Thing Shape. For more information, see Database Connection Building Block.
Changes to stored procedures and functions that are provided by PTC are not supported. Stored procedures and functions can be overridden on upgrade, so any changes would be lost.
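As a minimal sketch of invoking a supported service rather than modifying the database directly, the following builds a ThingWorx REST service-invocation request. The base URL, application key, and the Thing and service names passed at the call site are placeholders; consult the Database Connection Building Block documentation for the actual services exposed by the PTC.DBConnection.Database_TS and PTC.DBConnection.Manager_TS Thing Shapes:

```python
import json
import urllib.request

# Sketch only: invoking a ThingWorx service through the platform REST API
# instead of modifying the database directly. The Thing and service names
# supplied by the caller are placeholders; see the Database Connection
# Building Block documentation for the actual supported services.
def build_service_request(base_url, app_key, thing_name, service_name, params):
    """Build a POST request that invokes `service_name` on `thing_name`."""
    return urllib.request.Request(
        f"{base_url}/Thingworx/Things/{thing_name}/Services/{service_name}",
        data=json.dumps(params).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json",
            "appKey": app_key,  # ThingWorx application key for authentication
        },
        method="POST",
    )

# urllib.request.urlopen(build_service_request(...)) would send the request.
```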
After updating the ideal cycle time for a material, existing job orders for the material are not updated with the new ideal cycle time.
Workaround: If you want job orders that have not yet started production (in the Pending or Dispatched states) to use the updated ideal cycle time, cancel the existing job orders and create new ones. The new job orders will reflect the updated ideal cycle time for the material.
Job orders that have already completed or are in production (in the Running or Held states) correctly use the previous ideal cycle time.
Very small scrap values (less than 0.1) appear as a blank in the Edit Loss Event window. The values appear as expected in the Event Log.
This issue will be addressed in a future release.
The Equipment List intermittently does not display if Administration is selected from the navigation menu more than once in a row.
Workaround: To view the Equipment List, select another item from the menu, or another page within Administration, then select Administration or Equipment List again.
If a user is entering a value in the Enter Quantity field on the Time Loss Accounting window, and an automated event is received before that quantity is saved, the quantity is cleared.
This issue will be addressed in a future release.
When data automation is configured, if the ThingWorx server time is behind the time on the server where Kepware is installed, automated events coming from Kepware are ignored.
This issue will be addressed in a future release.
Some error messages in Administration are not fully externalized. Portions of these messages display in English even when the user interface is viewed in another language.
This issue will be addressed in a future release.
When you sort by two or more columns in the tables available in Reason Trees, Job Orders, Materials, Metrics, Shifts and Calendars, and Scorecard, multi-level sorting behaves differently on different tables.
This issue will be addressed in a future release.
After upgrading to DPM 1.2, some customized administration mashups opened in the Google Chrome and Microsoft Edge browsers display unnecessary scrollbars.
Workaround: To remove the scrollbars, update your customized mashups to align with the updated mashups that are provided by PTC.
Automated events can be logged against an incorrect job order when the Kepware clock and the ThingWorx server clock are not synchronized.
Workaround: Ensure that the system clocks (the "Date & time settings" on Windows OS) on the Kepware server and the ThingWorx server are synchronized within 5 seconds and that this synchronization is maintained over time.
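The 5-second tolerance from the workaround above can be checked with a sketch like the following. How you obtain each server's current time is environment-specific (for example, an NTP query or a remote shell); the timestamps below are placeholders for illustration:

```python
from datetime import datetime, timezone

# Sketch of a skew check between the Kepware and ThingWorx server clocks.
# The timestamps here are placeholders; obtaining each server's actual
# time is environment-specific.
MAX_SKEW_SECONDS = 5  # tolerance recommended in the workaround above

def clocks_within_tolerance(kepware_time, thingworx_time, tolerance=MAX_SKEW_SECONDS):
    """Return True if the two clocks differ by no more than `tolerance` seconds."""
    return abs((kepware_time - thingworx_time).total_seconds()) <= tolerance

kepware_now = datetime(2024, 1, 15, 12, 0, 3, tzinfo=timezone.utc)
thingworx_now = datetime(2024, 1, 15, 12, 0, 0, tzinfo=timezone.utc)
assert clocks_within_tolerance(kepware_now, thingworx_now)  # 3 s skew is within tolerance
```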
Disabled reason codes are displayed in the Select Loss Reason pane of the Edit Loss Event window, which can lead to erroneous entries. A loss event can be logged for a disabled reason code belonging to an incorrect reason category.
This issue will be addressed in a future release.
Scheduler, timer, or other pacemaker data processing is not occurring. For example, shift instances are not being created, or automated events are not being processed. A possible cause is the lock property being set to true on a related entity.
Workaround: Reset the lock property on the related entity to false.
The following table lists the entities with lock properties and their property names:
Entity
Property
PTC.OperationKPIImpl.EventsAggregationScheduler
processingLock
PTC.OperationKPI.ShiftInstanceCreationScheduler
processingLock
PTC.OperationKPI.MonitoringScheduler
processingLock
PTC.OperationKPI.AutomationPurgeScheduler
processingLock
Any model (equipment) that implements or inherits PTC.TimeLoss.ModelLogic_TS
PTCAnalyticProcessingLock
Any model (equipment) that implements or inherits PTC.OperationKPI.AutomationEventsModelLogic_TS
PTCAutomationEventProcessingLock
Any model (equipment) that implements or inherits PTC.MfgModel.AnalysisModelLogic_TS
PTCProfileServiceLock
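Resetting a lock property to false can also be done through the ThingWorx REST API, as sketched below (the property can likewise be reset from Composer). The base URL and application key are placeholders:

```python
import json
import urllib.request

# Sketch of resetting a lock property to false through the ThingWorx REST
# API. The base URL and application key are placeholders; the Thing and
# property names come from the table above.
def build_reset_lock_request(base_url, app_key, thing_name, property_name):
    """Build a PUT request that sets `property_name` on `thing_name` to false."""
    return urllib.request.Request(
        f"{base_url}/Thingworx/Things/{thing_name}/Properties/{property_name}",
        data=json.dumps({property_name: False}).encode("utf-8"),
        headers={"Content-Type": "application/json", "appKey": app_key},
        method="PUT",
    )

req = build_reset_lock_request(
    "https://twx.example.com", "my-app-key",
    "PTC.OperationKPIImpl.EventsAggregationScheduler", "processingLock",
)
# urllib.request.urlopen(req) would send the request.
```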
When appending data to a data set for time loss analytics, the first production block in the appended data for a given work center does not include values for time losses from the previous production block as expected. Instead, the following columns are empty in the appended data for those production blocks:
TimeLossScrapPrevious
TimeLossUnscheduledPrevious
TimeLossPlannedDowntimePrevious
TimeLossUnplannedDowntimePrevious
TimeLossChangeoverPrevious
TimeLossUnaccountedTimePrevious
TimeLossSpeedLossPrevious
TimeLossSmallStopsPrevious
This issue will be addressed in a future release.
When new work centers, materials, or shifts are added to an area, the existing time loss analytics data set for the area must be deleted and a new data set created. You can delete and recreate a data set from Area Settings > Pipeline Scheduling Tab > Create Manual Push > Delete and Recreate Data Set.
This issue will be addressed in a future release.
Loss time added for scrap can result in more than 24 hours of total time loss for a single day.
When adding scrap to a job order, either by entering scrap loss events or when scrap counts are received through data automation, it is possible to over-account for scrap in a given production block. As a result, while the total time loss for the job order is accurate, it is possible for more than 24 hours of total time loss to be logged for the day to which the scrap is added.
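A sanity check for the condition described above can be sketched as summing logged time losses per day and flagging any day whose total exceeds 24 hours. The event records below are illustrative, not a DPM data format:

```python
from collections import defaultdict

# Sketch of a per-day over-accounting check: sum logged loss hours by day
# and flag any day whose total exceeds 24 hours. The (date, hours) records
# are illustrative placeholders, not a DPM data format.
def days_over_24h(loss_events):
    """loss_events: iterable of (date_string, loss_hours) pairs."""
    totals = defaultdict(float)
    for day, hours in loss_events:
        totals[day] += hours
    return {day: total for day, total in totals.items() if total > 24.0}

events = [("2024-01-10", 20.0), ("2024-01-10", 6.5), ("2024-01-11", 8.0)]
assert days_over_24h(events) == {"2024-01-10": 26.5}  # over-accounted day flagged
```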
For sites or users in time zones that observe Daylight Saving Time (DST), query results and calculations that include the hour that is "lost" or "gained" during the change between DST and Standard Time may be off by an hour's worth of data.
This issue will be addressed in a future release.
For performance reasons, DPM aggregates data in one-hour chunks, starting at midnight UTC. In Performance Analysis and Scorecard, this can result in query result inaccuracies for time zones with offsets that are not in full-hour increments from UTC.
This issue can be seen in the following ways:
Discrepancies between waterfall and Pareto chart data. The waterfall chart uses aggregated data in queries, while Pareto charts query live data. The Pareto chart should be considered the more accurate of the two values.
24.5 hours of unaccounted time shown for a day at the edge of the query time frame, if there was unplanned production during that time and the user is querying from a time zone with a 30-minute offset rather than a full-hour offset.
An extra day appears at the start or end of the trend chart for the query period, representing a small amount of data, such as 30 or 45 minutes. Because of data aggregation, the query offset uses full hours, even for time zones with offsets that are not in full hour increments from UTC.
For example, if the time zone is +5:30 UTC, the offset used by the query is +5:00 UTC. This means that rather than querying from midnight to midnight in that time zone, the data is actually queried from 23:30 to 23:30 in that time zone. As a result, a data point is displayed on the trend chart for the 30 minutes of the day prior to the start of the query period.
This issue will be addressed in a future release.
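The offset truncation in the example above can be sketched as follows. This is an illustration of the arithmetic only, not DPM's implementation, and it is shown for positive (east-of-UTC) offsets such as the +5:30 case:

```python
from datetime import timedelta

# Sketch of the offset truncation described above: aggregation buckets
# start on full hours from midnight UTC, so a positive time zone offset is
# effectively truncated to a whole hour and the local query window shifts
# by the remainder. Illustration only, not DPM's implementation.
def effective_local_window(offset_minutes):
    """Return (query_offset_minutes, local_window_start) for a positive tz offset."""
    truncated = (offset_minutes // 60) * 60            # e.g. +5:30 becomes +5:00
    remainder = offset_minutes - truncated             # minutes lost to truncation
    window_start = timedelta(hours=24) - timedelta(minutes=remainder) if remainder else timedelta()
    return truncated, window_start

# UTC+5:30 is queried as UTC+5:00, so the window runs 23:30 to 23:30 local time.
offset, start = effective_local_window(5 * 60 + 30)
assert offset == 5 * 60
assert start == timedelta(hours=23, minutes=30)
```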
Database Schema Changes
The following database schema changes have been made in DPM 1.2.1:
Database Table
Change
AggregatedJobOrder
New column added: overtimeDuration
End-of-Support Information
For information on content that has been deprecated in DPM 1.2.1, see Deprecated Entities, Services, and Other Content.