As of ThingWorx Platform version 8.5.0, DSE is no longer for sale and will not be supported in a future release. Refer to the End of Sale article for more information.

Getting started with DSE requires you to register, install, and configure DSE. Most of this process is performed independently of ThingWorx and is documented here.
For streams, value streams, and data tables, you can configure bucket settings. These settings override the DSE persistence provider instance configuration.
**Connection Information**

Name | Default Value | Description
---|---|---
Cassandra Cluster Hosts | 192.168.234.136,192.168.234.136 | The IP addresses or host names of the Cassandra cluster, as configured during DSE setup.
Cassandra Cluster Port | 9042 | The port for the Cassandra cluster, as configured during DSE setup.
Cassandra User Name | n/a | Optional unless authentication is enabled on the cluster; in that case, this field is required.
Cassandra Password | n/a | Optional unless authentication is enabled on the cluster; in that case, this field is required.
Cassandra Keyspace Name | thingworxnd | The keyspace in which ThingWorx data is stored. Similar to a schema in a relational database.
Solr Cluster URL | http://localhost | If data tables are being used, the IP address or fully qualified host name (including the domain) configured during DSE setup.
Solr Cluster Port | 8983 | If data tables are being used, the port configured during DSE setup.
Cassandra Keyspace Settings | replication = {'class':'NetworkTopologyStrategy', 'Cassandra':1, 'Solr':1} | Dependent on the Cassandra cluster configuration created during DSE setup. Primarily defines the data centers used and their replication factors (refer to http://datastax.com/documentation/cql/3.1/cql/cql_reference/create_keyspace_r.html for details). If the administrators created the keyspace manually, these settings should match the manually created keyspace settings.
Cassandra Consistency Levels | {'Cluster' : { 'read' : 'ONE', 'write' : 'ONE' }} | The read and write consistency levels, that is, the number of nodes that must acknowledge a read or write.
CQL Query Result Limit | 5000 | The maximum number of rows returned by a Cassandra Query Language (CQL) query. This improves the stability of ThingWorx by preventing large result sets that would cause performance issues in the platform.
Keep Connection Alive | true | Keeps the connections to the Cassandra cluster alive, especially across firewalls, where inactive connections could otherwise be dropped.
Connection Timeout (Millis) | 30000 | The initial connection timeout in milliseconds. Depends on the network latency between ThingWorx and the Cassandra cluster.
Compression Algorithm | none | The compression used when ThingWorx sends data to the cluster. There are three options: LZ4 compression, Snappy compression, or no compression. If the network bandwidth between ThingWorx and the Cassandra cluster is low, using compression can increase throughput.
Maximum Query Retries | 3 | The maximum number of retries enabled for queries.
Local Core Connections | 4 | The minimum number of local connections available for reads and writes.
Local Max Connections | 16 | The maximum number of local connections available for reads and writes.
Remote Core Connections | 2 | The minimum number of remote connections available for reads and writes.
Remote Max Connections | 16 | The maximum number of remote connections available for reads and writes.
Enable Tracing | false | Enables query tracing in the logs. Can be enabled for debugging.
Max Async Requests | 1000 | The maximum number of asynchronous requests.
**Classic Stream Settings**

Name | Default Value | Description
---|---|---
Cache Initial Size | 10000 | The initial cache size. Dependent on the number of sources.
Cache Maximum Size | 100000 | The maximum cache size. Controls memory usage.
Cache Concurrency | 24 | The number of threads that can access the cache at the same time. At a minimum, this should reflect the value set for Remote Max Connections.

**Classic Stream Defaults**

Name | Default Value | Description
---|---|---
Source Bucket Count | 1000 | Sources can be put into buckets. The number of buckets determines the number of queries that need to be executed. For example, if you have 100,000 sources, this field determines how many buckets they are distributed across.
Time Bucket Size (Hours) | 24 | The time span (in hours) covered by each bucket. Dependent on how the source bucket size is set up. For example, if the Time Bucket Size is set to 24, a new bucket is created every 24 hours. The goal is to avoid exceeding 2 million data points per bucket, so for a data ingestion rate of R writes per second per value stream or classic stream: Time Bucket Size = 2,000,000 / (R * 60 * 60).
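The sizing rule for Time Bucket Size can be sketched as a small calculation. This is an illustrative helper, not a ThingWorx API; the 2-million-point target and the rate R come from the description above:

```python
import math

TARGET_POINTS_PER_BUCKET = 2_000_000  # sizing goal from the table above


def time_bucket_size_hours(ingest_rate_per_second: float) -> int:
    """Largest whole number of hours per bucket that keeps a bucket
    at or under ~2 million data points at the given ingestion rate."""
    points_per_hour = ingest_rate_per_second * 60 * 60
    return max(1, math.floor(TARGET_POINTS_PER_BUCKET / points_per_hour))


# At ~23 writes/second, a 24-hour bucket holds 23 * 3600 * 24 = 1,987,200
# points, just under the 2-million target:
print(time_bucket_size_hours(23))  # 24
```

At very high ingestion rates the result is clamped to a minimum of 1 hour, since the setting is expressed in whole hours.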
**Data Table Defaults**

Name | Default Value | Description
---|---|---
Data Table Bucket Count | 3 | A data table can be split into buckets, which allows the data table to be spread across DSE nodes. A value higher than the number of nodes in the cluster is recommended so that data can spread further as the number of nodes increases with load. The other factor to consider is the number of rows expected in the data table; consider limiting each bucket to about 200,000 rows. This setting is the default; the bucket count can also be specified per data table.
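The two guidelines above (stay above the node count, and keep roughly 200,000 rows per bucket) can be combined into a rough estimate. The helper name and the node-count margin are illustrative assumptions, not part of the platform:

```python
import math

MAX_ROWS_PER_BUCKET = 200_000  # row limit suggested in the description above


def data_table_bucket_count(expected_rows: int, cluster_nodes: int) -> int:
    """Estimate a bucket count that keeps each bucket at or under
    ~200,000 rows while staying above the current node count, so data
    can still spread as the cluster grows (illustrative sketch only)."""
    by_rows = math.ceil(expected_rows / MAX_ROWS_PER_BUCKET)
    return max(by_rows, cluster_nodes + 1)


# 1,000,000 expected rows on a 3-node cluster -> 5 buckets of ~200,000 rows:
print(data_table_bucket_count(expected_rows=1_000_000, cluster_nodes=3))  # 5
```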
**Value Stream Settings**

Name | Default Value | Description
---|---|---
Cache Initial Size | 10000 | The initial cache size. Dependent on the number of sources multiplied by the number of properties per source.
Cache Maximum Size | 100000 | The maximum cache size. Controls memory usage.
Cache Concurrency | 24 | The number of threads that can access the cache at the same time.

**Value Stream Defaults**

Name | Default Value | Description
---|---|---
Source Bucket Count | 1000 | Sources can be put into buckets. The number of buckets determines the number of queries that need to be executed. For example, if you have 100,000 sources, this field determines how many buckets they are distributed across.
Property Bucket Count | 1000 | The number of buckets depends on the number of properties per value stream and on the query pattern. If queries span multiple properties, a smaller bucket size yields the best performance.
Time Bucket Size (Hours) | 24 | The time span (in hours) covered by each bucket. Dependent on how the source bucket size is set up. For example, if the Time Bucket Size is set to 24, a new bucket is created every 24 hours.