Indexer Component
The Indexer component enables the creation and modification of search indexes.
Syntax
nsoftware.IPWorksSearch.Indexer
Remarks
The Indexer component provides a simple way to create and manage Lucene 4.8 search indexes. The component is able to perform indexing operations such as adding, updating, and deleting index documents. It supports indexing operations with a wide variety of analyzers to allow for flexible control over the indexing process.
Preparing the Index
The first step in using the component is to initialize and prepare the search index for modifications. Set the IndexPath property to specify the location of the search index. This path can point to an existing index, or to a new directory where the index will be stored. Once the property is set, call the OpenIndex method to load the index. If IndexPath does not point to a pre-existing search index, the component will create a new one at the specified location before loading it.
component.IndexPath = "PATH\\TO\\INDEX"; // Specify the path to the search index
component.OpenIndex(); // Load the index
It is important to note that the OpenIndex method will load a snapshot of the index at the time it was opened. Any external changes made to the index afterwards will not be visible to the component until the method is called again. OpenIndex can be called multiple times to reload the index and reflect its latest changes in indexing operations.
Creating Documents
Documents in the index are composed of fields that are managed through the Fields collection. Fields can be added to this collection via the AddDocumentField method.
string name = "title"; // The identifier of the field
bool store = true; // Determines if the field's contents should be stored in the index
int type = (int)TFieldTypes.ftText; // The type of data the field contains
string text = "Sample document field"; // The text content of the field
int analyzerType = (int)TSearchIndexAnalyzerTypes.atStandard; // Controls the text processing performed on the field's text content
component.AddDocumentField(name, store, type, text, analyzerType); // Add the field to the Fields collection
After populating the collection with the desired fields, the IndexDocument method can be called to create a new document from these fields and add it to the search index.
// Add two fields
component.AddDocumentField("field1", true, (int)TFieldTypes.ftText, "Sample content for field 1", (int)TSearchIndexAnalyzerTypes.atStandard);
component.AddDocumentField("field2", true, (int)TFieldTypes.ftText, "Sample content for field 2", (int)TSearchIndexAnalyzerTypes.atStandard);
component.IndexDocument(); // Add a document with these fields to the search index
Deleting Documents
Documents that contain a specific field can be deleted from the index via the Delete method.
// Deletes every document with a field that has a name of "field1" and a value of "Sample content for field 1"
component.Delete("field1", "Sample content for field 1");
To delete all of the documents from the search index, call the DeleteAll method.
// Delete all of the documents from the index
component.DeleteAll();
Saving the Index
After modifying the index, call the CloseIndex method to commit the changes that were made and save them to disk.
component.CloseIndex();
Property List
The following is the full list of the properties of the component with short descriptions. Click on the links for further details.
Analyzer | The global analyzer for the search index. |
Fields | A collection of document fields for updating and creating documents. |
IndexPath | The path to the search index. |
Method List
The following is the full list of the methods of the component with short descriptions. Click on the links for further details.
AddDocumentField | Creates a document field. |
CloseIndex | Commits and saves the changes made to the search index. |
Config | Sets or retrieves a configuration setting. |
Delete | Deletes documents from the search index. |
DeleteAll | Deletes all of the documents from the search index. |
IndexDocument | Creates a new document and adds it to the search index. |
OpenIndex | Opens an existing search index or creates a new one. |
Reset | Resets the component. |
Update | Updates documents in the search index. |
Event List
The following is the full list of the events fired by the component with short descriptions. Click on the links for further details.
Error | Fires to provide information about errors during indexing. |
Log | This event fires once for each log message. |
Config Settings
The following is a list of config settings for the component with short descriptions. Click on the links for further details.
LogLevel | The level of detail that is logged through the Log event. |
RAMBufferSize | The maximum amount of memory that can be used for caching index changes. |
UseCompoundFile | Whether or not to store index files in the Lucene Compound File Format. |
BuildInfo | Information about the product's build. |
GUIAvailable | Whether or not a message loop is available for processing events. |
LicenseInfo | Information about the current license. |
MaskSensitiveData | Whether sensitive data is masked in log messages. |
UseFIPSCompliantAPI | Tells the component whether or not to use FIPS certified APIs. |
UseInternalSecurityAPI | Whether or not to use the system security libraries or an internal implementation. |
Analyzer Property (Indexer Component)
The global analyzer for the search index.
Syntax
public IndexerAnalyzers Analyzer { get; set; }
enum IndexerAnalyzers { atStandard, atStop, atWhitespace, atSimple, atEmail }
Public Property Analyzer As IndexerAnalyzers
Enum IndexerAnalyzers
   atStandard
   atStop
   atWhitespace
   atSimple
   atEmail
End Enum
Default Value
1
Remarks
Specifies the global analyzer for the document fields in the search index. Individual document fields may specify their own local analyzer types to override this global one. Please refer to the AnalyzerType field of the DocumentField type for more details.
Possible values are:
0 (atNone) | Does not correspond to any analyzer. |
1 (atStandard) | The most commonly used analyzer. Breaks down text into tokens based on the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29. Removes stop words and changes all letters to lowercase. Does not recognize URLs or emails. For example, the text This is a SAMPLE sentence. support@nsoftware.com https://www.nsoftware.com 1234 will be broken down into the following list of tokens: sample, sentence, support, nsoftware.com, https, www.nsoftware.com, 1234. |
2 (atStop) | The same as atSimple, but this analyzer removes stop words. For example, the text This is a SAMPLE sentence. support@nsoftware.com https://www.nsoftware.com 1234 will be broken down into the following list of tokens: sample, sentence, support, nsoftware, com, https, www, nsoftware, com. |
3 (atWhitespace) | Breaks down text into tokens whenever it encounters a whitespace character without applying any further processing to the input text. For example, the text This is a SAMPLE sentence. support@nsoftware.com https://www.nsoftware.com 1234 will be broken down into the following list of tokens: This, is, a, SAMPLE, sentence., support@nsoftware.com, https://www.nsoftware.com, 1234. |
4 (atSimple) | Breaks down text into tokens based on anything that is not a letter. This analyzer completely discards non-letter characters and changes all letters to lowercase. For example, the text This is a SAMPLE sentence. support@nsoftware.com https://www.nsoftware.com 1234 will be broken down into the following list of tokens: this, is, a, sample, sentence, support, nsoftware, com, https, www, nsoftware, com. |
5 (atEmail) | The same as atStandard, but this analyzer recognizes URLs and emails. For example, the text This is a SAMPLE sentence. support@nsoftware.com https://www.nsoftware.com 1234 will be broken down into the following list of tokens: sample, sentence, support@nsoftware.com, https://www.nsoftware.com, 1234. |
Analyzers and Tokenization
Before text is stored in a search index, it goes through an analyzer that breaks the text down into smaller, searchable parts known as tokens. The rules that determine how text is broken down into distinct tokens are defined by the analyzer type. The way tokens are created directly affects the precision and relevance of search results. For example, if an analyzer does not break down the terms "WORD" and "word" as the same value, searches for one may not bring up results for the other.
Stop Words
Words that are omitted from text processing because of their lack of significant meaning are known as stop words. When the analyzer type is set to atStandard, atStop, or atEmail, stop words will be completely ignored by the analyzer to improve search performance and relevance. The full list of stop words that are ignored by these analyzers can be found below:
a, an, and, are, as, at, be, but, by, for, if, in, into, is, it, no, not, of, on, or, such, that, the, their, then, there, these, they, this, to, was, will, with
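As an illustration, the atSimple and atStop rules described above can be approximated in a few lines of Python. This is a conceptual sketch of the documented tokenization behavior, not the component's actual implementation:

```python
import re

# Stop words as listed in this documentation
STOP_WORDS = {
    "a", "an", "and", "are", "as", "at", "be", "but", "by", "for", "if",
    "in", "into", "is", "it", "no", "not", "of", "on", "or", "such",
    "that", "the", "their", "then", "there", "these", "they", "this",
    "to", "was", "will", "with",
}

def simple_tokenize(text):
    """atSimple approximation: split on non-letters, lowercase everything."""
    return [t.lower() for t in re.split(r"[^a-zA-Z]+", text) if t]

def stop_tokenize(text):
    """atStop approximation: atSimple plus stop-word removal."""
    return [t for t in simple_tokenize(text) if t not in STOP_WORDS]

sample = "This is a SAMPLE sentence. support@nsoftware.com https://www.nsoftware.com 1234"
print(simple_tokenize(sample))
print(stop_tokenize(sample))
```

Running this on the sample sentence reproduces the atSimple and atStop token lists shown in the table above: "This", "is", and "a" survive atSimple (lowercased) but are dropped by atStop.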
This property is not available at design time.
Fields Property (Indexer Component)
A collection of document fields for updating and creating documents.
Syntax
Remarks
Contains a list of document fields that make up the document that gets added to the search index when calling IndexDocument or Update. A document field can be added by accessing this property directly, or by calling the AddDocumentField method.
This property is not available at design time.
Please refer to the Field type for a complete list of fields.
IndexPath Property (Indexer Component)
The path to the search index.
Syntax
Default Value
""
Remarks
The filesystem path to the search index. This property must be specified before calling the OpenIndex method.
This value can either be a path to a pre-existing search index in the Lucene 4.8 index format, or a path to a directory where the index will be stored.
Loading or Creating a Search Index
If IndexPath points to a pre-existing search index, it will be loaded by the component when OpenIndex is called. Otherwise, a call to OpenIndex will cause the component to create a new search index and store it in the specified directory. If the specified directory does not exist, the component will attempt to create a new one at the specified location before creating the search index.
Example:
component.IndexPath = "PATH\\TO\\INDEX"; // Specify the path to the search index
component.OpenIndex(); // Load the index
// ... Perform operations on the index ...
component.CloseIndex();
Relative and Absolute Paths
If the path value begins with a / or a drive letter such as D:/, it is considered an absolute path. The component will interpret any other value as a relative path to resolve in relation to the current directory.
This property is not available at design time.
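The documented resolution rule can be modeled in Python. This is illustrative only; the component's actual path handling may differ in edge cases:

```python
import os

def resolve_index_path(path, current_dir):
    # Documented rule: a value beginning with "/" or a drive letter such
    # as "D:/" is absolute; any other value is relative to current_dir.
    is_absolute = path.startswith("/") or (
        len(path) >= 2 and path[0].isalpha() and path[1] == ":"
    )
    return path if is_absolute else os.path.join(current_dir, path)

print(resolve_index_path("/var/indexes/main", "/home/app"))  # absolute, kept as-is
print(resolve_index_path("D:/indexes/main", "/home/app"))    # drive letter, kept as-is
print(resolve_index_path("indexes/main", "/home/app"))       # resolved relative to current_dir
```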
AddDocumentField Method (Indexer Component)
Creates a document field.
Syntax
Remarks
Creates a document field and adds it to the Fields collection.
Name specifies the name of the document field. This value acts as an identifier for the field and does not have to be unique.
Store determines if the field Text value will be stored in the index. Fields that are not stored in the index can be searched, but their full text contents will not be retrievable in search results. This value is only applicable when the field Type is set to ftText.
Type indicates the type of field that will be created. Its value determines how the field is indexed. Possible values are:
0 (ftText) | Contains text that needs to be tokenized. This is most commonly used for storing human-readable text, such as bodies of text and article titles. When this field is stored to the index, its Text value will be broken down and processed depending on the rules defined by the Analyzer. |
1 (ftString) | Contains text that does not need to be tokenized. The field Text will be treated as a single term. This field type is often used for identifiers, usernames, or text that should only be retrieved by specifying its exact value in a search query. |
2 (ftInt32) | Contains a 32-bit integer value. Fields of this type can only be searched with numeric range queries. |
3 (ftInt64) | A field that contains a 64-bit integer value. Fields of this type can only be searched with numeric range queries. |
4 (ftFloat) | A field that contains a single-precision floating-point number. Fields of this type can only be searched with numeric range queries. |
5 (ftDouble) | A field that contains a double-precision floating-point number. Fields of this type can only be searched with numeric range queries. |
Text specifies the text contents of the field.
AnalyzerType specifies the analyzer for the field. This value determines how the field Text is analyzed and broken down into tokens when it is indexed. This parameter is only applicable when the field Type is set to ftText. Otherwise, this parameter is ignored.
Possible values are:
0 (atNone) | Does not correspond to any analyzer. |
1 (atStandard) | The most commonly used analyzer. Breaks down text into tokens based on the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29. Removes stop words and changes all letters to lowercase. Does not recognize URLs or emails. For example, the text This is a SAMPLE sentence. support@nsoftware.com https://www.nsoftware.com 1234 will be broken down into the following list of tokens: sample, sentence, support, nsoftware.com, https, www.nsoftware.com, 1234. |
2 (atStop) | The same as atSimple, but this analyzer removes stop words. For example, the text This is a SAMPLE sentence. support@nsoftware.com https://www.nsoftware.com 1234 will be broken down into the following list of tokens: sample, sentence, support, nsoftware, com, https, www, nsoftware, com. |
3 (atWhitespace) | Breaks down text into tokens whenever it encounters a whitespace character without applying any further processing to the input text. For example, the text This is a SAMPLE sentence. support@nsoftware.com https://www.nsoftware.com 1234 will be broken down into the following list of tokens: This, is, a, SAMPLE, sentence., support@nsoftware.com, https://www.nsoftware.com, 1234. |
4 (atSimple) | Breaks down text into tokens based on anything that is not a letter. This analyzer completely discards non-letter characters and changes all letters to lowercase. For example, the text This is a SAMPLE sentence. support@nsoftware.com https://www.nsoftware.com 1234 will be broken down into the following list of tokens: this, is, a, sample, sentence, support, nsoftware, com, https, www, nsoftware, com. |
5 (atEmail) | The same as atStandard, but this analyzer recognizes URLs and emails. For example, the text This is a SAMPLE sentence. support@nsoftware.com https://www.nsoftware.com 1234 will be broken down into the following list of tokens: sample, sentence, support@nsoftware.com, https://www.nsoftware.com, 1234. |
Analyzers and Tokenization
Before text is stored in a search index, it goes through an analyzer that breaks the text down into smaller, searchable parts known as tokens. The rules that determine how text is broken down into distinct tokens are defined by the analyzer type.The way tokens are created directly affects the precision and relevance of search results. For example, if an analyzer does not break down the terms "WORD" and "word" as the same value, searches for one may not bring up results for the other.
Example (Create a new document with a single field):
// text field
component.AddDocumentField("company", true, (int)TFieldTypes.ftText, "nsoftware", (int)SearchIndexAnalyzerTypes.atStandard);
// create a new document from the added fields and add it to the index
component.IndexDocument();
Example (Create a new document with multiple fields):
// text field
component.AddDocumentField("author", true, (int)TFieldTypes.ftText, "Generic Design Patterns", (int)SearchIndexAnalyzerTypes.atStandard);
// string field
component.AddDocumentField("code", true, (int)TFieldTypes.ftString, "GDP2024", (int)SearchIndexAnalyzerTypes.atStandard);
// int field
component.AddDocumentField("edition", true, (int)TFieldTypes.ftInt32, "2", (int)SearchIndexAnalyzerTypes.atStandard);
// create the document and add it to the index
// the added document will be composed of all of the fields in the Fields collection
component.IndexDocument();
CloseIndex Method (Indexer Component)
Commits and saves the changes made to the search index.
Syntax
public void CloseIndex();
Public Sub CloseIndex()
Remarks
Commits the changes made to the search index and saves them to the location specified by IndexPath.
Config Method (Indexer Component)
Sets or retrieves a configuration setting.
Syntax
Remarks
Config is a generic method available in every component. It is used to set and retrieve configuration settings for the component.
These settings are similar in functionality to properties, but they are rarely used. In order to avoid "polluting" the property namespace of the component, access to these internal properties is provided through the Config method.
To set a configuration setting named PROPERTY, you must call Config("PROPERTY=VALUE"), where VALUE is the value of the setting expressed as a string. For boolean values, use the strings "True", "False", "0", "1", "Yes", or "No" (case does not matter).
To read (query) the value of a configuration setting, you must call Config("PROPERTY"). The value will be returned as a string.
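The set/query convention can be sketched with a toy model in Python. The helper below is hypothetical and exists purely to illustrate the documented "PROPERTY=VALUE" string format, not the component's internals:

```python
def config(settings, command):
    # "NAME=VALUE" sets the setting; "NAME" alone queries it.
    # All values are passed and returned as strings.
    if "=" in command:
        name, value = command.split("=", 1)
        settings[name] = value
        return value
    return settings.get(command, "")

settings = {}
config(settings, "LogLevel=2")        # set LogLevel to "2"
print(config(settings, "LogLevel"))   # query it back
```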
Delete Method (Indexer Component)
Deletes documents from the search index.
Syntax
Remarks
Deletes documents that contain a specific field from the search index.
FieldName specifies the name of the field that will be used to determine the documents that will be deleted.
FieldValue specifies the exact text content of the field that will be used to determine the documents that will be deleted. A document must have a field with this exact value and the name specified in the FieldName parameter for it to be affected by this method.
Example:
// Delete every document in the index that has a field named "content" with a value of "Sample data"
component.Delete("content", "Sample data");
DeleteAll Method (Indexer Component)
Deletes all of the documents from the search index.
Syntax
public void DeleteAll();
Public Sub DeleteAll()
Remarks
Deletes all of the documents from the search index.
IndexDocument Method (Indexer Component)
Creates a new document and adds it to the search index.
Syntax
public void IndexDocument();
Public Sub IndexDocument()
Remarks
Creates a new document and adds it to the search index. Its document fields will be the same as the ones in the Fields collection when this method is called.
Example (Create a new document with a single field):
// text field
component.AddDocumentField("company", true, (int)TFieldTypes.ftText, "nsoftware", (int)SearchIndexAnalyzerTypes.atStandard);
// create a new document from the added fields and add it to the index
component.IndexDocument();
Example (Create a new document with multiple fields):
// text field
component.AddDocumentField("author", true, (int)TFieldTypes.ftText, "Generic Design Patterns", (int)SearchIndexAnalyzerTypes.atStandard);
// string field
component.AddDocumentField("code", true, (int)TFieldTypes.ftString, "GDP2024", (int)SearchIndexAnalyzerTypes.atStandard);
// int field
component.AddDocumentField("edition", true, (int)TFieldTypes.ftInt32, "2", (int)SearchIndexAnalyzerTypes.atStandard);
// create the document and add it to the index
// the added document will be composed of all of the fields in the Fields collection
component.IndexDocument();
OpenIndex Method (Indexer Component)
Opens an existing search index or creates a new one.
Syntax
public void OpenIndex();
Public Sub OpenIndex()
Remarks
Loads the search index located at IndexPath and prepares it for modifications. This method should be called at least once before making changes to the search index via IndexDocument, Delete, DeleteAll, or Update. To save the changes and close the index, call CloseIndex.
Loading or Creating a Search Index
If IndexPath points to a pre-existing search index, it will be loaded by the component when OpenIndex is called. Otherwise, a call to OpenIndex will cause the component to create a new search index and store it in the specified directory. If the specified directory does not exist, the component will attempt to create a new one at the specified location before creating the search index.
Example:
component.IndexPath = "PATH\\TO\\INDEX"; // Specify the path to the search index
component.OpenIndex(); // Load the index
// ... Perform operations on the index ...
component.CloseIndex();
Index Updates
When this method is called, the component will load a snapshot of the index at the time it was opened. Any external changes made to the index afterwards will not be reflected in subsequent indexing operations until this method is called again. This method can be called multiple times to reload the index and reflect its latest changes in indexing operations.
Example:
component.OpenIndex(); // Load the search index
// ... Perform operations on the index ...
component.OpenIndex(); // Reload the index to reflect any new updates
Reset Method (Indexer Component)
Resets the component.
Syntax
public void Reset();
Public Sub Reset()
Remarks
Resets the component's properties to their default values.
Update Method (Indexer Component)
Updates documents in the search index.
Syntax
Remarks
Updates documents in the search index. When this method is called, every document in the index that contains the specified field will be deleted. Once deleted, a new document with the fields in the Fields collection will be created and added to the index to replace the deleted documents. This is effectively the same as calling Delete followed by IndexDocument.
FieldName specifies the name of the field that will be used to identify the documents that will be updated.
FieldValue specifies the exact text content of the field that will be used to identify the documents that will be updated. A document must have a field with this exact value and the name specified in the FieldName parameter for it to be affected by this method.
Example (Update a document that has incorrect fields)
// Assuming the index has a document with a field named "data"
// and a value of "incorrect data"
// Add the fields we want for the updated document
component.AddDocumentField("data", true, (int)TFieldTypes.ftText, "corrected data", (int)TSearchIndexAnalyzerTypes.atStandard);
component.AddDocumentField("company", true, (int)TFieldTypes.ftText, "nsoftware", (int)TSearchIndexAnalyzerTypes.atStandard);
// Delete the incorrect document and replace it with the new one
//
// If the index contains multiple documents with this field, they
// will all be deleted, but only one document will be created to
// replace them
component.Update("data", "incorrect data");
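The delete-then-replace behavior described above can be modeled in plain Python. This is a conceptual sketch that treats documents as dictionaries, not the component's implementation:

```python
def update(index, field_name, field_value, new_fields):
    # Delete step: drop every document whose field matches exactly.
    remaining = [doc for doc in index if doc.get(field_name) != field_value]
    # IndexDocument step: add a single replacement document.
    remaining.append(dict(new_fields))
    return remaining

index = [
    {"data": "incorrect data", "id": "1"},
    {"data": "incorrect data", "id": "2"},
    {"data": "unrelated", "id": "3"},
]
index = update(index, "data", "incorrect data",
               {"data": "corrected data", "company": "nsoftware"})
# Both matching documents were deleted; a single replacement was added.
print(len(index))  # -> 2
```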
Error Event (Indexer Component)
Fires to provide information about errors during indexing.
Syntax
public event OnErrorHandler OnError;
public delegate void OnErrorHandler(object sender, IndexerErrorEventArgs e);
public class IndexerErrorEventArgs : EventArgs {
  public int ErrorCode { get; }
  public string Description { get; }
}
Public Event OnError As OnErrorHandler
Public Delegate Sub OnErrorHandler(sender As Object, e As IndexerErrorEventArgs)
Public Class IndexerErrorEventArgs
  Inherits EventArgs
  Public ReadOnly Property ErrorCode As Integer
  Public ReadOnly Property Description As String
End Class
Remarks
Fires in case of exceptional conditions during indexing. Normally, the component will raise an exception instead.
ErrorCode contains an error code and Description contains a textual description of the error. For a list of valid error codes and their descriptions, please refer to the Error Codes section.
Log Event (Indexer Component)
This event fires once for each log message.
Syntax
public event OnLogHandler OnLog;
public delegate void OnLogHandler(object sender, IndexerLogEventArgs e);
public class IndexerLogEventArgs : EventArgs {
  public int LogLevel { get; }
  public string Message { get; }
  public string LogType { get; }
}
Public Event OnLog As OnLogHandler
Public Delegate Sub OnLogHandler(sender As Object, e As IndexerLogEventArgs)
Public Class IndexerLogEventArgs
  Inherits EventArgs
  Public ReadOnly Property LogLevel As Integer
  Public ReadOnly Property Message As String
  Public ReadOnly Property LogType As String
End Class
Remarks
This event fires once for each log message generated by the component. The verbosity is controlled by the LogLevel setting.
LogLevel indicates the level of detail of the log message. Possible values are:
0 (None - default) | No events are logged. |
1 (Info) | Informational events are logged. |
2 (Verbose) | Detailed data are logged. |
3 (Debug) | Debug data are logged. |
Message is the log entry.
LogType identifies the type of log entry.
Field Type
Holds information about a document field.
Remarks
This is used to access the data associated with a single document field.
Fields
AnalyzerType
TAnalyzerTypes (read-only)
Default: 0
The analyzer used to index this document field. This determines how the text content of the field is broken down and processed when added to the index.
This value is only applicable when Type is set to ftText. Otherwise, it is ignored.
Possible values are:
0 (atNone) | Does not correspond to any analyzer. |
1 (atStandard) | The most commonly used analyzer. Breaks down text into tokens based on the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29. Removes stop words and changes all letters to lowercase. Does not recognize URLs or emails. For example, the text This is a SAMPLE sentence. support@nsoftware.com https://www.nsoftware.com 1234 will be broken down into the following list of tokens: sample, sentence, support, nsoftware.com, https, www.nsoftware.com, 1234. |
2 (atStop) | The same as atSimple, but this analyzer removes stop words. For example, the text This is a SAMPLE sentence. support@nsoftware.com https://www.nsoftware.com 1234 will be broken down into the following list of tokens: sample, sentence, support, nsoftware, com, https, www, nsoftware, com. |
3 (atWhitespace) | Breaks down text into tokens whenever it encounters a whitespace character without applying any further processing to the input text. For example, the text This is a SAMPLE sentence. support@nsoftware.com https://www.nsoftware.com 1234 will be broken down into the following list of tokens: This, is, a, SAMPLE, sentence., support@nsoftware.com, https://www.nsoftware.com, 1234. |
4 (atSimple) | Breaks down text into tokens based on anything that is not a letter. This analyzer completely discards non-letter characters and changes all letters to lowercase. For example, the text This is a SAMPLE sentence. support@nsoftware.com https://www.nsoftware.com 1234 will be broken down into the following list of tokens: this, is, a, sample, sentence, support, nsoftware, com, https, www, nsoftware, com. |
5 (atEmail) | The same as atStandard, but this analyzer recognizes URLs and emails. For example, the text This is a SAMPLE sentence. support@nsoftware.com https://www.nsoftware.com 1234 will be broken down into the following list of tokens: sample, sentence, support@nsoftware.com, https://www.nsoftware.com, 1234. |
If this value is set to atNone (default), the global analyzer type specified by the Analyzer property will be used to process the text content of the field. If both are set to atNone, the component will index the field without applying any processing to its text content.
InputFile
string (read-only)
Default: ""
The path to a file that contains the text contents of the document field.
InputStream
System.IO.Stream (read-only)
Default: ""
The stream that contains the text content of the document field. This is only applicable when Type is set to ftText.
InputText
string (read-only)
Default: ""
The text content of the document field.
Name
string (read-only)
Default: ""
The name of the document field. This is an identifier that does not have to be unique.
Store
bool (read-only)
Default: False
Determines if the text content of the field will be stored in the index when IndexDocument is called. Fields that are not stored in the index can be searched, but their full text contents will not be retrievable in search results.
This value is only applicable when the field Type is set to ftText. If InputStream or InputFile are used to specify the text contents of the field, this value must be set to false.
Type
TFieldTypes (read-only)
Default: 0
The type of field. This value determines how the field is indexed. Possible values are:
0 (ftText) | Contains text that needs to be tokenized. This is most commonly used for storing human-readable text, such as bodies of text and article titles. When this field is stored to the index, its Text value will be broken down and processed depending on the rules defined by the Analyzer. |
1 (ftString) | Contains text that does not need to be tokenized. The field Text will be treated as a single term. This field type is often used for identifiers, usernames, or text that should only be retrieved by specifying its exact value in a search query. |
2 (ftInt32) | Contains a 32-bit integer value. Fields of this type can only be searched with numeric range queries. |
3 (ftInt64) | A field that contains a 64-bit integer value. Fields of this type can only be searched with numeric range queries. |
4 (ftFloat) | A field that contains a single-precision floating-point number. Fields of this type can only be searched with numeric range queries. |
5 (ftDouble) | A field that contains a double-precision floating-point number. Fields of this type can only be searched with numeric range queries. |
Constructors
Config Settings (Indexer Component)
The component accepts one or more of the following configuration settings. Configuration settings are similar in functionality to properties, but they are rarely used. In order to avoid "polluting" the property namespace of the component, access to these internal properties is provided through the Config method.
Indexer Config Settings
LogLevel: The level of detail that is logged through the Log event. Possible values are:
0 (None - default) | No events are logged. |
1 (Info) | Informational events are logged. |
2 (Verbose) | Detailed data are logged. |
3 (Debug) | Debug data are logged. |
RAMBufferSize: The maximum amount of memory that can be used for caching index changes.
When index changes are made, the component will cache these changes in memory before automatically flushing and writing them to disk. If the amount of memory used is greater than the threshold specified by this setting, the changes will be automatically written to disk in a non-searchable index segment.
Higher values cause the component to flush cached changes less frequently, leading to better indexing performance at the cost of higher memory usage.
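The flush-on-threshold behavior can be sketched as a toy model in Python. The class, units, and threshold below are invented for illustration; the real component measures buffered index changes internally:

```python
class BufferedWriter:
    """Toy model: buffer changes in memory, flush once a size threshold is crossed."""

    def __init__(self, threshold_bytes):
        self.threshold = threshold_bytes
        self.buffer = []
        self.buffered_bytes = 0
        self.flushes = 0

    def add_change(self, change):
        self.buffer.append(change)
        self.buffered_bytes += len(change)
        if self.buffered_bytes > self.threshold:
            self.flush()

    def flush(self):
        # In the real component, this corresponds to writing the cached
        # changes to disk as a non-searchable index segment.
        self.buffer.clear()
        self.buffered_bytes = 0
        self.flushes += 1

w = BufferedWriter(threshold_bytes=10)
for change in ["doc1", "doc2", "doc3", "doc4"]:
    w.add_change(change)
print(w.flushes)  # -> 1 (the third change pushed the buffer past the threshold)
```

A larger threshold means fewer flushes per batch of changes, which mirrors the performance/memory trade-off described above.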
Base Config Settings
GUIAvailable: Whether or not a message loop is available for processing events.
In some non-GUI applications, an invalid message loop may be discovered that will result in errant behavior. In these cases, setting GUIAvailable to false will ensure that the component does not attempt to process external events.
LicenseInfo: Information about the current license, including:
- Product: The product the license is for.
- Product Key: The key the license was generated from.
- License Source: Where the license was found (e.g., RuntimeLicense, License File).
- License Type: The type of license installed (e.g., Royalty Free, Single Server).
- Last Valid Build: The last valid build number for which the license will work.
UseFIPSCompliantAPI: Tells the component whether or not to use FIPS certified APIs.
This setting only works on these components: AS3Receiver, AS3Sender, Atom, Client(3DS), FTP, FTPServer, IMAP, OFTPClient, SSHClient, SCP, Server(3DS), Sexec, SFTP, SFTPServer, SSHServer, TCPClient, TCPServer.
FIPS mode can be enabled by setting the UseFIPSCompliantAPI configuration setting to true. This is a static setting that applies to all instances of all components of the toolkit within the process. It is recommended to enable or disable this setting once before the component has been used to establish a connection. Enabling FIPS while an instance of the component is active and connected may result in unexpected behavior.
For more details, please see the FIPS 140-2 Compliance article.
Note: This setting is applicable only on Windows.
Note: Enabling FIPS compliance requires a special license; please contact sales@nsoftware.com for details.
UseInternalSecurityAPI: Whether or not to use the system security libraries or an internal implementation.
Setting this configuration setting to true tells the component to use the internal implementation instead of using the system security libraries.
On Windows, this setting is set to false by default. On Linux/macOS, this setting is set to true by default.
If using the .NET Standard Library, this setting will be true on all platforms. The .NET Standard library does not support using the system security libraries.
Note: This setting is static. The value set is applicable to all components used in the application.
When this value is set, the product's system dynamic link library (DLL) is no longer required as a reference, as all unmanaged code is stored in that file.