{{DISPLAYTITLE:ML_MCONF}}
{{TAGDEF|ML_MCONF|[integer]|see below}}
Description: This tag sets the maximum number of structures stored in memory that are used for training in the machine learning force field method.
----
The default for {{TAG|ML_MCONF}} depends on the {{TAG|ML_MODE}} setting. Here are the defaults for each mode (see also the sketch after the list):
*{{TAG|ML_MODE}}='TRAIN':
**No {{TAG|ML_AB}} present (learning from scratch): <math>\quad</math> min(1500, max(1,{{TAG|NSW}}))
**{{TAG|ML_AB}} present (continuation of learning): <math>\quad</math> MCONF_AB + min(1500, max(1,{{TAG|NSW}}))
*{{TAG|ML_MODE}}='SELECT': <math>\quad</math> MCONF_AB + 1
*{{TAG|ML_MODE}}='REFIT': <math>\quad</math> MCONF_AB + 1
*{{TAG|ML_MODE}}='REFITBAYESIAN': <math>\quad</math> MCONF_AB + 1
*{{TAG|ML_MODE}}='RUN': <math>\quad</math> 1
using the following definition:
*MCONF_AB = Number of training structures read from the {{TAG|ML_AB}} file.
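The following Python sketch is a minimal illustration of the default rules listed above; it is not code taken from VASP, and the function name and arguments are hypothetical. MCONF_AB is passed as 0 when no {{TAG|ML_AB}} file is present.
<pre>
def default_ml_mconf(ml_mode, nsw=0, mconf_ab=0):
    """Default ML_MCONF as listed above (illustrative, hypothetical helper).

    ml_mode  -- value of ML_MODE ('TRAIN', 'SELECT', 'REFIT', 'REFITBAYESIAN', 'RUN')
    nsw      -- value of NSW (number of ionic steps)
    mconf_ab -- MCONF_AB, i.e. the number of training structures read from
                ML_AB (use 0 if no ML_AB file is present)
    """
    mode = ml_mode.upper()
    if mode == 'TRAIN':
        # learning from scratch (mconf_ab = 0) or continuation of learning:
        # up to 1500 new structures on top of those already in ML_AB
        return mconf_ab + min(1500, max(1, nsw))
    if mode in ('SELECT', 'REFIT', 'REFITBAYESIAN'):
        return mconf_ab + 1
    if mode == 'RUN':
        return 1
    raise ValueError('unknown ML_MODE: ' + ml_mode)

# Examples: training from scratch with NSW = 5000, and refitting an ML_AB
# file that contains 1200 training structures.
print(default_ml_mconf('TRAIN', nsw=5000))        # 1500
print(default_ml_mconf('REFIT', mconf_ab=1200))   # 1201
</pre>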
The default for {{TAG|ML_MODE}}=''TRAIN'' is usually sufficient for solids and easy-to-learn liquids, but {{TAG|ML_MCONF}} should be set to a higher value as soon as the limit is reached. When this happens, the code stops with an error instructing the user to increase {{TAG|ML_MCONF}}.
This tag also sets the maximum number of rows of the design matrix, which is usually a huge matrix. The design matrix is allocated statically at the beginning of the program, since several parts of the code use MPI shared memory and dynamic reallocation of these arrays can cause severe problems on some systems. For this reason, most of the main arrays in the code are allocated statically.
An estimate of the size of the design matrix and of all other large arrays is printed to the {{TAG|ML_LOGFILE}} before allocation. The design matrix is fully distributed in a block-cyclic fashion for scaLAPACK and should scale almost perfectly linearly with the number of processors used.
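To get a feeling for why the design matrix becomes large, the following back-of-the-envelope sketch estimates its memory footprint in double precision. The counting used here is an assumption for illustration only, not the exact layout used by VASP: it takes every training structure to contribute roughly one energy, 3×(number of atoms) force, and 6 stress equations as rows, and bounds the number of columns by the number of local reference configurations ({{TAG|ML_MB}} per element). The authoritative numbers are those printed to the {{TAG|ML_LOGFILE}}.
<pre>
def design_matrix_gib(ml_mconf, natoms, ml_mb, ntypes, bytes_per_entry=8):
    """Rough upper bound on the design-matrix memory (GiB, double precision).

    Assumption for illustration only: each training structure contributes
    about 1 energy + 3*natoms force + 6 stress equations (rows), and there
    are at most ml_mb local reference configurations per element (columns).
    """
    rows = ml_mconf * (1 + 3 * natoms + 6)
    cols = ml_mb * ntypes
    return rows * cols * bytes_per_entry / 1024**3

# Example: 1500 training structures of 100 atoms with 2 element types and
# ML_MB = 1500 give a matrix of roughly 10 GiB (spread over all processes).
print(round(design_matrix_gib(1500, 100, 1500, 2), 1))   # ~10.3
</pre>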
== Related tags and articles ==