
Minimizing logical functions with the Quine-McCluskey method

This article proposes for consideration one possible implementation of an algorithm for minimizing logical (Boolean) functions (LF) given in perfect disjunctive normal form (PDNF) by the Quine-McCluskey method (hereinafter simply McCluskey), together with the problems uncovered while testing it. In the variant studied here, the McCluskey algorithm is implemented in C# using the .NET generic collections.

I would like to note that the task of minimizing an LF is, in my opinion, unjustly overlooked in the context of machine learning, since in essence it implements supervised learning over a certain set of input terms (simple conjunctions) on which the function being optimized takes the true value. Accordingly, this set of input terms, out of the total $2^N$ possible (where N is the number of two-valued categorical, i.e. binary, variables in a term), forms the training sample of a supervised learning task with a known (here, true) value of the objective function. On all other possible terms not included in the training sample, the minimized LF must take the false value.
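As a toy illustration (my own example, not from the original article): for $N = 2$ variables there are $2^2 = 4$ possible terms; if the training sample consists of the terms "01", "10" and "11", the minimization described below yields the two terms "*1" and "1*", i.e. the function x1 ∨ x2, which is true exactly on the sample and false on the one remaining term "00".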

One of the easy-to-implement LF minimization algorithms for an arbitrary number of variables is the McCluskey method. According to the theory, the McCluskey method consists of two main stages:

  1. Finding all simple (prime) terms of the LF using the gluing rules (laws):
    a) (A & B) ∨ (A & !B) ≡ A;
    b) (A ∨ B) & (A ∨ !B) ≡ A;
    where & is the logical "AND" operation, ∨ is the logical "OR" operation, and ! is the logical negation "NOT" (a worked example of rule (a) is given after the list).
  2. Reducing the number of simple terms in the resulting set, posed as the problem of finding an optimal cover of a set by subsets of different lengths.
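For example (an illustration added here, not taken from the original text), in the notation used later in the code, where a term is written with one character per variable, rule (a) glues the pair of terms "110" and "111" into the single term "11*": (x1 & x2 & !x3) ∨ (x1 & x2 & x3) ≡ x1 & x2.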

The implementation code is as follows:
using System;
using System.IO;
using System.Collections.Generic;
using System.Linq;

/// <summary>
/// Base class for a logical function given by a collection of terms
/// </summary>
public abstract class LogicFunction {
    // the terms of the function, one char per variable: '0', '1' or '*'
    public readonly ICollection<char[]> Terms = new LinkedList<char[]>();

    // evaluate the function on a vector of variable values
    public abstract bool Calculate(bool[] X);

    // evaluate the function on a term written as a character string
    public virtual bool Calculate(char[] X) {
        return Calculate(X.Select(p => p != '0').ToArray());
    }
}

/// <summary>
/// Disjunctive normal form
/// </summary>
public class Dnf : LogicFunction {
    public override bool Calculate(bool[] X) {
        bool bResult = false;
        foreach (char[] term in Terms) {
            bool TermVal = true;
            for (int i = 0; i < term.Length; i++) {
                if (term[i] == '*') continue;
                TermVal &= (term[i] != '0' ? X[i] : !X[i]);
            }
            bResult |= TermVal;
        }
        return bResult;
    }
}

/// <summary>
/// Conjunctive normal form
/// </summary>
public class Knf : LogicFunction {
    public override bool Calculate(bool[] X) {
        bool bResult = true;
        foreach (char[] term in Terms) {
            bool TermVal = false;
            for (int i = 0; i < term.Length; i++) {
                if (term[i] == '*') continue;
                TermVal |= (term[i] != '0' ? X[i] : !X[i]);
            }
            bResult &= TermVal;
        }
        return bResult;
    }
}

/// <summary>
/// Minimization of a Boolean function by the Quine-McCluskey method
/// </summary>
public class Quine_McCluskey {
    // the minimization result as a DNF
    private readonly Dnf _result = new Dnf();
    public Dnf Result { get { return _result; } }

    // run the minimization on a set of terms
    public void Start(IEnumerable<char[]> TermInput) {
        LogicFuncMinimize(TermInput, _result.Terms);
    }

    // run the minimization on terms loaded from a file
    public void Start(string sFileName) {
        Start(LoadDsnfFromFile(sFileName));
    }

    // main minimization routine: step 1 (gluing) in one of two variants, then step 2 (cover reduction)
    private static void LogicFuncMinimize(IEnumerable<char[]> X1, ICollection<char[]> OutR) {
        Dictionary<int, LinkedList<TreeNodeEnd>> OutTemp = new Dictionary<int, LinkedList<TreeNodeEnd>>();
        if (X1.First().Length <= 40) {
            // up to 40 variables: the variant based on 64-bit ternary hash codes
            Dictionary<UInt64, TreeNodeEnd> X1Tree = new Dictionary<UInt64, TreeNodeEnd>();
            DeleteDublicatingTerms(X1, X1Tree);
            int iLevelCounter = 0;
            // repeat the gluing until no new terms are produced
            while (X1Tree.Count != 0) {
                Dictionary<UInt64, TreeNodeEnd> X2Tree = new Dictionary<UInt64, TreeNodeEnd>();
                Skleivanie(X1Tree, X2Tree, OutTemp, iLevelCounter++);
                X1Tree = X2Tree;
                GC.Collect(); // forced garbage collection; see the discussion of memory and speed below
            }
        } else {
            // more than 40 variables: the variant based on the ternary search tree
            TreeFuncTerm X1Tree = new TreeFuncTerm();
            DeleteDublicatingTerms(X1, X1Tree);
            int iLevelCounter = 0;
            // repeat the gluing until no new terms are produced
            while (X1Tree.Count != 0) {
                TreeFuncTerm X2Tree = new TreeFuncTerm();
                Skleivanie(X1Tree, X2Tree, OutTemp, iLevelCounter++);
                X1Tree = X2Tree;
                GC.Collect(); // forced garbage collection; see the discussion of memory and speed below
            }
        }
        HashSet<TreeNodeEnd> OutRes = new HashSet<TreeNodeEnd>();
        ReduceRedundancyTerms(OutTemp, OutRes);
        OutR.Clear();
        foreach (TreeNodeEnd term in OutRes) OutR.Add(term.Term);
    }

    // load the source PDNF terms from a text file, one term per line
    private static ICollection<char[]> LoadDsnfFromFile(string sFullFileName) {
        ICollection<char[]> DSNF = new LinkedList<char[]>();
        using (StreamReader InFile = new StreamReader(sFullFileName)) {
            string sLine = "";
            while ((sLine = InFile.ReadLine()) != null) {
                DSNF.Add(sLine.ToArray());
            }
        }
        return DSNF;
    }

    // copy the input terms into the dictionary, dropping duplicates (hash-code variant)
    private static void DeleteDublicatingTerms(IEnumerable<char[]> InX1, Dictionary<UInt64, TreeNodeEnd> OutX2Tree) {
        OutX2Tree.Clear();
        TreeNodeEnd pCurrNode = null;
        foreach (char[] x1 in InX1) {
            UInt64 iCode = GetTermCode(x1);
            if (OutX2Tree.ContainsKey(iCode)) continue;
            pCurrNode = new TreeNodeEnd(x1, null, null, pCurrNode, OutX2Tree.Count);
            OutX2Tree.Add(iCode, pCurrNode);
        }
    }

    // copy the input terms into the search tree, dropping duplicates (tree variant)
    private static void DeleteDublicatingTerms(IEnumerable<char[]> InX1, TreeFuncTerm OutX2Tree) {
        OutX2Tree.Clear();
        foreach (char[] x1 in InX1) {
            OutX2Tree.AddTerm(x1, null, null);
        }
    }

    // one gluing pass over the terms stored in the ternary search tree
    private static void Skleivanie(TreeFuncTerm X1Tree, TreeFuncTerm X2Tree, Dictionary<int, LinkedList<TreeNodeEnd>> OutResult, int iLevel) {
        LinkedList<TreeNodeEnd> OutR = new LinkedList<TreeNodeEnd>();
        OutResult.Add(iLevel, OutR);
        Dictionary<int, TreeNodeEnd> FindTerms = new Dictionary<int, TreeNodeEnd>();
        TreeNodeEnd x1 = X1Tree.Last;
        while (x1 != null) {
            // find all terms differing from x1 in exactly one position
            X1Tree.SearchDiff1(x1, FindTerms);
            if (FindTerms.Count == 0) {
                // the term glued with nothing: it is a result of step 1 (a prime implicant)
                OutR.AddLast(x1);
            } else {
                foreach (KeyValuePair<int, TreeNodeEnd> kvp in FindTerms) {
                    // handle each pair of terms only once
                    if (x1.Number < kvp.Value.Number) continue;
                    // temporarily put '*' into the differing position and add the glued term
                    char cSymbSav = x1.Term[kvp.Key];
                    x1.Term[kvp.Key] = '*';
                    X2Tree.AddTerm(x1.Term, x1, kvp.Value);
                    x1.Term[kvp.Key] = cSymbSav;
                }
            }
            x1 = x1.Prev;
        }
    }

    // ternary code of a term: sum over positions of digit * 3^position, with '0'->0, '1'->1, '*'->2
    private static UInt64 GetTermCode(char[] pTerm) {
        UInt64 iMultip = 1, iCode = 0;
        for (int i = 0; i < pTerm.Length; i++) {
            switch (pTerm[i]) {
                case '0': break;
                case '1': iCode += iMultip; break;
                case '*': iCode += (iMultip + iMultip); break;
            }
            iMultip *= 3;
        }
        return iCode;
    }

    // one gluing pass over the terms stored in the dictionary of ternary hash codes
    private static void Skleivanie(Dictionary<UInt64, TreeNodeEnd> X1Tree, Dictionary<UInt64, TreeNodeEnd> X2Tree, Dictionary<int, LinkedList<TreeNodeEnd>> OutResult, int iLevel) {
        LinkedList<TreeNodeEnd> OutR = new LinkedList<TreeNodeEnd>();
        OutResult.Add(iLevel, OutR);
        foreach (KeyValuePair<UInt64, TreeNodeEnd> x1 in X1Tree) {
            bool bIsSkleiv = false;
            UInt64 iMultip = 1;
            for (int iPos = 0; iPos < x1.Value.Term.Length; iPos++) {
                char cSymbSav = x1.Value.Term[iPos];
                if (cSymbSav != '*') {
                    // code of the term with the opposite digit in this position
                    UInt64 iCode;
                    if (cSymbSav == '0') iCode = x1.Key + iMultip;
                    else //_if (cSymbSav == '1')
                        iCode = x1.Key - iMultip;
                    TreeNodeEnd pSkleivNode = null;
                    X1Tree.TryGetValue(iCode, out pSkleivNode);
                    if (pSkleivNode != null) {
                        bIsSkleiv = true;
                        // handle each pair of terms only once
                        if (x1.Value.Number > pSkleivNode.Number) {
                            // code of the glued term, with '*' in this position
                            if (cSymbSav == '0') iCode = x1.Key + iMultip + iMultip;
                            else //_if (cSymbSav == '1')
                                iCode = x1.Key + iMultip;
                            x1.Value.Term[iPos] = '*';
                            if (!X2Tree.ContainsKey(iCode)) X2Tree.Add(iCode, new TreeNodeEnd(x1.Value.Term, x1.Value, pSkleivNode, null, X2Tree.Count));
                            x1.Value.Term[iPos] = cSymbSav;
                        }
                    }
                }
                iMultip *= 3;
            }
            if (!bIsSkleiv) {
                // the term glued with nothing: it is a result of step 1 (a prime implicant)
                OutR.AddLast(x1.Value);
            }
        }
    }

    // step 2: discard redundant terms by approximately solving the covering problem
    private static void ReduceRedundancyTerms(Dictionary<int, LinkedList<TreeNodeEnd>> X, HashSet<TreeNodeEnd> ResultNumbers) {
        // clear the output set
        ResultNumbers.Clear();
        // the source (level-0) terms that still have to be covered
        HashSet<TreeNodeEnd> pNumbersForAdd = new HashSet<TreeNodeEnd>();
        // how many candidate terms cover each source term
        Dictionary<TreeNodeEnd, uint> Numbers2Terms = new Dictionary<TreeNodeEnd, uint>();
        // for every candidate term, the set of source terms it covers
        Dictionary<TreeNodeEnd, HashSet<TreeNodeEnd>> Terms2Numbers = new Dictionary<TreeNodeEnd, HashSet<TreeNodeEnd>>();
        // walk the candidate terms from the deepest gluing level down and collect the source terms each was built from
        foreach (int iLevel in X.Keys.OrderByDescending(p => p).AsQueryable()) {
            foreach (TreeNodeEnd term in X[iLevel]) {
                HashSet<TreeNodeEnd> TermNumberCont = new HashSet<TreeNodeEnd>();
                HashSet<TreeNodeEnd> ListNumbers = new HashSet<TreeNodeEnd>();
                ListNumbers.Add(term);
                while (ListNumbers.Count > 0) {
                    TreeNodeEnd pCurrNode = ListNumbers.First();
                    ListNumbers.Remove(pCurrNode);
                    if (pCurrNode.Parent1 != null && pCurrNode.Parent2 != null) {
                        // climb to the original (parentless) source terms
                        ListNumbers.Add(pCurrNode.Parent1);
                        ListNumbers.Add(pCurrNode.Parent2);
                    } else {
                        if (!Numbers2Terms.ContainsKey(pCurrNode)) Numbers2Terms.Add(pCurrNode, 0);
                        Numbers2Terms[pCurrNode]++;
                        // remember the source term covered by this candidate
                        TermNumberCont.Add(pCurrNode);
                    }
                }
                //int iIntersectNumbers = pNumbersForAdd.Intersect(TermNumberCont).Count();
                //if (iIntersectNumbers < TermNumberCont.Count)
                {
                    pNumbersForAdd.UnionWith(TermNumberCont);
                    Terms2Numbers.Add(term, TermNumberCont);
                }
            }
        }
        // order the source terms by how few candidate terms cover them (rarest first)
        pNumbersForAdd = new HashSet<TreeNodeEnd>(pNumbersForAdd.OrderBy(p => Numbers2Terms[p]));
        // candidate terms chosen for the cover, with the source terms they cover
        Dictionary<TreeNodeEnd, HashSet<TreeNodeEnd>> SelectedTerms = new Dictionary<TreeNodeEnd, HashSet<TreeNodeEnd>>();
        // greedily build the cover
        while (pNumbersForAdd.Count > 0) {
            // take the rarest still-uncovered source term
            TreeNodeEnd pSrcTerm = pNumbersForAdd.First();
            // among the candidates covering it, pick the one covering the most still-uncovered source terms
            TreeNodeEnd BestTerm = null;
            int iBestTermNumbersCount = 0;
            foreach (KeyValuePair<TreeNodeEnd, HashSet<TreeNodeEnd>> CurrTerm in Terms2Numbers) {
                if (!CurrTerm.Value.Contains(pSrcTerm)) continue;
                int iCurrTermNumbersCount = CurrTerm.Value.Intersect(pNumbersForAdd).Count();
                if ((BestTerm == null) || (iBestTermNumbersCount < iCurrTermNumbersCount)) {
                    BestTerm = CurrTerm.Key;
                    iBestTermNumbersCount = iCurrTermNumbersCount;
                }
            }
            // add the best candidate to the cover and cross out everything it covers
            ResultNumbers.Add(BestTerm);
            HashSet<TreeNodeEnd> pBestTermSrcNodes = Terms2Numbers[BestTerm];
            Terms2Numbers.Remove(BestTerm);
            SelectedTerms.Add(BestTerm, pBestTermSrcNodes);
            pNumbersForAdd.RemoveWhere(p => pBestTermSrcNodes.Contains(p));
        }
        // keep only the selected terms
        Terms2Numbers = SelectedTerms;
        // finally, drop terms whose covered source terms are all covered by other selected terms
        TreeNodeEnd termForDelete = null;
        do {
            termForDelete = null;
            int iTermForDeleteUsedNumbCount = 0;
            foreach (KeyValuePair<TreeNodeEnd, HashSet<TreeNodeEnd>> term1 in SelectedTerms) {
                int iUsedNumbCounter = 0;
                foreach (TreeNodeEnd numb in term1.Value) {
                    bool bFind = false;
                    foreach (KeyValuePair<TreeNodeEnd, HashSet<TreeNodeEnd>> term2 in SelectedTerms) {
                        if (term1.Key == term2.Key) continue;
                        if (bFind = term2.Value.Contains(numb)) break;
                    }
                    if (bFind) iUsedNumbCounter++;
                }
                if ((iUsedNumbCounter == term1.Value.Count) && ((termForDelete == null) || (iUsedNumbCounter <= iTermForDeleteUsedNumbCount))) {
                    termForDelete = term1.Key;
                    iTermForDeleteUsedNumbCount = iUsedNumbCounter;
                }
            }
            if (termForDelete != null) {
                ResultNumbers.Remove(termForDelete);
                SelectedTerms.Remove(termForDelete);
            }
        } while (termForDelete != null);
    }
}

/// <summary>
/// Ternary search tree of terms
/// </summary>
public class TreeFuncTerm {
    // root of the tree
    private readonly TreeNodeMiddle rootNode = new TreeNodeMiddle();

    // the leaf node added last; the leaves form a singly linked list
    private TreeNodeEnd _lastTreeNode = null;
    public TreeNodeEnd Last { get { return _lastTreeNode; } }

    // number of leaf nodes (terms) in the tree
    private int _count = 0;
    public int Count { get { return _count; } }

    public TreeFuncTerm() { Clear(); }

    // remove all terms from the tree
    public void Clear() {
        rootNode.Clear();
        _count = 0;
        _lastTreeNode = null;
    }

    // add a term to the tree (duplicates are ignored)
    public void AddTerm(char[] term, TreeNodeEnd pParent1, TreeNodeEnd pParent2) {
        TreeNodeMiddle pCurrNode = rootNode;
        for (int j = 0; j < term.Length; j++) {
            TreeNodeBase item = pCurrNode.GetChild(term[j]);
            if (item == null) {
                if (j + 1 < term.Length) {
                    item = new TreeNodeMiddle();
                } else {
                    item = new TreeNodeEnd(term, pParent1, pParent2, _lastTreeNode, _count);
                    _lastTreeNode = (TreeNodeEnd)item;
                    _count++;
                }
                pCurrNode.AddChild(item, term[j]);
            }
            if (item is TreeNodeMiddle) pCurrNode = (TreeNodeMiddle)item;
        }
    }

    // for the given leaf x1, find all terms in the tree that differ from it in exactly one position;
    // the result maps the position of the difference to the leaf found
    public void SearchDiff1(TreeNodeEnd x1, Dictionary<int, TreeNodeEnd> result) {
        result.Clear();
        TreeNodeMiddle pCurrNode = rootNode;
        Dictionary<int, TreeNodeMiddle> OneDiffNodesList = new Dictionary<int, TreeNodeMiddle>();
        for (int iPos = 0; iPos < x1.Term.Length; iPos++) {
            char cSymbol = x1.Term[iPos];
            if (pCurrNode != null) {
                if (cSymbol != '*') {
                    TreeNodeBase item = pCurrNode.GetChild(cSymbol == '0' ? '1' : '0');
                    if (item != null) {
                        if (item is TreeNodeMiddle) {
                            OneDiffNodesList.Add(iPos, (TreeNodeMiddle)item);
                        } else //if (item is TreeNodeEnd)
                        {
                            // found a term differing only in this position
                            result.Add(iPos, (TreeNodeEnd)item);
                        }
                    }
                }
                TreeNodeBase pNextNode = pCurrNode.GetChild(cSymbol);
                if ((pNextNode != null) && (pNextNode is TreeNodeMiddle)) {
                    pCurrNode = (TreeNodeMiddle)pNextNode;
                }
            }
            // advance the branches that already contain one difference along the current symbol
            for (int iKey = 0; iKey < iPos; iKey++) {
                if (!OneDiffNodesList.ContainsKey(iKey)) continue;
                TreeNodeMiddle pValue = OneDiffNodesList[iKey];
                TreeNodeBase item = pValue.GetChild(cSymbol);
                if (item == null) {
                    OneDiffNodesList.Remove(iKey);
                } else if (item is TreeNodeMiddle) {
                    OneDiffNodesList[iKey] = (TreeNodeMiddle)item;
                } else //if (item is TreeNodeEnd)
                {
                    // found a term differing only in position iKey
                    result.Add(iKey, (TreeNodeEnd)item);
                }
            }
        }
    }
}

/// <summary>
/// Common interface of the tree nodes
/// </summary>
public interface TreeNodeBase {
    void Clear();
}

/// <summary>
/// Leaf node of the tree, holding one term
/// </summary>
public class TreeNodeEnd : TreeNodeBase {
    // the term stored in this leaf
    private char[] _term;
    public char[] Term { get { return _term; } }

    // the previously added leaf (singly linked list of all leaves)
    private TreeNodeEnd _prev;
    public TreeNodeEnd Prev { get { return _prev; } }

    // the two parent terms this term was glued from (null for source terms)
    private TreeNodeEnd _parent1;
    public TreeNodeEnd Parent1 { get { return _parent1; } }
    private TreeNodeEnd _parent2;
    public TreeNodeEnd Parent2 { get { return _parent2; } }

    // ordinal number of the node, used to process each glued pair only once
    public readonly int Number;

    public TreeNodeEnd(char[] pTermRef, TreeNodeEnd pParent1, TreeNodeEnd pParent2, TreeNodeEnd pPrevTreeNode, int iNumber) {
        pTermRef.CopyTo(_term = new char[pTermRef.Length], 0);
        _parent1 = pParent1;
        _parent2 = pParent2;
        _prev = pPrevTreeNode;
        Number = iNumber;
    }

    // release the references held by the node
    public void Clear() {
        _term = null;
        _parent1 = null;
        _parent2 = null;
        _prev = null;
    }
}

/// <summary>
/// Intermediate node of the tree
/// </summary>
public class TreeNodeMiddle : TreeNodeBase {
    // children for the symbols '0', '1' and '*'
    private TreeNodeBase Childs0;
    private TreeNodeBase Childs1;
    private TreeNodeBase Childs2;

    // recursively clear the subtree
    public void Clear() {
        if ((Childs0 != null) && (Childs0 is TreeNodeMiddle)) ((TreeNodeMiddle)Childs0).Clear();
        if ((Childs1 != null) && (Childs1 is TreeNodeMiddle)) ((TreeNodeMiddle)Childs1).Clear();
        if ((Childs2 != null) && (Childs2 is TreeNodeMiddle)) ((TreeNodeMiddle)Childs2).Clear();
        Childs0 = Childs1 = Childs2 = null;
    }

    // child for the given symbol
    public TreeNodeBase GetChild(char cSymbol) {
        switch (cSymbol) {
            case '0': return Childs0;
            case '1': return Childs1;
            case '*': return Childs2;
        }
        return null;
    }

    // attach a child for the given symbol
    public void AddChild(TreeNodeBase newChild, char cSymbol) {
        switch (cSymbol) {
            case '0': Childs0 = newChild; break;
            case '1': Childs1 = newChild; break;
            case '*': Childs2 = newChild; break;
        }
    }
}


The Quine_McCluskey class is the actual implementation of the algorithm; it is supported by the helper classes and interfaces Dnf, TreeNodeBase, TreeNodeMiddle, TreeNodeEnd, and TreeFuncTerm. To start the minimization, call one of the overloaded Start methods, as in the usage sketch below.
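Below is a minimal usage sketch of the class (the term values and the file name here are invented for illustration; they are not part of the article's code):

using System;
using System.Collections.Generic;

static class UsageSketch
{
    static void Main()
    {
        // each term is one character per input variable, '0' or '1' (the terms here are made up)
        var terms = new List<char[]>
        {
            "110".ToCharArray(),
            "111".ToCharArray(),
            "011".ToCharArray()
        };

        var qmc = new Quine_McCluskey();
        qmc.Start(terms);               // or qmc.Start("terms.txt") to read one term per line from a file

        foreach (char[] t in qmc.Result.Terms)
            Console.WriteLine(new string(t));                                  // minimized terms over the alphabet '0', '1', '*'

        Console.WriteLine(qmc.Result.Calculate(new[] { true, true, false })); // evaluate one input vector
    }
}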

In step 1, terms are repeatedly compared pairwise to find opportunities for gluing. Two terms glue if they differ from each other in exactly one position, where one has '0' and the other '1'. The glued source (parent) terms are excluded from further work, and in their place the child term, with '*' in the position where the parents differed, takes part in the next iteration of step 1. In this way the binary alphabet of the source input terms is extended inside the algorithm to the ternary alphabet '0', '1', '*'.
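The gluing test itself can be written as a small standalone helper; the sketch below is mine and is separate from the article's Skleivanie procedures, but it captures the rule just described:

static class GluingSketch
{
    // Returns the glued term, or null if the two terms of equal length do not glue.
    public static char[] TryGlue(char[] a, char[] b)
    {
        int diffPos = -1;
        for (int i = 0; i < a.Length; i++)
        {
            if (a[i] == b[i]) continue;                     // positions must match, including '*'
            if (a[i] == '*' || b[i] == '*' || diffPos != -1)
                return null;                                // '*' against a digit, or a second difference
            diffPos = i;
        }
        if (diffPos == -1) return null;                     // identical terms do not glue
        char[] glued = (char[])a.Clone();
        glued[diffPos] = '*';                               // the differing position becomes '*'
        return glued;
    }
}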

Step 1 is implemented in the Quine_McCluskey class by the LogicFuncMinimize procedure in two fundamentally different ways. The first variant, based on 64-bit integer hash codes and lookups in a .NET Dictionary<UInt64, ...>, is used when the number of input variables of the Boolean function is at most 40; the second, based on lookups in a ternary search tree, is activated for a larger number of variables. The split exists because the first variant runs somewhat faster than the second and is therefore preferred, but it is only correct for up to 40 input variables: beyond that, the bit width of the integer hash code used by the algorithm is exceeded. Indeed, since ternary logic is used for the terms inside the algorithm, with 41 input variables the maximum hash code value ($3^{41} - 1$) already exceeds the maximum value ($2^{64} - 1$) that can be stored in a 64-bit variable.
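This boundary is easy to sanity-check with a few lines (my own snippet, not part of the article's code): the largest ternary code of a term over N variables is $3^N - 1$, which is compared here against UInt64.MaxValue:

using System;
using System.Numerics;

static class CodeRangeCheck
{
    static void Main()
    {
        BigInteger ulongMax = UInt64.MaxValue;                      // 2^64 - 1
        // maximum ternary code of a term with N variables is 3^N - 1
        Console.WriteLine(BigInteger.Pow(3, 40) - 1 <= ulongMax);   // True:  40 variables still fit
        Console.WriteLine(BigInteger.Pow(3, 41) - 1 <= ulongMax);   // False: 41 variables overflow
    }
}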

The second variant of step 1, used when the input terms have more than 40 variables, is based on the term search tree TreeFuncTerm. The intermediate nodes of the tree are instances of the TreeNodeMiddle class; each of them can refer to at most three child nodes, depending on which terms have been added to the tree. All leaf nodes are instances of the TreeNodeEnd class, and they all lie at the same depth from the root, equal to the number of input variables. In addition, each leaf node holds a reference to the leaf that was added before it, so together the leaves form a singly linked list; this list is used to quickly enumerate all leaf nodes of the search tree during gluing. (A much-simplified sketch of the underlying trie idea is given below.)
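For orientation, here is a heavily simplified sketch of the idea behind such a ternary trie (my own illustration; the article's TreeFuncTerm additionally stores parent links, ordinal numbers and the linked list of leaves):

class TernaryTrieSketch
{
    // every level of the trie corresponds to one position of the term;
    // each node has at most three children, one per symbol '0', '1', '*'
    private readonly TernaryTrieSketch[] _children = new TernaryTrieSketch[3];
    private bool _isTerm;

    private static int Slot(char c) { return c == '0' ? 0 : c == '1' ? 1 : 2; }

    public void Add(string term)
    {
        TernaryTrieSketch node = this;
        foreach (char c in term)
        {
            int s = Slot(c);
            node = node._children[s] ?? (node._children[s] = new TernaryTrieSketch());
        }
        node._isTerm = true;    // mark the leaf reached after the last symbol
    }

    public bool Contains(string term)
    {
        TernaryTrieSketch node = this;
        foreach (char c in term)
        {
            node = node._children[Slot(c)];
            if (node == null) return false;
        }
        return node._isTerm;
    }
}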

If at step 1 no other term is found that differs from a given term in exactly one position, i.e. the term does not glue with anything, it is treated as one of the results of step 1, is excluded from further gluing, and is passed to the input of step 2 of the algorithm, which is implemented in the ReduceRedundancyTerms procedure.

Discarding redundant terms in ReduceRedundancyTerms is performed with an approximate algorithm for the problem of covering a set by subsets of different lengths. A cover close to the shortest one is produced by a covering-table transformation algorithm based on the "minimum column / maximum row" method.

Roughly, it works as follows (a standalone code sketch follows the note below):

  1. The source table is taken as the current working covering table; the set of cover rows starts out empty;
  2. In the current table, the column with the fewest ones is selected. Among the rows that have a one in this column, the row with the most ones is chosen. That row is added to the cover, and the current table is shortened by crossing out every column in which the chosen row has a one;
  3. If the table still contains columns that are not crossed out, step 2 is repeated; otherwise the cover has been constructed.

Note: when counting the ones in a row, only the ones in columns that have not yet been crossed out are taken into account.
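A self-contained sketch of this greedy procedure might look as follows (my own simplified data representation, not the article's ReduceRedundancyTerms: each row of the covering table is modelled as the set of source-term indices it covers):

using System.Collections.Generic;
using System.Linq;

static class CoverSketch
{
    // rows: for each candidate term, the set of source-term indices it covers;
    // columns: the indices of all source terms that must be covered
    public static List<int> GreedyCover(List<HashSet<int>> rows, HashSet<int> columns)
    {
        var cover = new List<int>();
        var uncovered = new HashSet<int>(columns);
        while (uncovered.Count > 0)
        {
            // step 2a: the still-uncovered column contained in the fewest rows
            int rarestCol = uncovered.OrderBy(c => rows.Count(r => r.Contains(c))).First();
            // step 2b: among the rows covering that column, the one covering the most uncovered columns
            int bestRow = Enumerable.Range(0, rows.Count)
                .Where(i => rows[i].Contains(rarestCol))
                .OrderByDescending(i => rows[i].Count(c => uncovered.Contains(c)))
                .First();
            cover.Add(bestRow);
            // step 2c: cross out every column covered by the chosen row
            uncovered.ExceptWith(rows[bestRow]);
        }
        return cover;
    }
}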

Two test functions are used to test the algorithm. The first procedure, for a given number of input variables N, generates all $2^N$ terms, computes a (pseudo)random value of the LF for each term, and writes to a file only those terms on which the function is TRUE. Afterwards, the correctness of the minimized form is checked by evaluating it on every term and comparing the result with the original value.

TestQuineMcCluskeyRandom:
public static void TestQuineMcCluskeyRandom(string sFileName, int iVariableAmount)
{
    int iTotalCombines = 1 << iVariableAmount;
    ICollection<KeyValuePair<string, bool>> FuncResult = new List<KeyValuePair<string, bool>>(iTotalCombines);
    File.Delete(sFileName);
    Random rnd = new Random();
    int iLogicFuncMask = 0;
    while (iLogicFuncMask == 0) iLogicFuncMask = rnd.Next(iTotalCombines);
    for (int iCounter = 0; iCounter < iTotalCombines; iCounter++)
    {
        int iCurValue = iCounter;
        string sLine = "";
        for (int i = 0; i < iVariableAmount; i++)
        {
            sLine += (iCurValue % 2).ToString();
            iCurValue /= 2;
        }
        bool bFuncValue = (iCounter & iLogicFuncMask) > 0;
        if (bFuncValue)
        {
            File.AppendAllText(sFileName, sLine + Environment.NewLine);
        }
        FuncResult.Add(new KeyValuePair<string, bool>(sLine, bFuncValue));
    }
    // run the minimization
    DateTime DtStart = DateTime.Now;
    Console.WriteLine("Start - " + DtStart.ToLongTimeString());
    Quine_McCluskey Logic = new Quine_McCluskey();
    Logic.Start(sFileName);
    DateTime DtEnd = DateTime.Now;
    TimeSpan Elapsed = DtEnd - DtStart;
    Console.WriteLine("End - " + DtEnd.ToLongTimeString());
    Console.WriteLine("Duration - " + String.Format("{0:00}:{1:00}:{2:00}", Elapsed.Hours, Elapsed.Minutes, Elapsed.Seconds));
    // verify the minimized function against the original values
    int iErrorsCounter = 0;
    foreach (KeyValuePair<string, bool> kvp in FuncResult)
    {
        if (Logic.Result.Calculate(kvp.Key.ToArray()) != kvp.Value) iErrorsCounter++;
    }
    Console.WriteLine("Number of terms = " + Logic.Result.Terms.Count);
    Console.WriteLine("Number of errors = " + iErrorsCounter);
    Console.ReadLine();
}
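A run of this test could look, for example, like this (the file name and the number of variables are arbitrary choices of mine):

TestQuineMcCluskeyRandom("test_10_vars.txt", 10);   // 2^10 = 1024 combinations; the TRUE terms go to the file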


The second test procedure loads from a file the set of terms saved earlier by TestQuineMcCluskeyRandom, on which the Boolean function must be TRUE, and verifies the correctness of the minimized function in the same way.

TestQuineMcCluskeyFile:
public static void TestQuineMcCluskeyFile(string sFileName, int iVariableAmount)
{
    int iTotalCombines = 1 << iVariableAmount;
    ICollection<KeyValuePair<string, bool>> FuncResult = new List<KeyValuePair<string, bool>>(iTotalCombines);
    string sFileContent = File.ReadAllText(sFileName);
    for (int iCounter = 0; iCounter < iTotalCombines; iCounter++)
    {
        int iCurValue = iCounter;
        string sLine = "";
        for (int i = 0; i < iVariableAmount; i++)
        {
            sLine += (iCurValue % 2).ToString();
            iCurValue /= 2;
        }
        bool bFuncValue = sFileContent.Contains(sLine + Environment.NewLine);
        FuncResult.Add(new KeyValuePair<string, bool>(sLine, bFuncValue));
    }
    // run the minimization
    DateTime DtStart = DateTime.Now;
    Console.WriteLine("Start - " + DtStart.ToLongTimeString());
    Quine_McCluskey Logic = new Quine_McCluskey();
    Logic.Start(sFileName);
    DateTime DtEnd = DateTime.Now;
    TimeSpan Elapsed = DtEnd - DtStart;
    Console.WriteLine("End - " + DtEnd.ToLongTimeString());
    Console.WriteLine("Duration - " + String.Format("{0:00}:{1:00}:{2:00}", Elapsed.Hours, Elapsed.Minutes, Elapsed.Seconds));
    // verify the minimized function against the original values
    int iErrorsCounter = 0;
    foreach (KeyValuePair<string, bool> kvp in FuncResult)
    {
        if (Logic.Result.Calculate(kvp.Key.ToArray()) != kvp.Value) iErrorsCounter++;
    }
    Console.WriteLine("Number of terms = " + Logic.Result.Terms.Count);
    Console.WriteLine("Number of errors = " + iErrorsCounter);
    Console.ReadLine();
}


Testing this implementation of the McCluskey algorithm revealed an essential drawback: when the number of input variables in the terms is 16 or more, the program aborts with an out-of-memory error in the Skleivanie procedures on the lines:

X2Tree.Add(iCode, new TreeNodeEnd(x1.Value.Term, x1.Value, pSkleivNode, null, X2Tree.Count));
X2Tree.AddTerm(x1.Term, x1, kvp.Value);

This problem occurs in both the first and the second variants of step 1 (the order of the lines above corresponds to the two variants briefly described at the beginning of the article).

However, it should be noted that the low-memory problem arises only when the optimization is run on a set of input terms whose size is close to the maximum $2^N$, where N is the number of variables. If a reduced (incomplete) set with $Q < 2^N$ terms is used, the problem does not appear until the size of the input set exceeds a certain threshold $Q_{th}$ that is critical for the algorithm.

As one possible way around the problem, defining TreeNodeEnd and TreeNodeMiddle as C# structs rather than classes was investigated. But because of the fundamental difference between classes and structs, namely that a struct is a value type and one cannot keep a shared reference to it, this path turned out to be a dead end: the implementation in question is built entirely on references to TreeNodeEnd and TreeNodeMiddle objects (a small illustration follows).
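A minimal illustration of that difference (my own snippet, not from the article):

using System;

static class ValueVsReferenceSketch
{
    struct NodeStruct { public int Number; }
    class  NodeClass  { public int Number; }

    static void Main()
    {
        var s1 = new NodeStruct { Number = 1 };
        var s2 = s1;                    // a struct is copied on assignment
        s2.Number = 2;
        Console.WriteLine(s1.Number);   // prints 1: the original copy is untouched

        var c1 = new NodeClass { Number = 1 };
        var c2 = c1;                    // a class variable is a reference to the same object
        c2.Number = 2;
        Console.WriteLine(c1.Number);   // prints 2: both variables see the same node
    }
}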

Another way to fight the memory problem was to clear memory as aggressively as possible with the garbage collector before each iteration inside step 1, where the peak memory consumption occurs. To do this, explicit garbage collector calls were added to the LogicFuncMinimize code. It turned out that the running time with these calls is noticeably shorter than without them, although one would expect the extra collector work to worsen the result. A possible explanation of this "phenomenon" is that freeing memory reduces heap fragmentation, which speeds up the search for free blocks during the subsequent mass allocations.
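If one wants to experiment further (this is my suggestion, not code from the article), a "deeper" blocking collection can be tried in place of the single GC.Collect() call, and the two variants can then be timed with System.Diagnostics.Stopwatch:

using System;

static class GcSketch
{
    // A fuller blocking collection one could try between gluing iterations
    // instead of the single GC.Collect() call used in LogicFuncMinimize.
    public static void FullCollect()
    {
        GC.Collect();                    // collect currently unreachable objects
        GC.WaitForPendingFinalizers();   // let pending finalizers run
        GC.Collect();                    // collect whatever the finalizers released
    }
}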

I do not see any other ways to overcome the memory shortage. This raises the question: can anything be changed so that the algorithm runs stably on the full set of input terms with at least up to 30 variables, or is this an inherent property of the algorithm that cannot be fixed?

Nevertheless, despite the possible memory problems, the proposed implementation of the McCluskey algorithm can be used in real-world problems as a reasonably fast and reliable optimizer of an LF given by the set of terms of its input variables on which the function takes the true value, provided that the number of variables N < 16 or the number of terms Q << $2^N$.

Source: https://habr.com/ru/post/328506/

