
Implementation of logical function minimization by the Quine-McCluskey method with an incomplete input set

This article is, to some extent, a continuation of my article on the minimization of logical functions by the Quine-McCluskey method ( https://habr.com/post/328506 ). That article dealt with completely defined logical functions (this was not stated explicitly, only implied). In reality such a case is quite rare, occurring only when the number of input variables is small. Far more often a logical function is partially (incompletely) defined: its values are given only for a part Q of the full set P = 2^N of possible combinations (terms) of its arguments (variables), whose number is N, i.e. Q < P. This is the situation encountered in most practical applications of logic-function optimization algorithms. Indeed, if the number of input variables is N = 30, which is an ordinary case, for example, in financial markets, then the input training sample would have to contain on the order of 2^30 > 10^9 elementary unique terms. Such a data array is not found in every very large organization, let alone available to private individuals; it is already the domain of BigData, data centers, and so on.

Therefore, in practice, the functions to be minimized will most often be incompletely defined, simply because of the lack of the necessary amount of accumulated data or for various other objective reasons (for example, there is not enough space to store it). The question arises whether this difficulty can be "circumvented" when using an algorithm that works with a completely defined set of terms of a logical function, such as the one from my previous article.


The standard practice in this case is to complete the incomplete input set of variable values (terms) in such a way that the completion gives the optimal result for the existing data set. But then one faces the problem of enumerating all possible completions, of which there are V = 2^(P−Q), in order to select the best one according to a given criterion. Obviously, for the values of Q and P that occur in practice, the number of completions to be tried is astronomically large, and this approach is unworkable because of its enormous computational cost.
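For example, even in a toy case with N = 4 variables the full set contains P = 2^4 = 16 terms; if the function is specified on only Q = 10 of them, the remaining 6 terms can be completed in V = 2^(16−10) = 64 different ways. For realistic values such as P = 2^30 and Q on the order of 10^6, the exponent P − Q alone makes the enumeration hopeless.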
Thus, a different approach is needed, one that eliminates the need to iterate over the possible completions. It is therefore necessary to modify the original algorithm, which initially works only with a completely defined input set, so that it can also work with a truncated one. Exactly such an implementation is proposed in this article; it is based on processing, during minimization, two incomplete lists of terms simultaneously: those on which the function is defined as FALSE (0) and those on which it is TRUE (1).

From the point of view of machine learning, the Quine-McCluskey algorithm implements the supervised-learning paradigm: during training (in this case, minimization) the corresponding output values of the objective function participate together with the input data. Let me recall that, according to the theory, the basic Quine-McCluskey method consists of two main stages:
  1. Stage. Finding all simple terms of the logical function (LF) using the gluing rules (laws):
    a) (A & B) ∨ (A & !B) ⇔ A;
    b) (A ∨ B) & (A ∨ !B) ⇔ A;
    where & is the logical "AND" operation; ∨ is the logical "OR" operation; ! is the logical negation "NOT". From these formulas it follows that two terms are glued together if they differ from each other in exactly one position of the variables. In the position where the two terms differ, the sign "*" is put (a small sketch of this check is given after this list). Thus, compared with the original, the alphabet of the glued terms expands to three values:
    • 0 => false value;
    • 1 => true value;
    • 2 => glued variable (*).
  2. Stage. Minimizing the number of glued terms obtained after the first stage, posed as the task of finding an optimal coverage of the initial set of Q terms. Since each output term covers only a certain subset of the source terms, it is necessary to choose a minimum set of output terms whose associated subsets, of various sizes, together cover all the original input terms. Covering here means that a bitwise comparison of the output term with the input term yields a true value. For example, let an output glued term have the form 10*0110*.
    Then it covers the term 10101100:
    10*0110* & 10101100 = TRUE
    but does not cover the term 00101100:
    10*0110* & 00101100 = FALSE
    That is, the input and output terms must coincide everywhere except the positions holding the "*" symbol: in such a position the variable of the input term may take any value, since there the variable is excluded from consideration.
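To make both operations concrete, here is a minimal standalone sketch of the two checks. The names TermOps, TryGlue and Covers are illustrative and are not part of the implementation below, where the same checks live inside the Skleivanie function and Dnf.Calculate; the 0/1/2 byte encoding (2 standing for "*") is the same as in the listing.

using System;

static class TermOps
{
    const byte Star = 2; // same role as cStarSymb in the listing below

    // Stage 1 check: two terms glue iff they differ in exactly one non-'*' position.
    // Returns the glued term with '*' at that position, or null if gluing is impossible.
    static byte[] TryGlue(byte[] a, byte[] b)
    {
        int diffPos = -1;
        for (int i = 0; i < a.Length; i++)
        {
            if (a[i] == b[i]) continue;
            if (diffPos >= 0 || a[i] == Star || b[i] == Star) return null; // more than one difference
            diffPos = i;
        }
        if (diffPos < 0) return null; // identical terms do not glue
        byte[] glued = (byte[])a.Clone();
        glued[diffPos] = Star;
        return glued;
    }

    // Stage 2 check: a glued term covers an input term iff they match
    // in every position where the glued term has no '*'.
    static bool Covers(byte[] glued, byte[] input)
    {
        for (int i = 0; i < glued.Length; i++)
            if (glued[i] != Star && glued[i] != input[i]) return false;
        return true;
    }

    static void Main()
    {
        byte[] t1 = { 1, 0, 1, 0, 1, 1, 0, 0 }; // 10101100
        byte[] t2 = { 1, 0, 0, 0, 1, 1, 0, 0 }; // 10001100
        byte[] g = TryGlue(t1, t2);             // differ only in position 2 -> 10*01100
        Console.WriteLine(Covers(g, t1));       // True
        Console.WriteLine(Covers(g, new byte[] { 0, 0, 1, 0, 1, 1, 0, 0 })); // False: first position differs
    }
}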


The implementation code is as follows:
using System;
using System.Collections.Generic;
using System.Linq;

/// <summary>
/// Base class of a logical function
/// </summary>
public abstract class LogicFunction
{
    // Symbol of a glued (excluded) variable
    public const byte cStarSymb = 2;

    // Terms of the function
    public readonly ICollection<byte[]> Terms = new LinkedList<byte[]>();

    // Evaluate the function on a set of arguments
    public abstract bool Calculate(bool[] X);

    // Evaluate the function on a set of arguments
    public virtual bool Calculate(char[] X)
    {
        return Calculate(X.Select(p => p != '0').ToArray());
    }

    // Evaluate the function on a set of arguments
    public virtual bool Calculate(byte[] X)
    {
        return Calculate(X.Select(p => p != 0).ToArray());
    }
}

/// <summary>
/// Disjunctive normal form
/// </summary>
public class Dnf : LogicFunction
{
    // Check whether the term covers the input set X
    public static bool Calculate(byte[] X, byte[] term)
    {
        bool bResult = true;
        for (int i = 0; i < term.Length; i++)
        {
            if ((term[i] == cStarSymb) || (term[i] == X[i])) continue;
            bResult = false;
            break;
        }
        return bResult;
    }

    public override bool Calculate(bool[] X)
    {
        bool bResult = false;
        foreach (byte[] term in Terms)
        {
            bool TermVal = true;
            for (int i = 0; i < term.Length; i++)
            {
                if (term[i] == cStarSymb) continue;
                TermVal &= (term[i] != 0 ? X[i] : !X[i]);
            }
            bResult |= TermVal;
        }
        return bResult;
    }
}

/// <summary>
/// Conjunctive normal form
/// </summary>
public class Knf : LogicFunction
{
    public override bool Calculate(bool[] X)
    {
        bool bResult = true;
        foreach (byte[] term in Terms)
        {
            bool TermVal = false;
            for (int i = 0; i < term.Length; i++)
            {
                if (term[i] == cStarSymb) continue;
                TermVal |= (term[i] != 0 ? X[i] : !X[i]);
            }
            bResult &= TermVal;
        }
        return bResult;
    }
}

/// <summary>
/// Ternary tree of terms
/// </summary>
public class TreeFuncTerm
{
    // Root node of the tree
    private readonly TreeNodeMiddle rootNode = new TreeNodeMiddle();

    // Rank (length) of the stored terms
    private int _rang = 0;
    public int Rang { get { return _rang; } }

    // Current position of the leaf enumeration
    private int enumerationPos = 0;
    private TreeNodeMiddle[] enumerationBuf;

    // Term corresponding to the current enumeration position
    private byte[] enumerationTerm;
    public byte[] EnumerationTerm { get { return enumerationTerm; } }

    // Number of terms stored in the tree
    private UInt32 _count = 0;
    public UInt32 Count { get { return _count; } }

    // Constructor
    public TreeFuncTerm() { Clear(); }

    // Clear the tree
    public void Clear()
    {
        _count = 0;
        _rang = 0;
        enumerationPos = 0;
        enumerationBuf = null;
        enumerationTerm = null;
        rootNode.Clear();
    }

    // Initialize the enumeration of the leaves and return the first one
    public TreeNodeEnd EnumerationInit()
    {
        enumerationPos = 0;
        enumerationTerm = new byte[_rang];
        enumerationTerm[0] = 0;
        enumerationBuf = new TreeNodeMiddle[_rang];
        enumerationBuf[0] = rootNode;
        // Return the first leaf of the tree
        return EnumerationNextNode();
    }

    // Return the next leaf of the tree
    public TreeNodeEnd EnumerationNextNode()
    {
        int iIsNext = (enumerationPos > 0 ? 1 : 0);
        TreeNodeEnd pRetTreeNode = null;
        while ((pRetTreeNode == null) && (enumerationPos >= 0))
        {
            TreeNodeBase[] pCurrNodes = enumerationBuf[enumerationPos].Childs;
            TreeNodeBase pNextNode = null;
            int i = enumerationTerm[enumerationPos] + iIsNext;
            for (; i < 3; i++) if ((pNextNode = pCurrNodes[i]) != null) break;
            if (pNextNode == null)
            {
                // Go back up one level
                enumerationPos--;
                iIsNext = 1;
            }
            else
            {
                enumerationTerm[enumerationPos] = (byte)i;
                if (pNextNode is TreeNodeMiddle)
                {
                    // Go down one level
                    enumerationPos++;
                    enumerationBuf[enumerationPos] = (TreeNodeMiddle)pNextNode;
                    enumerationTerm[enumerationPos] = 0;
                    iIsNext = 0;
                }
                else //if (pNextNode is TreeNodeEnd)
                {
                    // A leaf has been found
                    pRetTreeNode = (TreeNodeEnd)pNextNode;
                }
            }
        }
        return pRetTreeNode;
    }

    // Add a term into the tree
    public TreeNodeEnd AddTerm(byte[] term)
    {
        _rang = Math.Max(_rang, term.Length);
        TreeNodeBase pCurrNode = rootNode;
        for (int j = 0; j < term.Length; j++)
        {
            TreeNodeBase item = ((TreeNodeMiddle)pCurrNode).Childs[term[j]];
            if (item == null)
            {
                if (j + 1 < term.Length)
                {
                    item = new TreeNodeMiddle();
                }
                else
                {
                    item = new TreeNodeEnd();
                    _count++;
                }
                ((TreeNodeMiddle)pCurrNode).Childs[term[j]] = item;
            }
            pCurrNode = item;
        }
        return (TreeNodeEnd)pCurrNode;
    }

    // Remove a term from the tree
    public TreeNodeEnd Remove(byte[] term)
    {
        TreeNodeEnd pRemovedNode = null;
        TreeNodeMiddle pCurrNode = rootNode;
        foreach (byte cSymb in term)
        {
            TreeNodeBase pNextNode = pCurrNode.Childs[cSymb];
            if (pNextNode == null) break;
            if (pNextNode is TreeNodeMiddle)
            {
                pCurrNode = (TreeNodeMiddle)pNextNode;
            }
            else
            {
                // The leaf to be removed has been found
                pRemovedNode = (TreeNodeEnd)pNextNode;
                // Detach it from the tree
                pCurrNode.Childs[cSymb] = null;
                // Decrease the terms counter
                _count--;
                // Stop the search
                break;
            }
        }
        return pRemovedNode;
    }

    // Check whether the tree contains the given term
    public TreeNodeEnd IsContains(byte[] term)
    {
        TreeNodeBase pCurrNode = rootNode;
        foreach (byte cSymb in term)
        {
            pCurrNode = ((TreeNodeMiddle)pCurrNode).Childs[cSymb];
            if (pCurrNode == null) break;
        }
        return ((pCurrNode != null) && (pCurrNode is TreeNodeEnd) ?
            (TreeNodeEnd)pCurrNode : null);
    }

    // Find all terms that differ from the given one in exactly one position
    public int SearchDiff1(byte[] term, TreeNodeBase[] pOneDiffNodesList)
    {
        int iOneDiffNodesListCount = 0;
        TreeNodeBase pCurrNode = rootNode;
        for (int iPos = 0; iPos < term.Length; iPos++)
        {
            pOneDiffNodesList[iPos] = null;
            byte cSymbol = term[iPos];
            if (pCurrNode != null)
            {
                if (cSymbol != LogicFunction.cStarSymb)
                {
                    TreeNodeBase item = ((TreeNodeMiddle)pCurrNode).Childs[1 - cSymbol];
                    if (item != null)
                    {
                        // A candidate differing in this position has been found
                        pOneDiffNodesList[iPos] = item;
                        iOneDiffNodesListCount++;
                    }
                }
                pCurrNode = ((TreeNodeMiddle)pCurrNode).Childs[cSymbol];
            }
            else if (iOneDiffNodesListCount == 0)
            {
                // No full path and no candidates are left: clear the rest of the list and stop
                for (int i = iPos + 1; i < term.Length; i++) pOneDiffNodesList[i] = null;
                break;
            }
            // Advance every earlier candidate along the current symbol of the term,
            // dropping those that diverge in a second position
            for (int iKey = 0; iKey < iPos; iKey++)
            {
                TreeNodeBase item = pOneDiffNodesList[iKey];
                if (item == null) continue;
                item = ((TreeNodeMiddle)item).Childs[cSymbol];
                if (item == null)
                {
                    // The candidate differs in a second position - drop it
                    pOneDiffNodesList[iKey] = null;
                    iOneDiffNodesListCount--;
                }
                else
                {
                    pOneDiffNodesList[iKey] = item;
                }
            }
        }
        return iOneDiffNodesListCount;
    }
}

/// <summary>
/// Common interface of the tree nodes
/// </summary>
public interface TreeNodeBase
{
    // Clear the node contents
    void Clear();
}

/// <summary>
/// Terminal (leaf) node of the tree
/// </summary>
public class TreeNodeEnd : TreeNodeBase
{
    // Nothing to clear
    public void Clear() { }
}

/// <summary>
/// Intermediate node of the tree
/// </summary>
public class TreeNodeMiddle : TreeNodeBase
{
    // Child nodes
    public readonly TreeNodeBase[] Childs = new TreeNodeBase[3];

    // Clear the node contents
    public void Clear()
    {
        for (int i = 0; i < 3; i++)
        {
            if ((Childs[i] != null) && (Childs[i] is TreeNodeMiddle)) ((TreeNodeMiddle)Childs[i]).Clear();
            Childs[i] = null;
        }
    }
}

/// <summary>
/// Minimization of a logical function by the Quine-McCluskey method
/// </summary>
public class Quine_McCluskey
{
    private readonly Dnf _result = new Dnf();
    public Dnf Result { get { return _result; } }

    // Gluing of the terms stored in a TreeFuncTerm
    private static void Skleivanie(TreeFuncTerm X1Tree, TreeFuncTerm X2Tree, LinkedList<byte[]> NegTerms, Dictionary<int, LinkedList<byte[]>> OutResult, TreeFuncTerm NegativTree, int iLevel)
    {
        LinkedList<byte[]> OutR = new LinkedList<byte[]>();
        if (OutResult != null) OutResult.Add(iLevel, OutR);
        bool IsVirtSkleivOn = ((NegativTree != null) && (NegativTree.Count != 0));
        TreeNodeEnd x1 = X1Tree.EnumerationInit();
        TreeNodeBase[] FindTerms = new TreeNodeBase[x1 != null ? X1Tree.Rang : 1];
        TreeNodeBase[] FindNegTerms = new TreeNodeBase[x1 != null ? X1Tree.Rang : 1];
        TreeNodeBase[] FindVirtTerms = new TreeNodeBase[x1 != null ? X1Tree.Rang : 1]; // reserved buffer (not used below)
        while (x1 != null)
        {
            bool bIsSkleiv = false;
            byte[] pCurrTerm = X1Tree.EnumerationTerm;
            X1Tree.SearchDiff1(pCurrTerm, FindTerms);
            if (IsVirtSkleivOn) NegativTree.SearchDiff1(pCurrTerm, FindNegTerms);
            for (int iPos = 0; iPos < pCurrTerm.Length; iPos++)
            {
                byte cSymbSav = pCurrTerm[iPos];
                if (cSymbSav == LogicFunction.cStarSymb) continue;
                // Gluing either with a real term of the same list,
                // or with a "virtual" term that is absent from NegativTree
                if (FindTerms[iPos] != null)
                {
                    bIsSkleiv = true;
                    if (cSymbSav == 0)
                    {
                        pCurrTerm[iPos] = LogicFunction.cStarSymb; // gluing
                        X2Tree.AddTerm(pCurrTerm);
                        pCurrTerm[iPos] = cSymbSav;
                    }
                }
                else if (IsVirtSkleivOn && (FindNegTerms[iPos] == null))
                {
                    pCurrTerm[iPos] = LogicFunction.cStarSymb; // gluing
                    bool bIsNotCanAdd = false;
                    foreach (byte[] NegTerm in NegTerms)
                    {
                        if (bIsNotCanAdd = Dnf.Calculate(NegTerm, pCurrTerm)) break;
                    }
                    if (!bIsNotCanAdd)
                    {
                        bIsSkleiv = true;
                        X2Tree.AddTerm(pCurrTerm);
                    }
                    pCurrTerm[iPos] = cSymbSav;
                }
            }
            // If the term was not glued with anything, pass it to the output
            if (!bIsSkleiv) OutR.AddLast(pCurrTerm.ToArray());
            // Take the next term
            x1 = X1Tree.EnumerationNextNode();
        }
    }

    // Compute the ternary code of a term
    private static UInt64 GetTermCode(byte[] pTerm)
    {
        UInt64 iMultip = 1, iCode = 0;
        for (int i = 0; i < pTerm.Length; i++)
        {
            iCode += (iMultip * pTerm[i]);
            iMultip *= 3;
        }
        return iCode;
    }

    // Restore a term from its ternary code
    private static byte[] GetTermByCode(UInt64 iCode, int iTermLength)
    {
        byte[] pTerm = new byte[iTermLength];
        int iCounter = 0;
        while (iCode != 0)
        {
            pTerm[iCounter++] = (byte)(iCode % 3);
            iCode /= 3;
        }
        return pTerm;
    }

    // Gluing of the terms stored as ternary codes in a SortedSet
    private static void Skleivanie(SortedSet<UInt64> X1Tree, SortedSet<UInt64> X2Tree, LinkedList<byte[]> NegTerms, Dictionary<int, LinkedList<byte[]>> OutResult, SortedSet<UInt64> NegativTree, int iLevel, int iTermLength)
    {
        LinkedList<byte[]> OutR = new LinkedList<byte[]>();
        if (OutResult != null) OutResult.Add(iLevel, OutR);
        bool IsVirtSkleivOn = ((NegativTree != null) && (NegativTree.Count != 0));
        foreach (UInt64 x1 in X1Tree)
        {
            byte[] pCurrTerm = (IsVirtSkleivOn ? GetTermByCode(x1, iTermLength) : null);
            bool bIsSkleiv = false;
            UInt64 iMultip = 1;
            for (int iPos = 0; iPos < iTermLength; iPos++)
            {
                byte cSymbSav = (pCurrTerm != null ? pCurrTerm[iPos] : (byte)((x1 / iMultip) % 3));
                if (cSymbSav != LogicFunction.cStarSymb)
                {
                    UInt64 iCode = (cSymbSav == 0 ? x1 + iMultip : x1 - iMultip);
                    // Gluing either with a real term of the same list,
                    // or with a "virtual" term that is absent from NegativTree
                    if (X1Tree.Contains(iCode))
                    {
                        bIsSkleiv = true;
                        if (cSymbSav == 0)
                        {
                            X2Tree.Add(x1 + (byte)(LogicFunction.cStarSymb - cSymbSav) * iMultip);
                        }
                    }
                    else if (IsVirtSkleivOn && !NegativTree.Contains(iCode))
                    {
                        bool bIsNotCanAdd = false;
                        pCurrTerm[iPos] = LogicFunction.cStarSymb; // gluing
                        foreach (byte[] NegTerm in NegTerms)
                        {
                            if (bIsNotCanAdd = Dnf.Calculate(NegTerm, pCurrTerm)) break;
                        }
                        pCurrTerm[iPos] = cSymbSav;
                        if (!bIsNotCanAdd)
                        {
                            bIsSkleiv = true;
                            X2Tree.Add(x1 + (byte)(LogicFunction.cStarSymb - cSymbSav) * iMultip);
                        }
                    }
                }
                iMultip *= 3;
            }
            // If the term was not glued with anything, pass it to the output
            if (!bIsSkleiv) OutR.AddLast(pCurrTerm != null ? pCurrTerm : GetTermByCode(x1, iTermLength));
        }
    }

    // Remove duplicates from the input list of terms,
    // storing the unique ones as ternary codes in a SortedSet
    private static void DeleteDublicatingTerms(IEnumerable<byte[]> InX1, SortedSet<UInt64> OutX2Tree)
    {
        OutX2Tree.Clear();
        foreach (byte[] x1 in InX1)
        {
            UInt64 iCode = GetTermCode(x1);
            if (OutX2Tree.Contains(iCode)) continue;
            OutX2Tree.Add(iCode);
        }
    }

    // Remove duplicates from the input list of terms,
    // storing the unique ones in a TreeFuncTerm
    private static void DeleteDublicatingTerms(IEnumerable<byte[]> InX1, TreeFuncTerm OutX2Tree)
    {
        OutX2Tree.Clear();
        foreach (byte[] x1 in InX1) OutX2Tree.AddTerm(x1);
    }

    // Compare two terms for equality
    private static bool IsEqualTerms(byte[] pTermC, byte[] pTermB)
    {
        if ((pTermC == null) || (pTermB == null) || (pTermC.Length != pTermB.Length)) return false;
        bool bIsEqual = false;
        int iLength = Math.Min(pTermC.Length, pTermB.Length);
        for (int i = 0; i < iLength; i++)
        {
            if (!(bIsEqual = (pTermB[i] == pTermC[i]))) break;
        }
        return bIsEqual;
    }

    // Drop the redundant terms by building an (approximately) shortest coverage of the input set.
    private static void ReduceRedundancyTerms(LinkedList<byte[]> InpTerms, LinkedList<byte[]> NegTerms, Dictionary<int, LinkedList<byte[]>> OutputTerms, ICollection<byte[]> ResultTerms)
    {
        // Clear the result
        ResultTerms.Clear();
        // Input terms already covered by the selected output terms
        HashSet<byte[]> pNumbersForAdd = new HashSet<byte[]>();
        // Mapping from an input term to the output terms that cover it
        Dictionary<byte[], HashSet<byte[]>> Numbers2Terms = new Dictionary<byte[], HashSet<byte[]>>();
        // Mapping from an output term to the input terms it covers
        Dictionary<byte[], HashSet<byte[]>> Terms2Numbers = new Dictionary<byte[], HashSet<byte[]>>();
        // Walk the levels starting from the most glued terms
        foreach (int iLevel in OutputTerms.Keys.OrderByDescending(p => p).AsEnumerable())
        {
            // Walk the output terms of the level
            foreach (byte[] term in OutputTerms[iLevel])
            {
                // Collect the input terms covered by this term
                HashSet<byte[]> InTermsCont = new HashSet<byte[]>();
                // Positive input terms covered by the term
                foreach (byte[] InpTerm in InpTerms)
                {
                    if (Dnf.Calculate(InpTerm, term)) InTermsCont.Add(InpTerm);
                }
                // Negative input terms NOT covered by the term
                if (NegTerms != null)
                {
                    foreach (byte[] NegTerm in NegTerms)
                    {
                        if (!Dnf.Calculate(NegTerm, term)) InTermsCont.Add(NegTerm);
                    }
                }
                Terms2Numbers.Add(term, InTermsCont);
            }
            // Select the candidate terms of the level, widest coverage first
            int iTerms2NumbersCountPrev = 0;
            foreach (byte[] term in OutputTerms[iLevel].OrderByDescending(p => Terms2Numbers[p].Count))
            {
                // Input terms covered by this term
                HashSet<byte[]> InTermsCont = Terms2Numbers[term];
                int iIntersectNumbers = pNumbersForAdd.Intersect(InTermsCont).Count();
                if ((iIntersectNumbers < InTermsCont.Count) || (iTerms2NumbersCountPrev == InTermsCont.Count))
                {
                    pNumbersForAdd.UnionWith(InTermsCont);
                    iTerms2NumbersCountPrev = InTermsCont.Count;
                    foreach (byte[] pSrcNode in InTermsCont)
                    {
                        if (!Numbers2Terms.ContainsKey(pSrcNode)) Numbers2Terms.Add(pSrcNode, new HashSet<byte[]>());
                        Numbers2Terms[pSrcNode].Add(term);
                    }
                }
            }
        }
        // Build the coverage: take first the term that covers the hardest-to-cover inputs
        while (pNumbersForAdd.Count > 0)
        {
            byte[] term = Numbers2Terms[pNumbersForAdd.OrderBy(p => Numbers2Terms[p].Count).First()].OrderByDescending(q => pNumbersForAdd.Intersect(Terms2Numbers[q]).Count()).First();
            ResultTerms.Add(term);
            pNumbersForAdd.ExceptWith(Terms2Numbers[term]);
        }
    }

    // Entry point of the minimization
    public static void LogicFuncMinimize(IEnumerable<byte[]> PositivTerms, IEnumerable<byte[]> NegativTerms, ICollection<byte[]> OutR)
    {
        int iTotalLevels = (PositivTerms.Count() > 0 ? PositivTerms.First().Length : (NegativTerms != null && NegativTerms.Count() > 0 ? NegativTerms.First().Length : 0));
        Dictionary<int, LinkedList<byte[]>> SkleivTerms = new Dictionary<int, LinkedList<byte[]>>(iTotalLevels);
        LinkedList<byte[]> InpTerms = new LinkedList<byte[]>();
        LinkedList<byte[]> NegTerms = new LinkedList<byte[]>();
        if (iTotalLevels < 40)
        {
            // Variant based on SortedSet with 64-bit ternary codes
            SortedSet<UInt64> X1PositivTree = new SortedSet<UInt64>();
            DeleteDublicatingTerms(PositivTerms, X1PositivTree);
            SortedSet<UInt64> X1NegativTree = null;
            if (NegativTerms != null)
            {
                X1NegativTree = new SortedSet<UInt64>();
                DeleteDublicatingTerms(NegativTerms, X1NegativTree);
                // Resolve the terms present in both lists
                UInt64[] pNumbList = X1PositivTree.Intersect(X1NegativTree).ToArray();
                foreach (UInt64 iNumb in pNumbList)
                {
                    // Count the occurrences of the term in PositivTerms and NegativTerms
                    int iPos_Count = PositivTerms.Count(p => GetTermCode(p) == iNumb);
                    int iNeg_Count = NegativTerms.Count(p => GetTermCode(p) == iNumb);
                    if (iPos_Count > iNeg_Count)
                    {
                        X1NegativTree.Remove(iNumb);
                    }
                    else if (iPos_Count < iNeg_Count)
                    {
                        X1PositivTree.Remove(iNumb);
                    }
                    else //if (iPos_Count == iNeg_Count)
                    {
                        X1PositivTree.Remove(iNumb);
                        X1NegativTree.Remove(iNumb);
                    }
                }
                // Save the unique negative input terms
                foreach (UInt64 code in X1NegativTree)
                {
                    NegTerms.AddLast(GetTermByCode(code, iTotalLevels));
                }
            }
            // Save the unique positive input terms
            foreach (UInt64 code in X1PositivTree)
            {
                InpTerms.AddLast(GetTermByCode(code, iTotalLevels));
            }
            int iLevelCounter = 0;
            // Run the gluing iterations while new terms keep appearing
            while ((X1PositivTree.Count != 0) && (iLevelCounter < iTotalLevels))
            {
                SortedSet<UInt64> X2Tree = new SortedSet<UInt64>();
                Skleivanie(X1PositivTree, X2Tree, NegTerms, SkleivTerms, X1NegativTree, iLevelCounter, iTotalLevels);
                if ((X1NegativTree != null) && (X1NegativTree.Count != 0))
                {
                    SortedSet<UInt64> X2NegativTree = new SortedSet<UInt64>();
                    Skleivanie(X1NegativTree, X2NegativTree, InpTerms, null, X1PositivTree, iLevelCounter, iTotalLevels);
                    // Switch to the new negative list
                    X1NegativTree.Clear();
                    X1NegativTree = X2NegativTree;
                }
                // Switch to the new positive list
                X1PositivTree.Clear();
                X1PositivTree = X2Tree;
                iLevelCounter++;
                GC.Collect();
            }
        }
        else
        {
            // Variant based on the ternary tree TreeFuncTerm
            TreeFuncTerm X1PositivTree = new TreeFuncTerm();
            DeleteDublicatingTerms(PositivTerms, X1PositivTree);
            TreeFuncTerm X1NegativTree = null;
            if (NegativTerms != null)
            {
                X1NegativTree = new TreeFuncTerm();
                DeleteDublicatingTerms(NegativTerms, X1NegativTree);
                // Resolve the terms present in both lists
                TreeNodeEnd x1 = X1PositivTree.EnumerationInit();
                while (x1 != null)
                {
                    if (X1NegativTree.IsContains(X1PositivTree.EnumerationTerm) != null)
                    {
                        // Count the occurrences of the term in PositivTerms and NegativTerms
                        int iPos_Count = PositivTerms.Count(p => IsEqualTerms(p, X1PositivTree.EnumerationTerm));
                        int iNeg_Count = NegativTerms.Count(p => IsEqualTerms(p, X1PositivTree.EnumerationTerm));
                        if (iPos_Count > iNeg_Count)
                        {
                            X1NegativTree.Remove(X1PositivTree.EnumerationTerm);
                        }
                        else if (iPos_Count < iNeg_Count)
                        {
                            X1PositivTree.Remove(X1PositivTree.EnumerationTerm);
                        }
                        else //if (iPos_Count == iNeg_Count)
                        {
                            X1PositivTree.Remove(X1PositivTree.EnumerationTerm);
                            X1NegativTree.Remove(X1PositivTree.EnumerationTerm);
                        }
                    }
                    x1 = X1PositivTree.EnumerationNextNode();
                }
                // Save the unique negative input terms
                x1 = X1NegativTree.EnumerationInit();
                while (x1 != null)
                {
                    NegTerms.AddLast(X1NegativTree.EnumerationTerm.ToArray());
                    x1 = X1NegativTree.EnumerationNextNode();
                }
            }
            // Save the unique positive input terms
            TreeNodeEnd X1Term = X1PositivTree.EnumerationInit();
            while (X1Term != null)
            {
                InpTerms.AddLast(X1PositivTree.EnumerationTerm.ToArray());
                X1Term = X1PositivTree.EnumerationNextNode();
            }
            int iLevelCounter = 0;
            // Run the gluing iterations while new terms keep appearing
            while ((X1PositivTree.Count != 0) && (iLevelCounter < iTotalLevels))
            {
                TreeFuncTerm X2Tree = new TreeFuncTerm();
                Skleivanie(X1PositivTree, X2Tree, NegTerms, SkleivTerms, X1NegativTree, iLevelCounter);
                if ((X1NegativTree != null) && (X1NegativTree.Count != 0))
                {
                    TreeFuncTerm X2NegativTree = new TreeFuncTerm();
                    Skleivanie(X1NegativTree, X2NegativTree, InpTerms, null, X1PositivTree, iLevelCounter);
                    // Switch to the new negative list
                    X1NegativTree.Clear();
                    X1NegativTree = X2NegativTree;
                }
                // Switch to the new positive list
                X1PositivTree.Clear();
                X1PositivTree = X2Tree;
                iLevelCounter++;
                GC.Collect();
            }
        }
        // Drop the redundant glued terms
        ReduceRedundancyTerms(InpTerms, NegTerms, SkleivTerms, OutR);
    }

    // Start the optimization
    public void Start(IEnumerable<byte[]> TermsInput)
    {
        LogicFuncMinimize(TermsInput, null, _result.Terms);
    }

    // Start the optimization
    public void Start(IEnumerable<byte[]> TermsInput, IEnumerable<byte[]> NegativTerms)
    {
        LogicFuncMinimize(TermsInput, NegativTerms, _result.Terms);
    }

    // Start the optimization
    public void Start(IEnumerable<char[]> TermsInput)
    {
        Start(TermsInput.Select(t => t.Select(p => (byte)(p == '0' ? 0 : 1)).ToArray()));
    }

    // Start the optimization
    public void Start(IEnumerable<char[]> TermsInput, IEnumerable<char[]> NegativTerms)
    {
        Start(TermsInput.Select(t => t.Select(p => (byte)(p == '0' ? 0 : 1)).ToArray()),
              NegativTerms.Select(t => t.Select(p => (byte)(p == '0' ? 0 : 1)).ToArray()));
    }

    // Print the result
    public void PrintResult()
    {
        Console.WriteLine("--------Result-------");
        char[] pTermSymbs = new char[] { '0', '1', '*' };
        foreach (byte[] Term in _result.Terms)
        {
            for (int j = 0; j < Term.Length; j++)
            {
                Console.Write(pTermSymbs[Term[j]].ToString() + " ");
            }
            Console.WriteLine();
        }
    }
}



The Quine_McCluskey class is an implementation of this algorithm that uses other classes and interfaces: Dnf, TreeNodeBase, TreeNodeMiddle, TreeNodeEnd, TreeFuncTerm. To start the optimization, you need to call one of the overloaded Start methods, which calls the LogicFuncMinimize function, where the minimization algorithm itself is implemented. The minimization mechanism is implemented in two versions:
• using a .NET SortedSet container for storing and searching terms.
• without using .NET containers based on the TreeFuncTerm ternary tree.

In terms of speed, these two variants are roughly equal (the variant with .NET containers is perhaps a little faster, but not always), but the need to implement TreeFuncTerm is due to several factors:
• The first variant, based on 64-bit integer hash codes and lookups in the SortedSet .NET container, works correctly only when the number of input variables in the terms is at most 40; beyond that, the codes overflow the 64-bit grid of the integer hash code used by the container. Indeed, since the glued terms inside the algorithm use ternary logic, with 41 input variables the maximum hash code value 3^41 − 1 already exceeds 2^64 − 1, the maximum value that can be written into a 64-bit variable (a quick check of this bound is sketched after this list). With more variables, the variant based on the author's ternary search tree TreeFuncTerm is used.
• The implementation based on .NET containers needed to be verified against another, independent implementation that is free of them.
• An option free of .NET containers is simply needed as such, since it can easily be ported to platforms where the .NET runtime is unavailable (for example, microcontrollers, FPGAs, etc.).
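As a quick check of the bound from the first point, one can compute the largest term length for which every ternary code still fits into 64 bits. A small sketch (illustrative code, not part of the article's listing):

using System;

class RangLimit
{
    static void Main()
    {
        // Largest R such that 3^R - 1 (the maximum ternary code of an R-variable term)
        // still fits into an unsigned 64-bit integer.
        ulong power = 1;
        int rang = 0;
        while (power <= ulong.MaxValue / 3)
        {
            power *= 3;
            rang++;
        }
        Console.WriteLine(rang); // 40: 3^40 < 2^64 <= 3^41, hence the SortedSet variant is limited to about 40 variables
    }
}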
The TreeFuncTerm search tree is built on links between objects of the TreeNodeMiddle and TreeNodeEnd classes, which implement the TreeNodeBase interface. The TreeNodeMiddle class is an intermediate node of the tree, and TreeNodeEnd is a terminal leaf. The EnumerationInit() and EnumerationNextNode() functions provide a non-recursive mechanism for iterating over all TreeNodeEnd leaves of the tree. EnumerationInit() initializes the iteration and returns the first available leaf; EnumerationNextNode() returns the next leaf, or NULL if there are no more leaves to enumerate. The auxiliary internal structure EnumerationTerm, which reflects the position of the search "cursor" inside the tree, is at the same time the code of the found leaf's term in the ternary alphabet {0,1,2}. Note that the order in which leaves are retrieved from the tree does not coincide with the order in which they were added.
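For illustration, a minimal usage fragment of the tree, assuming the TreeFuncTerm and TreeNodeEnd classes from the listing above (the terms and the printed order are only an example):

// Fragment; assumes the classes from the listing above are available.
TreeFuncTerm tree = new TreeFuncTerm();
tree.AddTerm(new byte[] { 1, 0, 2, 0 }); // 10*0
tree.AddTerm(new byte[] { 1, 1, 0, 0 }); // 1100
tree.AddTerm(new byte[] { 0, 1, 0, 1 }); // 0101

// Enumerate all leaves; EnumerationTerm holds the ternary code of the current leaf.
TreeNodeEnd leaf = tree.EnumerationInit();
while (leaf != null)
{
    Console.WriteLine(string.Join("", tree.EnumerationTerm)); // e.g. 0101, 1020, 1100 - not the insertion order
    leaf = tree.EnumerationNextNode();
}
Console.WriteLine(tree.Count); // 3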

Functionally, the algorithm can be divided into three stages.
  1. Training. To solve the problem stated above without brute-forcing the completions, the LogicFuncMinimize function in this implementation receives two initial data sets, PositivTerms and NegativTerms, on which the optimized function takes true (TRUE, 1) and false (FALSE, 0) values, respectively. First of all, these lists are checked for consistency of the source data: each data set must be guaranteed to contain only unique terms, each present in only one of the lists. To ensure this, every unique input term is examined and the number of its occurrences in each source list is counted. If a term is found in both lists, it is kept only in the list where it occurs more often and removed from the other. If a term occurs equally often in both lists, it is removed from both, which guarantees uniqueness.
  2. Gluing. Next comes the iterative cycle of gluing the input terms. At each iteration, one more "*" sign of a glued position is added to the glued terms, so the number of iterations cannot exceed the number of variables N. In contrast to the previous implementation, the Skleivanie function can glue an input term not only with terms from its own list but also, when that list contains no term differing in exactly one position, with so-called "virtual" terms. A "virtual" term is an artificially introduced term that occurs in neither of the term lists of the current level. Such gluing is allowed only if the resulting glued term does not cover a single term of the original set of the opposite list. For example, if the positive list contains 1100 but neither list contains 1000, the glued term 1*00 can still be produced, provided 1*00 covers no term of the negative list.
    The Skleivanie function is called twice per iteration: in the first call the PositivTerms and NegativTerms lists are used according to their actual content, while in the second call they swap roles, i.e. the PositivTerms argument receives the negative terms and the NegativTerms argument receives the positive ones:
    Skleivanie(X1PositivTree, ..., X1NegativTree, ..., SkleivTerms, ...);
    Skleivanie(X1NegativTree, ..., X1PositivTree, ..., null, ...);
    Thus, the terms of the two lists are glued simultaneously and interdependently.
    If for a term there is no other real or virtual term that differs from it in exactly one position, i.e. the term cannot be glued with anything, it is considered one of the results of stage 1 of the method, excluded from further gluing, and passed to the input of stage 2 of the method, implemented in the ReduceRedundancyTerms procedure. Non-glued terms reach the output of the algorithm only from the call of Skleivanie in which the usage of the PositivTerms and NegativTerms lists matches their actual content, that is, from the first call.
  3. Reduction. Discarding redundant glued terms in ReduceRedundancyTerms is done with an approximate algorithm for the problem of covering the original set with sets of variable length. A coverage close to the shortest is produced by an algorithm that transforms the covering table (CT) using the "minimum column / maximum row" method (described, for example, here: http://www.studfiles.ru/preview/5175815/page ).
    The approximate logic of its work is as follows:
    0. The source table is taken as the current transformed CT; the set of covering rows is empty;
    1. In the current table, the column with the fewest ones is selected. Among the rows containing a one in this column, the one with the most ones is chosen. This row is included in the coverage, and the current table is reduced by crossing out all columns in which the selected row has a one;
    2. If the table still has columns that are not crossed out, step 1 is repeated; otherwise the coverage has been constructed. Note: when counting the ones in a row, only the ones in columns that are not crossed out are considered.
    This algorithm works fairly quickly and gives a result close to optimal; a compact sketch of the heuristic is given below.
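A standalone sketch of this "minimum column / maximum row" heuristic over an explicit boolean covering table might look as follows (illustrative code; the actual ReduceRedundancyTerms works on hash sets of terms rather than an explicit table):

using System;
using System.Collections.Generic;
using System.Linq;

static class CoverHeuristic
{
    // table[r, c] == true means candidate term r covers input term c.
    // Assumes every column can be covered by at least one row.
    public static List<int> SelectCover(bool[,] table)
    {
        int rows = table.GetLength(0), cols = table.GetLength(1);
        var uncovered = new HashSet<int>(Enumerable.Range(0, cols));
        var cover = new List<int>();
        while (uncovered.Count > 0)
        {
            // Step 1a: the uncovered column with the fewest ones.
            int col = uncovered.OrderBy(c => Enumerable.Range(0, rows).Count(r => table[r, c])).First();
            // Step 1b: among the rows with a one in that column, the row with the most ones,
            // counted only over the columns not yet crossed out.
            int row = Enumerable.Range(0, rows).Where(r => table[r, col])
                      .OrderByDescending(r => uncovered.Count(c => table[r, c])).First();
            cover.Add(row);
            // Step 1c: cross out every column covered by the selected row.
            uncovered.RemoveWhere(c => table[row, c]);
        }
        return cover;
    }
}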
    To test the operation of the algorithm, the test function TestQuineMcCluskeyRandomPart is proposed; out of the full set of 2^N possible terms it randomly selects only the specified fraction 0 < dPart <= 1 (a parameter of the function), for which the optimization is then performed. With dPart < 1 a truncated set of input terms is obtained, and with dPart = 1 the complete set of initial data is used.

TestQuineMcCluskeyRandomPart:
public static void TestQuineMcCluskeyRandomPart(int iVariableAmount, double dPart = 1)
{
    if (dPart < 0) throw new ArgumentException("The dPart parameter must be between 0 and 1");
    if (dPart > 1) dPart = 1;
    // Generate the input data sets
    ulong iTotalCombines = (ulong)1 << iVariableAmount;
    LinkedList<byte[]> pTrueCol = new LinkedList<byte[]>();
    LinkedList<byte[]> pFalseCol = new LinkedList<byte[]>();
    HashSet<ulong> pUsedTerms = new HashSet<ulong>();
    Random rnd = new Random();
    byte[] buf = new byte[8];
    while (pUsedTerms.LongCount() < (iTotalCombines * dPart))
    {
        rnd.NextBytes(buf);
        ulong iCurValue = (ulong)BitConverter.ToInt64(buf, 0) % iTotalCombines;
        if (pUsedTerms.Contains(iCurValue))
        {
            // The term is already used - take the next free one
            do
            {
                iCurValue = ++iCurValue % iTotalCombines;
            } while (pUsedTerms.Contains(iCurValue));
        }
        pUsedTerms.Add(iCurValue);
        byte[] sLine = new byte[iVariableAmount];
        for (int i = 0; i < iVariableAmount; i++)
        {
            sLine[i] += (byte)(iCurValue % 2);
            iCurValue >>= 1;
        }
        if (rnd.Next(2) != 0)
        {
            pTrueCol.AddLast(sLine);
        }
        else
        {
            pFalseCol.AddLast(sLine);
        }
    }
    // Run the minimization
    DateTime DtStart = DateTime.Now;
    Console.WriteLine("Start time - " + DtStart.ToLongTimeString());
    Quine_McCluskey Logic = new Quine_McCluskey();
    Logic.Start(pTrueCol, pFalseCol);
    DateTime DtEnd = DateTime.Now;
    Logic.PrintResult();
    Console.WriteLine("Start time - " + DtStart.ToLongTimeString());
    Console.WriteLine("End time - " + DtEnd.ToLongTimeString());
    TimeSpan Elapsed = DtEnd - DtStart;
    Console.WriteLine("Duration - " + String.Format("{0:00}:{1:00}:{2:00}", Elapsed.Hours, Elapsed.Minutes, Elapsed.Seconds));
    // Check the result
    int iErrorsCounter = 0;
    foreach (byte[] kvp in pTrueCol)
    {
        if (Logic.Result.Calculate(kvp) != true) iErrorsCounter++;
    }
    foreach (byte[] kvp in pFalseCol)
    {
        if (Logic.Result.Calculate(kvp) != false) iErrorsCounter++;
    }
    Console.WriteLine("Number of input terms = " + pUsedTerms.Count);
    Console.WriteLine("Number of terms in the result = " + Logic.Result.Terms.Count);
    Console.WriteLine("Number of errors = " + iErrorsCounter);
    Console.ReadLine();
}



As a result of running the test function, the number of terms in the resulting minimal disjunctive normal form and the number of errors in covering the initial set of terms are computed and printed.
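For reference, the minimizer can also be driven directly with hand-written term lists; a hypothetical call might look like this (the four-variable terms here are arbitrary):

// Direct use with explicit term lists:
var logic = new Quine_McCluskey();
var positive = new List<char[]> { "1100".ToCharArray(), "1101".ToCharArray() }; // f = TRUE
var negative = new List<char[]> { "0000".ToCharArray(), "0111".ToCharArray() }; // f = FALSE
logic.Start(positive, negative);
logic.PrintResult(); // prints the glued terms of the minimized DNF

// Randomized test: 10 variables, 40% of the truth table specified:
TestQuineMcCluskeyRandomPart(10, 0.4);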

In conclusion, I would like to note that in practice this implementation of the algorithm has proved to be an effective and reliable means of minimizing logical functions defined by two incomplete sets of terms, on which the logical function takes the values TRUE and FALSE, respectively. Of course, this implementation can also be used in the classical way for a completely defined input logical function, when only one of the two lists of terms is supplied to the input. A drawback is that, at each iteration of the algorithm, the Skleivanie function must check every virtual term for coverage errors against the entire list of source terms of the opposite list, which leads to significant time costs when the number of input terms is large.

Source: https://habr.com/ru/post/424517/

