Faster Algorithm for String Comparison
[[Category:Computer Science]]
[[Category:Algorithms]]
[[Category:Accuracy]]
[[Category:Time]]
[[Category:Complexity]]
Latest revision as of 03:48, 24 December 2023
Title: Faster Algorithm for String Comparison
Research Question: How can we develop a faster algorithm for comparing strings while maintaining high accuracy?
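To make the question concrete: token-based comparison splits each field into whole tokens and scores their overlap, so near-matches between tokens earn no credit. The sketch below illustrates that baseline; the function name and scoring rule are assumptions for illustration, not the formulation from the paper or [LL+99].

```python
from collections import Counter

def token_similarity(a: str, b: str) -> float:
    """Illustrative token-based field similarity: the fraction of
    whitespace-delimited tokens shared between the two fields
    (multiset overlap, case-insensitive)."""
    ta, tb = Counter(a.lower().split()), Counter(b.lower().split())
    shared = sum((ta & tb).values())           # tokens present in both fields
    total = max(sum(ta.values()), sum(tb.values()))
    return shared / total if total else 1.0

print(token_similarity("IBM Corporation", "IBM Corp"))  # 0.5: 'Corp' earns no credit
```

Note how "Corporation" and "Corp" count as completely different tokens, which is the kind of lost accuracy a substring-level method can recover.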
Methodology: The researchers proposed a package of new substring-based algorithms for determining Field Similarity. These algorithms are designed to improve upon the token-based approach proposed in [LL+99], aiming for higher accuracy and better time complexity.
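The paper's actual algorithms are not reproduced in this summary, so as a rough illustration of the substring idea (the names and the Dice-style scoring below are assumptions, not the paper's definitions), similarity can be computed over character n-grams instead of whole tokens, giving partial credit to near-matching fields:

```python
from collections import Counter

def ngrams(s: str, n: int = 2) -> list[str]:
    """All length-n substrings (character n-grams) of s."""
    return [s[i:i + n] for i in range(len(s) - n + 1)]

def substring_similarity(a: str, b: str, n: int = 2) -> float:
    """Illustrative substring-based similarity: Dice coefficient over
    character bigrams. Unlike whole-token overlap, it rewards partial
    matches such as 'Corp' vs 'Corporation'."""
    ga, gb = Counter(ngrams(a.lower(), n)), Counter(ngrams(b.lower(), n))
    shared = sum((ga & gb).values())          # bigrams common to both fields
    total = sum(ga.values()) + sum(gb.values())
    return 2 * shared / total if total else 1.0

print(round(substring_similarity("IBM Corporation", "IBM Corp"), 3))  # 0.667
```

On "IBM Corporation" vs "IBM Corp" this scores about 0.667 where a whole-token overlap scores 0.5, showing how substring-level matching recovers accuracy on near-duplicate fields.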
Results: The researchers found that their algorithms outperformed the existing token-based approach in both accuracy and time complexity. The time complexity of their algorithms was O(knm) in the worst case, O(β·n) in the average case, and O(1) in the best case. Experimental results showed that the algorithms significantly improved both the accuracy and the running time of the Field Similarity calculation.
Implications: The development of these new algorithms has important implications for fields such as computational biology, pattern recognition, and data cleaning. The improved time complexity and accuracy make these algorithms more efficient and reliable for tasks that require string comparison.
Link to Article: https://arxiv.org/abs/0112022v2
Authors:
arXiv ID: 0112022v2