What Is Memoization in Python: A Comprehensive Guide


    The term memoization gets its name from the Latin word memorandum, meaning "to be remembered". Donald Michie, a British researcher in AI, introduced the term in 1968. It may look like a misspelling of the word memorization, but it refers to recording a function's result so it can be looked up later instead of recomputed. Above all, it is often a crucial technique for solving problems with dynamic programming.



    Definition of Memoization 

    Memoization is a software optimization technique used to speed up programs. It lets you optimize a Python function by caching its output based on the supplied input parameters. Memoization ensures that a function runs for the same input only once, keeping a record of the output for each set of inputs in a hash map (a Python dictionary). That means, when you memoize a function, it will compute the output only once for every set of parameters it is called with.
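    A minimal sketch of the idea, assuming a hypothetical slow_square() helper whose results we record in a plain dictionary:

    import time

    results = {}   # the hash map of input -> output

    def slow_square(x):
        if x not in results:          # compute only on the first request
            time.sleep(1)             # stand-in for an expensive computation
            results[x] = x * x
        return results[x]

    print(slow_square(4))   # slow the first time
    print(slow_square(4))   # instant: the cached value is returned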

    Fibonacci sequence  

    A Fibonacci sequence is a series where each term is the sum of the preceding two terms. It is a handy example for exercising a memoization decorator, because the naive recursive solution repeats a lot of work. We begin by defining a Python function that calculates the nth Fibonacci number.

    A program to find the Fibonacci series using a recursive function:

    def fibonacci(n):
        if n < 2:
            return 1
        return fibonacci(n - 1) + fibonacci(n - 2)

    print(fibonacci(8))

    Output:

    34 

    Note: Computing the Fibonacci series with a plain recursive function takes a lot of effort and a lot of time. Calculating the nth Fibonacci number this way has an overall runtime of O(2^n): the code takes exponential time to execute, because the same subproblems are solved over and over. It seems expensive, right?
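    One rough way to see the blow-up is to count how many times the plain recursive function is invoked; fib_counted and call_count below are names introduced purely for this illustration:

    call_count = 0

    def fib_counted(n):
        global call_count
        call_count += 1
        if n < 2:
            return 1
        return fib_counted(n - 1) + fib_counted(n - 2)

    fib_counted(25)
    print(call_count)   # 242785 calls just to compute the 25th value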

    Fibonacci with Memoization 

    To speed up the program, we will define a memoize() function and wrap fibonacci() with it. Let us understand with the help of an example.

    def memoize(k):
        fast = {}
        def partner(l):
            if l not in fast:
                fast[l] = k(l)
            return fast[l]
        return partner

    def fibonacci(n):
        if n < 2:
            return 1
        return fibonacci(n - 1) + fibonacci(n - 2)

    fibonacci = memoize(fibonacci)
    print(fibonacci(25))

    Output: 

    121393 

    memoize() takes a function as an argument and uses a dictionary to store its results. In the code, the inner partner() function captures the dictionary "fast" and the wrapped function "k" in a closure, and memoize() returns partner as a reference. So the call memoize(fibonacci) returns a reference to partner(), which does the same work as fibonacci() would do on its own while saving time by reusing cached results.
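    To see the cache at work, you can wrap any expensive function the same way; slow_double below is a hypothetical helper added just for this illustration:

    def slow_double(x):
        print("computing slow_double(", x, ")")
        return x * 2

    slow_double = memoize(slow_double)   # wrap it exactly like fibonacci above
    print(slow_double(10))   # the "computing..." line appears, then 20
    print(slow_double(10))   # only 20: partner() returns the cached value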

    Memoization with Function Decorators 

    Python provides decorators, which let programmers add functionality to pre-existing code. They make it possible to implement the memoization algorithm seamlessly, with no hassle. The above code can be written in a more elegant way using a decorator.

    Let’s Write a Memoization Decorator from Scratch   

    Let’s rewrite the above code by replacing fibonacci = memoize(fibonacci) with a decorator. We decorate our fibonacci function using @memoize.

    def memoize(f): 
        fast = {}  
        def partner(l): 
            if l not in fast: 
                fast[l] = f(l) 
            return fast[l]  
        return partner  
    @memoize 
    def fibonacci(n): 
        if (n < 2): 
            return 1  
        return (fibonacci(n - 1) + fibonacci(n - 2))  
    print(fibonacci(25))

    Output:  

    121393 
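    One practical detail: the hand-rolled decorator replaces fibonacci with partner, so the wrapped function loses its original __name__ and docstring. If that matters, the standard functools.wraps helper preserves them; here is a small variation of the same memoize(), an optional refinement rather than part of the original code:

    import functools

    def memoize(f):
        fast = {}
        @functools.wraps(f)      # keep f's __name__, docstring, etc. on the wrapper
        def partner(l):
            if l not in fast:
                fast[l] = f(l)
            return fast[l]
        return partner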

    Using a Callable Class for Memoization 

    Python also lets you encapsulate the cached results in a callable class. Let's have a look at an example:

    class Memoize:
        def __init__(self, gx):
            self.gx = gx
            self.fast = {}
        def __call__(self, *args):
            if args not in self.fast:
                self.fast[args] = self.gx(*args)
            return self.fast[args]

    @Memoize
    def fibonacci(n):
        if n < 2:
            return 1
        return fibonacci(n - 1) + fibonacci(n - 2)

    print(fibonacci(25))

    Output:  

    121393 
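    Because @Memoize replaces fibonacci with an instance of the class, the cache is an ordinary attribute that you can inspect directly (fast is the attribute used in the class above):

    print(isinstance(fibonacci, Memoize))   # True: fibonacci is now a Memoize instance
    print(fibonacci.fast[(25,)])            # 121393, keyed by the argument tuple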

    Why and When Should You Use Memoization in Your Python Programs? 

    The idea behind memoization –  

    1. Implementation simplicity  
    2. Algorithm maturity  
    3. Very high performance

    Use Memoization in Your Python Programs

    We use memoization to solve a problem when the same sub-problems have to be solved repeatedly. Try the plain recursive fibonacci() with a larger argument and you will find the execution slows to a crawl; you may wait several seconds for the final output. In general, memoization is an operation you can apply to any function that computes something expensive and returns a value. For Fibonacci, memoization turns an exponential-time algorithm into one that runs in linear time, because each sub-problem is computed only once. Without memoization, the natural recursive algorithm runs in exponential time, since already-solved subproblems are solved again and again. A rough timing comparison is sketched below.
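    A rough timing sketch, assuming the memoize() decorator defined earlier is in scope; fib_plain and fib_fast are names introduced here for illustration, and the exact timings depend on your machine:

    import time

    def fib_plain(n):
        if n < 2:
            return 1
        return fib_plain(n - 1) + fib_plain(n - 2)

    @memoize
    def fib_fast(n):
        if n < 2:
            return 1
        return fib_fast(n - 1) + fib_fast(n - 2)

    for label, fn in (("plain", fib_plain), ("memoized", fib_fast)):
        start = time.perf_counter()
        fn(30)
        print(label, time.perf_counter() - start, "seconds")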


    The Memoization Algorithm Explained 

    A memoized recursive algorithm maintains an entry in a table for the solution to each subproblem. Each table entry initially contains a special value to show that the entry has yet to be filled. When a subproblem is first encountered as the recursive algorithm unfolds, its solution is computed and stored in the table. Each subsequent time we encounter that subproblem, we simply look up the value stored in the table and return it (a minimal sketch of this table-based approach appears after the steps below).

    Steps involved in the memoization algorithm:

    1. Identify the sub-problems.
    2. Create a cache data structure for the function results.
    3. Store the result of each sub-problem in the cache.

    Whenever the function is called, first check the cache: if the result is already there, return it. Otherwise, compute the missing result, update the cache, and then return the value to the caller. With sufficient cache storage, this guarantees that the result for a specific set of function arguments will never be computed more than once. Once a result is cached, you stop re-running the memoized function for the same set of inputs; instead, you fetch the cached result and return it.
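    A minimal sketch of the table-based approach described above, assuming None as the unique "not yet filled" value (fib_table is a name introduced here for illustration):

    def fib_table(n, table=None):
        if table is None:
            table = [None] * (n + 1)      # every entry starts as the "not filled" sentinel
        if table[n] is not None:          # subproblem already solved: just look it up
            return table[n]
        if n < 2:
            result = 1
        else:
            result = fib_table(n - 1, table) + fib_table(n - 2, table)
        table[n] = result                 # store the solution in the table
        return result

    print(fib_table(25))                  # 121393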

    Memoization by hand: Misusing a default parameter 

    The cache keyword parameter is evaluated only once, when the function definition is executed (that is, when the module is imported), not on every call. Because the default value is a mutable dictionary, the very same object is shared by all calls to the function.
    Whatever is added while populating the cache therefore persists between calls; it does not get wiped out by the cache = { } in the function definition, because that expression is never re-evaluated.
    Let us understand memoization by hand with the help of an example.

    def fib_default_memoized(l, cache={}):
        if l in cache:
            catch = cache[l]
        elif l <= 2:
            catch = 1
            cache[l] = catch
        else:
            catch = fib_default_memoized(l - 2) + fib_default_memoized(l - 1)
            cache[l] = catch
        return catch
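    You can see the shared default dictionary directly through the function's __defaults__ attribute; it keeps growing across calls because it is the same object every time:

    print(fib_default_memoized.__defaults__)       # ({},) before any call
    fib_default_memoized(10)
    print(fib_default_memoized.__defaults__[0])    # now holds cached entries for 1..10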

    Memoization by hand: objects

    Mutating a default parameter like this strikes many programmers, Java programmers in particular, as hacky; they prefer to wrap a function and its state in an object. If you take that route in Python, keep the cache on the class (or set it up in the constructor) and make the object callable with __call__.

    Let’s understand this with the help of an example:   

    class Fib():
        cache = {}
        def __call__(self, l):
            if l in self.cache:
                catch = self.cache[l]
            elif l <= 2:
                catch = 1
                self.cache[l] = catch
            else:
                catch = self(l - 2) + self(l - 1)
                self.cache[l] = catch
            return catch
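    Since Fib stores its cache as a class attribute, every instance shares it. Typical usage looks like this (note that this version indexes the series from fib(1) = fib(2) = 1):

    fib = Fib()
    print(fib(25))          # 75025 with this indexing
    print(len(Fib.cache))   # the shared class-level cache now holds the intermediate results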
    

    Annoyed by the hacky mutation of a default parameter? Using the global keyword with a module-level cache avoids touching the default parameters altogether. Let's understand this with the help of an example:

    global_cache = {}

    def fib_global_memoized(l):
        global global_cache
        if l in global_cache:
            catch = global_cache[l]
        elif l <= 2:
            catch = 1
            global_cache[l] = catch
        else:
            catch = fib_global_memoized(l - 2) + fib_global_memoized(l - 1)
            global_cache[l] = catch
        return catch

    An aside: decorators

    A decorator takes a function and turns it into another function: the original function is returned wrapped with extra behaviour. A classic example of such extra behaviour, side effects included, is a logging wrapper like the one below.

    def output_decorator(fun):
        def f_():
            fun()
            print('Ran f...')
        return f_

    Writing fun = output_decorator(fun) by hand is the same as:

    @output_decorator
    def fun():
        ...

    The returned function does not have to rely on side effects like printing; it can simply call the input function and augment its result. That is exactly what memoization does: the decorator wraps the input function and adds a cache of its results.

    functools.lru_cache

    For memoizing functions in everyday code, reach for functools.lru_cache: the original function keeps its behaviour, and the only cost is an extra import and a decorator line. For an expensive recursive function such as fibonacci, the speedup can run to several orders of magnitude. The LRU in lru_cache stands for least recently used: it is a caching strategy with a clear policy for expelling elements when the cache is full, discarding the least recently used items first to make room for new entries.

    Python memoization with functools.lru_cache 

    Python's functools module provides higher-order functions, that is, functions that act on or return other functions. Once you know how and when to use lru_cache, you can speed up your application by writing just a few lines of code.

    Let us understand python memoization with functools.lru_cache with the help of an example.

    import functools  
    @functools.lru_cache()
    def fib_lru_cache(n):
        if n < 2:
            return n
        else:
            return fib_lru_cache(n - 2) + fib_lru_cache(n - 1)

    Inspecting the Function Results Cache

    For the hand-written memoize() decorator, the cached results live in the closure of the returned wrapper, so you can peek at them through its __closure__ attribute. For functools.lru_cache, the decorated function exposes a cache_info() method that reports hits, misses, and the current cache size. Either way, results are retrieved quickly from the cache instead of being slowly recomputed from scratch.
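    For the lru_cache version defined above, a quick sketch of such an inspection (cache_info() is part of the standard functools API; the exact numbers depend on what has been called so far):

    print(fib_lru_cache(30))
    print(fib_lru_cache.cache_info())
    # Something like: CacheInfo(hits=28, misses=31, maxsize=128, currsize=31)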

    Caching Caveats – What Can Be Memoized?

    In Python, you can memoize any deterministic function, that is, any function that returns the same value whenever the same inputs occur. Simply put, the output depends only on the particular set of inputs. For non-deterministic functions, memoization is not a good choice, as the sketch below shows.
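    To see why, consider a hypothetical non-deterministic function: once its first result is cached, the memoized version keeps returning that stale value.

    import functools
    import random

    @functools.lru_cache()
    def roll_die(sides):           # non-deterministic: memoization breaks it
        return random.randint(1, sides)

    print(roll_die(6))   # some value between 1 and 6
    print(roll_die(6))   # the same value again: the first result was cached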

    Memoization in Python: Quick Summary

    Memoization works like magic for problems where solutions to subproblems can be reused, especially when the function takes noticeable time to execute. Of course, since the cached data has to be stored somewhere, memoization uses more memory. It is a trade-off between CPU time and RAM; in effect, you exchange time for space.

    About the Author

    Ramulu Enugurthi

    Ramulu Enugurthi, a distinguished computer science expert with an M.Tech from IIT Madras, brings over 15 years of software development excellence. Their versatile career spans gaming, fintech, e-commerce, fashion commerce, mobility, and edtech, showcasing adaptability in multifaceted domains. Proficient in building distributed and microservices architectures, Ramulu is renowned for tackling modern tech challenges innovatively. Beyond technical prowess, he is a mentor, sharing invaluable insights with the next generation of developers. Ramulu's journey of growth, innovation, and unwavering commitment to excellence continues to inspire aspiring technologists.
