September 22, 2025

Name Mangling in Python With Examples

If you are writing a class in Python and want to follow the OOP concept of encapsulation, how do you stop outside access to variables? There are no explicit access modifiers like public, private and protected in Python; all variables are public by default. Python does provide limited support for making class members private, through a process known as name mangling.

Python name mangling mechanism

In the name mangling mechanism, any identifier with at least two leading underscores and at most one trailing underscore is textually replaced with _classname__identifier, where classname is the current class name. For example, a variable __test in a class is replaced with _classname__test.

Since the name is changed internally by the interpreter, you can't access the variable using its original name; that is how you get a form of data hiding in Python.

Name mangling is helpful for letting subclasses override methods without breaking intraclass method calls.
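The "at most one trailing underscore" part of the rule matters: dunder names such as __init__ are left untouched. A quick check of both cases (the class and attribute names here are only for illustration):

```python
class Demo:
    __hidden = 1       # two leading underscores: mangled to _Demo__hidden
    __dunder__ = 2     # two trailing underscores: not mangled

print('_Demo__hidden' in dir(Demo))   # True
print('__dunder__' in dir(Demo))      # True
print(hasattr(Demo, '__hidden'))      # False
```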

Name mangling Python example

class User:
  def __init__(self, name, age):
    self.name = name
    self.__age = age

  def display_user(self):
    print('User Name:', self.name)
    print('User Age:', self.__age)


user = User('Mike Dallas', 34)
# calling class method
user.display_user()
# Accessing variables directly
print(user.name)
print(user.__age)
Output
User Name: Mike Dallas
User Age: 34
Mike Dallas
Traceback (most recent call last):
  File "F:/knpcode/Python/Programs/NameMangling.py", line 16, in <module>
    print(user.__age)
AttributeError: 'User' object has no attribute '__age'

In the class User there is a field __age (declared with double leading underscores). Accessing it through the class method works, but trying to access it directly results in an error because its name has been changed to _User__age by the name mangling mechanism.

You can check the name change by using the dir() function which returns a list of valid attributes for the passed object.

class User:
  def __init__(self, name, age):
    self.name = name
    self.__age = age

  def display_user(self):
    print('User Name:', self.name)
    print('User Age:', self.__age)


user = User('Mike Dallas', 34)
print(dir(user))
Output
['_User__age', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__',
 '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__',
 '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__',
 '__subclasshook__', '__weakref__', 'display_user', 'name']

Here you can see that __age is changed to _User__age.

Name mangling with method names

Since any class member with at least two leading underscores and at most one trailing underscore is textually replaced with _classname__identifier, name mangling is applied to method names too.

class User:
  def __init__(self, name, age):
    self.name = name
    self.__age = age

  def __display_user(self):
    print('User Name:', self.name)
    print('User Age:', self.__age)


user = User('Mike Dallas', 34)
user.__display_user()
Output
Traceback (most recent call last):
  File "F:/knpcode/Python/Programs/NameMangling.py", line 12, in <module>
    user.__display_user()
AttributeError: 'User' object has no attribute '__display_user'
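As with variables, the method still exists under its mangled name, so it can be called that way (repeating the class from the example above):

```python
class User:
  def __init__(self, name, age):
    self.name = name
    self.__age = age

  def __display_user(self):
    print('User Name:', self.name)
    print('User Age:', self.__age)


user = User('Mike Dallas', 34)
# the method is reachable under its mangled name
user._User__display_user()
```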

How to access name mangled variable

In the name mangling process the name is replaced with _classname__membername, so you can still access the member by using its mangled name. That's why Python is described as having only limited support for making class members private.

class User:
  def __init__(self, name, age):
    self.name = name
    self.__age = age

  def display_user(self):
    print('User Name:', self.name)
    print('User Age:', self.__age)


user = User('Mike Dallas', 34)
# calling class method
user.display_user()
# Accessing variables directly
print(user.name)
# Accessing using the mangled name
print(user._User__age)
Output
User Name: Mike Dallas
User Age: 34
Mike Dallas
34

Python name mangling with method overriding

Name mangling is also helpful with method overriding in Python. It lets subclasses override methods without breaking intraclass method calls. Consider the following example where class B extends class A and overrides the parent class test() method. The __init__() of class A also contains a call to the test() method.

class A:
  def __init__(self):
    print('in init')
    self.test()

  def test(self):
    print('In test method of class A')


class B(A):
  def test(self):
    print('In test method of class B')


obj = B()
obj.test()
Output
in init
In test method of class B
In test method of class B

As you can see, the test() method of class B is called both times because the object is an instance of class B. But what was intended was to call the test() method of class A when doing self.test() in __init__(). To avoid breaking intraclass method calls in such a scenario, you can create a private copy of the original method.

class A:
  def __init__(self):
    print('in init')
    self.__test()

  def test(self):
    print('In test method of class A')
      
  # private copy
  __test = test

class B(A):
  def test(self):
    print('In test method of class B')


obj = B()
obj.test()
Output
in init
In test method of class A
In test method of class B

That's all for the topic Name Mangling in Python With Examples. If something is missing or you have something to share about the topic please write a comment.


You may also like

September 18, 2025

Quick Sort Python Program

In this post we'll see how to write a Quick sort program in Python. Quick sort is considered one of the most efficient sorting algorithms and it is classified as a "divide and conquer" algorithm.

How does Quick sort algorithm work

Quick sort works as follows-

  1. Choose one of the array elements as pivot and then partition all the elements around that pivot.
  2. All the elements having value less than the pivot come before the pivot.
  3. All the elements having value higher than the pivot come after the pivot.
  4. Swap the pivot element with the first element of the higher values so that the pivot element is placed in between the lower and higher values.

Once the elements are partitioned around the pivot you get two sub-arrays: one on the left of the pivot with values smaller than the pivot, and one on the right with values greater than the pivot. These steps are then executed recursively for the two sub-arrays.

One of the decisions you have to make while implementing Quick sort is which value to choose as the pivot; the options are-

  • Pick the first element as a pivot
  • Pick the last element as a pivot
  • Pick any random element as a pivot
  • Pick median of the array elements as a pivot

Most commonly, the rightmost (last) element is chosen as the pivot. In the implementation given here the last element is chosen as the pivot.
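If you instead pick a random element as the pivot, a common trick is to swap it into the last position first, so the same last-element partition routine still works. A sketch under that assumption (the function names here are illustrative, not from the program below):

```python
import random

def partition(array, low, high):
    # standard partition around the last element
    pivot = array[high]
    i = low - 1
    for j in range(low, high):
        if array[j] <= pivot:
            i += 1
            array[i], array[j] = array[j], array[i]
    array[i + 1], array[high] = array[high], array[i + 1]
    return i + 1

def randomized_quick_sort(arr, low, high):
    if low >= high:
        return
    # pick a random pivot and move it to the end,
    # then partition exactly as before
    r = random.randint(low, high)
    arr[r], arr[high] = arr[high], arr[r]
    p = partition(arr, low, high)
    randomized_quick_sort(arr, low, p - 1)
    randomized_quick_sort(arr, p + 1, high)

mylist = [64, -34, 25, 5, 22, 11, 0, 2, 105, 6, 22]
randomized_quick_sort(mylist, 0, len(mylist) - 1)
print(mylist)   # [-34, 0, 2, 5, 6, 11, 22, 22, 25, 64, 105]
```

Randomizing the pivot makes the O(n²) worst case very unlikely on any fixed input ordering.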

Let's try to understand how partitioning works with the help of the following array.


Initialize a variable i to keep track of the index where elements smaller than the pivot will be placed. Then start with the leftmost element of the array and compare it with the pivot; if it is less than the pivot, increment i and swap it with the element at index i.

Once this comparison is done, swap the pivot element with the first element of the higher values so that the pivot element is placed between the lower and higher values. After pivot placement the array can be visualized as given below.

(Figure: array after the pivot is placed between the lower and higher values)

Now you can divide the array into two sub-arrays around the pivot, then choose the last element of each sub-array as its pivot and partition again using recursive calls.

(Figure: the two sub-arrays around the pivot)

Quick Sort Python program

def partition(array, low, high):
  pivot = array[high]
  i = low - 1
  
  for j in range(low, high):
    #get all the elements lower than pivot to the left side
    if array[j] <= pivot:
      i += 1
      array[i], array[j] = array[j], array[i]
  
  #bring pivot to its intended position so that all the elements on the
  # left of pivot are less than it and on the right are greater than it
  (array[i + 1], array[high]) = (array[high], array[i + 1])
  return i+1
    
    
def quick_sort(arr, low, high):
  # base case for recursion
  if high - low <= 0:
    return
   
  pivot_index = partition(arr, low, high)
  # once the pivot is in its right place
  # divide the array in 2 parts and recursively call
  # quick_sort on both parts to sort them
  quick_sort(arr, low, pivot_index-1)
  quick_sort(arr, pivot_index+1, high)

mylist = [64, -34, 25, 5, 22, 11, 0, 2, 105, 6, 22]
quick_sort(mylist, 0, len(mylist) - 1)

print('Sorted Array:', mylist)
Output
Sorted Array: [-34, 0, 2, 5, 6, 11, 22, 22, 25, 64, 105]

Quick sort using list comprehension

Another way to write a Quick sort program in Python is by using list comprehension. By selecting a pivot and using list comprehensions with conditions that check whether elements are less than or greater than the pivot, two separate lists can be created around the pivot.

def quick_sort(arr): 
  if len(arr) <= 1:
    return arr
  else:
    #last element as pivot
    pivot = arr[-1]
 
    left = [a for a in arr[:-1] if a <= pivot]
    right = [a for a in arr[:-1] if a > pivot]

    return quick_sort(left) + [pivot] + quick_sort(right)

my_list = [64, -34, 25, 5, 22, 11, 0, 2, 105, 6, 22]
sort_list = quick_sort(my_list)
print('Sorted Array:', sort_list)
Output
Sorted Array: [-34, 0, 2, 5, 6, 11, 22, 22, 25, 64, 105]

Quick sort time and space complexity

Average and best case time complexity of Quick sort is O(n log n). In the worst case, when the pivot value doesn't partition the elements properly, time complexity can be O(n²).

When implemented recursively, extra space for the recursive call stack is required, so the worst case space complexity of Quick sort is O(n), while best and average case space complexity is O(log n).

That's all for the topic Quick Sort Python Program. If something is missing or you have something to share about the topic please write a comment.


You may also like

Bubble Sort Python Program

In this post we'll see how to write a Bubble sort program in Python. Bubble sort works by comparing and swapping adjacent elements.

Bubble sort algorithm

Bubble sort works as follows for sorting an array in ascending order-

You start from the left end and compare the first two elements (i.e. the elements at index 0 and index 1). If the element on the left is greater than the element on the right (element at 0 > element at 1), you swap the two.

Move to the next index and compare the next two adjacent elements (i.e. the elements at index 1 and index 2). Again, if the element on the left is greater than the element on the right, you swap them.

This process is repeated until you reach the rightmost element.

This is the first pass of bubble sort, and by the end of the first pass you have the greatest element at the rightmost end. In this sorting technique the greatest elements bubble up to the higher indices of the array, hence the name bubble sort.

This process is repeated n-1 times for an array of length n. Since there are a large number of comparisons and swaps, Bubble sort is considered one of the slowest sorting algorithms.

For example, if you have an array [6, 3, 10, 7, 1] then the first pass can be depicted as follows.

(Figure: first pass of bubble sort on [6, 3, 10, 7, 1])

In the first iteration the first two elements (6 and 3) are compared; since 6 is greater than 3, the elements are swapped. The same way, in the next iteration 6 and 10 are compared, and so on. By the end of the first pass the greatest element in the array (10) bubbles up to the top of the array.

Because the last element is already at its intended position after the first pass, in the next pass only (n-1) elements are considered, so the part of the array used is [3, 6, 7, 1].

Bubble Sort Python program

def bubble_sort(arr):
  arr_length = len(arr)
  for i in range(arr_length-1):
    # reduce length by 1 in each pass
    for j in range(arr_length-i-1):
      if(arr[j] > arr[j+1]):
        # swap using tuple assignment
        arr[j+1], arr[j] = arr[j], arr[j+1]
                

arr = [17, 85, 0, 34, 7, -10, 82, 107, 2, 35]
bubble_sort(arr)
print('Sorted array ', arr)
Output
Sorted array  [-10, 0, 2, 7, 17, 34, 35, 82, 85, 107]

Bubble sort time and space and complexity

Time complexity of bubble sort is O(n²) for the worst and average case. Best case time complexity is O(n), which is achieved when the array is already sorted and you keep track of whether any swap happened in a pass.

def bubble_sort(arr):
  arr_length = len(arr)
  for i in range(arr_length-1):
    #tracks if swapping happens or not
    swapped = False  
    # reduce length by 1 in each pass
    for j in range(arr_length-i-1):
      if(arr[j] > arr[j+1]):
        # swap using tuple assignment
        arr[j+1], arr[j] = arr[j], arr[j+1]
        swapped = True
    if not swapped:
      break

arr = [-10, 0, 2, 7, 17, 34, 35, 82, 85, 107]
bubble_sort(arr)
print('Sorted array ', arr)

No extra space is required, so the space complexity of bubble sort is O(1).

That's all for the topic Bubble Sort Python Program. If something is missing or you have something to share about the topic please write a comment.


You may also like

Dijkstra's Algorithm Java Program

Dijkstra's algorithm is a process to find the shortest paths from a single source node to all other nodes in a weighted graph. In this post we'll see the Java implementation of Dijkstra's algorithm.

Some points about Dijkstra's algorithm

  1. Dijkstra's algorithm requires that all edge weights be non-negative.
  2. It can be used to find shortest paths in both directed and undirected graphs.
  3. A greedy algorithm is one that makes the locally optimal choice at each stage. Dijkstra's algorithm is classified as a greedy algorithm because, at each step, it makes the locally optimal choice by picking the unvisited node with the smallest distance from the source.

How does Dijkstra's algorithm work

To start with, you need a data structure to keep track of visited nodes. You can use a boolean array for that, marking the index of a visited node as true. Another data structure is needed to store the distance of each node from the source node; for that you can use an int array.

You also need a min heap to store the immediate neighbours of a node and the corresponding weights. For that a PriorityQueue can be used.

Initially, the distance array should assign 0 to the source node and infinity to all the other nodes. As a substitute for infinity you can use Integer.MAX_VALUE.

int[] dist = new int[graph.vertices];
Arrays.fill(dist, Integer.MAX_VALUE);
dist[src] = 0;

In PriorityQueue add the source node with distance as 0.

Let's take the following undirected graph as an example with source node as 0.

(Figure: the example undirected weighted graph with source node 0)

In each iteration (the loop condition being that the PriorityQueue is not empty)-

  1. Get the root of the min heap. Which means the head of the priority queue. In the first iteration it'll be 0. Let's call it c.
  2. Find the immediate neighbours of c and check the following for each neighbour node v-
    If the path through c to v is shorter than the currently stored distance to v
    if (dist[c] + weight between c and v < dist[v])
    then store this new shorter distance as dist[v]
    dist[v] = dist[c] + w;
    
  3. Also add these neighbour nodes with their calculated distance from source node in PriorityQueue.
    pq.add(new Edge(v, dist[v]));
    

If we take our example graph, the immediate neighbours of 0 are nodes 1 and 4. So these two will be added to the priority queue and their distances from the source node will be updated in the dist array.

dist array at this point- [0, 10, 2147483647, 2147483647, 5]

pq at this point [(1,10), (4,5)]

In the next iteration, node 4 will be extracted from the pq (because it has the shorter distance). Immediate neighbours of 4 are 0 and 3. Note that 0 comes up as an immediate neighbour because the graph is undirected. Since 0 is already visited, no calculation is done for it.

For node 3 distance is checked using the above mentioned logic- if (dist[c] + weight between c and v < dist[v])

dist[4] + 50 < dist[3] which translates to- 5+50 < 2147483647

So, now the dist array is- [0, 10, 2147483647, 55, 5]

pq at this point [(1,10), (3,55)]

In the next iteration node 1 will be extracted from the pq, with immediate neighbours 0, 2 and 3.

For node 2 the distance is calculated as dist[1] + 30 < 2147483647, which translates to 10+30 = 40 becoming the new distance.

And the node 3 distance is calculated as dist[1] + 40 < 55, which translates to 10+40 = 50 becoming the new distance.

So, now the dist array is- [0, 10, 40, 50, 5]

And that's how the iterations continue until the priority queue is empty.
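The loop described above is language-agnostic; before the full Java program, here is a minimal Python sketch of the same idea using heapq as the min heap, run on the walkthrough's graph:

```python
import heapq

def dijkstra(adj, src):
    # adj: {node: [(neighbour, weight), ...]}
    INF = float('inf')
    dist = {v: INF for v in adj}
    dist[src] = 0
    visited = set()
    pq = [(0, src)]                 # (distance, node) min-heap
    while pq:
        d, c = heapq.heappop(pq)    # unvisited node with smallest distance
        if c in visited:
            continue
        visited.add(c)
        for v, w in adj[c]:
            if v not in visited and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# the example graph from the walkthrough (undirected)
edges = [(0, 1, 10), (0, 4, 5), (1, 2, 30), (1, 3, 40), (3, 4, 50)]
adj = {v: [] for v in range(5)}
for a, b, w in edges:
    adj[a].append((b, w))
    adj[b].append((a, w))

print(dijkstra(adj, 0))   # {0: 0, 1: 10, 2: 40, 3: 50, 4: 5}
```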

Dijkstra's algorithm - Java Program

The class to create the graph is written as a separate class. For creating the adjacency list, a Map storing a List is used.

Edge is defined as a static inner class which also implements Comparable to keep edges ordered by weight. That helps with creating a min heap.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Graph {
  
  static class Edge implements Comparable<Edge>{
    int destination;
    int weight;
    Edge(int destination, int weight){
      this.destination = destination;
      this.weight = weight;
    }
    @Override
    public int compareTo(Edge o) {
      return Integer.compare(this.weight, o.weight);
    }
  }

  int vertices;
  Map<Integer, List<Edge>> adjList;

  Graph(int vertices){
    this.vertices = vertices;
    adjList = new HashMap<>();
  }

  void addEdge(int source, int destination, int weight) {
    adjList.computeIfAbsent(source, k->new ArrayList<>()).add(new Edge(destination, weight));
    // For undirected graph
    adjList.computeIfAbsent(destination, k->new ArrayList<>()).add(new Edge(source, weight));
  }

  void printGraph() {
    for(Map.Entry<Integer, List<Graph.Edge>> es :adjList.entrySet()) {
      System.out.print("Vertex " + es.getKey() + " connects to: ");
      List<Edge> list = es.getValue();
      for (Edge e : list) {
        System.out.print("[" + e.destination + " with weight " + e.weight + "] ");
      }
      System.out.println();
    }
  }
 }

Dijkstra's algorithm Java implementation

import java.util.Arrays;
import java.util.Collections;
import java.util.PriorityQueue;

public class Dijkstra {
  public void dijkstra(int src, Graph graph) {
    boolean[] visited = new boolean[graph.vertices];
    int[] dist = new int[graph.vertices];
    Arrays.fill(dist, Integer.MAX_VALUE);
    dist[src] = 0;
    PriorityQueue<Graph.Edge> pq = new PriorityQueue<Graph.Edge>();
    pq.add(new Graph.Edge(src, 0));
    
    while(!pq.isEmpty()) {
      Graph.Edge current = pq.poll();
      int n = current.destination;
      if(visited[n])
        continue;
      visited[n] = true;
      System.out.println(Arrays.toString(visited));
      for(Graph.Edge next : graph.adjList.getOrDefault(n, Collections.emptyList())) {
        int v = next.destination;
        int w = next.weight; 
        System.out.println("Next " + v);
        System.out.println("Distance " + w);
        if(!visited[v] && dist[n] + w < dist[v]) {
          dist[v] = dist[n] + w;
          pq.add(new Graph.Edge(v, dist[v]));
        }
      }
      System.out.println(Arrays.toString(dist));      
    }
    printDistances(dist, src);
  }
  
  public void printDistances(int[] dist, int src) {
    System.out.println("Shortest distances from vertex " + src + ":");
    for(int i = 0; i < dist.length; i++) {      
      System.out.println("To vertex " + i + " -> " +dist[i]);
    }
  }
  
  public static void main(String[] args) {
    Graph graph = new Graph(5);
    graph.addEdge(0, 1, 10);
    graph.addEdge(0, 4, 5);
    graph.addEdge(1, 2, 30);
    graph.addEdge(1, 3, 40);
    graph.addEdge(3, 4, 50);

    graph.printGraph();

    Dijkstra d = new Dijkstra();
    d.dijkstra(0, graph);
  }
}
Output
Vertex 0 connects to: [1 with weight 10] [4 with weight 5] 
Vertex 1 connects to: [0 with weight 10] [2 with weight 30] [3 with weight 40] 
Vertex 2 connects to: [1 with weight 30] 
Vertex 3 connects to: [1 with weight 40] [4 with weight 50] 
Vertex 4 connects to: [0 with weight 5] [3 with weight 50] 
[true, false, false, false, false]
Next 1
Distance 10
Next 4
Distance 5
[0, 10, 2147483647, 2147483647, 5]
[true, false, false, false, true]
Next 0
Distance 5
Next 3
Distance 50
[0, 10, 2147483647, 55, 5]
[true, true, false, false, true]
Next 0
Distance 10
Next 2
Distance 30
Next 3
Distance 40
[0, 10, 40, 50, 5]
[true, true, true, false, true]
Next 1
Distance 30
[0, 10, 40, 50, 5]
[true, true, true, true, true]
Next 1
Distance 40
Next 4
Distance 50
[0, 10, 40, 50, 5]
Shortest distances from vertex 0:
To vertex 0 -> 0
To vertex 1 -> 10
To vertex 2 -> 40
To vertex 3 -> 50
To vertex 4 -> 5

Time and Space complexity of Dijkstra's algorithm

When implemented using PriorityQueue in Java, time complexity of Dijkstra's algorithm is

O((V + E) log V)

If there are V vertices in the graph then each insertion into the PriorityQueue takes log V time, so the time for V insertions is V log V.

Then we also need to update distances by going through the neighbours of each vertex. For that we again make an entry in the PriorityQueue with the updated distance. Across all vertices, up to E edges may cause re-insertions into the queue, which means E log V time.

Initially we also fill the distance array with infinity, which takes O(V) time for V vertices.

So the total time is O(V) + O(V log V) + O(E log V), resulting in O((V + E) log V) time complexity.

Auxiliary space requirement is-

One distance array- O(V)

One visited array- O(V)

PriorityQueue- O(V)

Adjacency list where Edges reachable from each vertex are stored- O(V + E)

So resultant space complexity after removing constant factors is-

O(V + E)

That's all for the topic Dijkstra's Algorithm Java Program. If something is missing or you have something to share about the topic please write a comment.


You may also like

September 17, 2025

Inheritance in Python With Examples

In this tutorial you'll learn about the OOP concept of inheritance and how to use inheritance in Python.

Inheritance concept

Inheritance allows us to create a class that acquires all the properties and methods of another class.

The class whose members are inherited is called the super class, also known as the parent class or base class.

The class that inherits from another class is called the sub class, also known as the child class or derived class.

Python inheritance syntax

If there is a class called ParentClass defined as-

class ParentClass:
  body of parent class

Then a ChildClass that inherits from this ParentClass can be defined as-

class ChildClass(ParentClass):
  body of child class

Inheritance Python example

In the example there is a class called Person that acts as a base class and another class Employee that inherits from Person class.

class Person:
  def __init__(self, name, age):
    self.name = name
    self.age = age

  def display_person(self):
    print('In display_person method')
    print('Name:', self.name)
    print('Age:', self.age)

class Employee(Person):
  pass

e = Employee("Michael Weyman", 42)
e.display_person()
Output
In display_person method
Name: Michael Weyman
Age: 42

As you can see, in the Employee class just the pass keyword is used, to specify that it doesn't add any property or method of its own. It just inherits all the properties and methods of the class it inherits from.

You can create an object of the Employee class and initialize the 'name' and 'age' properties because the Employee class inherits these properties from the Person class. The same way, you can also call the display_person() method using an object of the Employee class.

Constructor overriding and use of super with inheritance

When a class inherits another class in Python, by default the constructor of the super class is also available to the child class. If you have extra fields in the child class which need to be initialized, you can override the constructor in the child class and initialize the fields there.

In most scenarios you'll inherit from a base class and also add properties and methods of the child class's own. To initialize the properties of the child class you can add an __init__() function in the child class too. In our Employee class let's add two fields, person_id and department, and add a method display_employee() too.

class Employee(Person):
  def __init__(self, person_id, department, name, age):
    self.name = name
    self.age = age
    self.person_id = person_id
    self.department = department

  def display_employee(self):
    print('In display_employee method')
    print('Id:', self.person_id)
    print('Name:', self.name)
    print('Age:', self.age)
    print('Department:', self.department)

In the above class you can notice the redundancy of initializing the parent class' fields in the constructor even though the constructor in the parent class is already doing that. The same way, in the display_employee() method we have print statements to print name and age even though a method in the Person class already does that.

If you want to call the super class constructor and methods from the sub class, that can be done using the super() function, which helps avoid the code redundancy present in the above Employee class. Here is the modified Employee class with usage of the super() function.

class Employee(Person):
  def __init__(self, person_id, department, name, age):
    # call constructor of super class
    super().__init__(name, age)
    self.person_id = person_id
    self.department = department

  def display_employee(self):
    # call method of super class
    super().display_person()
    print('In display_employee method')
    print('Id:', self.person_id)
    print('Department:', self.department)

e = Employee(1, "IT", "Michael Weyman", 42)
e.display_employee()
Output
In display_person method
Name: Michael Weyman
Age: 42
In display_employee method
Id: 1
Department: IT

Advantages of inheritance

  1. Inheritance helps in writing reusable code where you can use the existing functionality just by inheriting from an existing class.
  2. Inheritance helps in writing hierarchical code where you write more generalized code in the super class and then move on to inherit it and add more specific methods. For example you can have a Vehicle super class with more generic functionality like accelerate(), brake(), gear(). Then inherit it to create more specialized classes like Car, Bus, MotorCycle and further down to inherit from Car to create more specific classes like SUV, SportsCar.
  3. Inheritance also makes managing the code easy because you don't put all the functionality in the same class; rather you create several classes to build a hierarchical structure with code distributed among those classes.
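The Vehicle hierarchy from point 2 can be sketched as below; the subclass method names are made up for illustration:

```python
class Vehicle:
    # generalized behaviour lives in the super class
    def accelerate(self):
        print('Accelerating')

    def brake(self):
        print('Braking')

class Car(Vehicle):
    # more specific behaviour added at this level (illustrative)
    def open_boot(self):
        print('Boot opened')

class SUV(Car):
    # even more specialized (illustrative)
    def engage_four_wheel_drive(self):
        print('4WD engaged')

suv = SUV()
suv.accelerate()                 # inherited from Vehicle
suv.open_boot()                  # inherited from Car
suv.engage_four_wheel_drive()    # defined in SUV
```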

That's all for the topic Inheritance in Python With Examples. If something is missing or you have something to share about the topic please write a comment.


You may also like

September 10, 2025

Weighted Graph Java Program

In this post we'll see how to write a Java program to create a weighted graph.

What is a weighted graph

A weighted graph is a graph structure where numerical values called weights are assigned to edges. These weights may represent distance, cost, time etc.

A weighted graph may be directed (each edge traversed one way only, from source to destination) or undirected.

(Figure: a weighted graph)

Weighted graph Java program

A weighted graph in Java can be implemented using either an adjacency matrix or an adjacency list.

If it is a sparse graph, meaning it has relatively few edges compared to the number of vertices, then an adjacency list is preferred because it takes less space. It uses less memory because an adjacency list only stores the actual connections, unlike an adjacency matrix that allocates space for all possible edges.

If we take the below graph as example-

(Figure: the example weighted graph)

Then adjacency list can be visualized as-

(Figure: adjacency list representation of the graph)

And adjacency matrix can be visualized as-

(Figure: adjacency matrix representation of the graph)

Note that in both visualizations the graph is represented as an undirected graph.

Weighted graph Java implementation using adjacency list

import java.util.ArrayList;
import java.util.List;

public class Graph {
  int vertices;
  List<List<Edge>> adjList;
  static class Edge{
    int destination;
    int weight;
    Edge(int destination, int weight){
      this.destination = destination;
      this.weight = weight;
    }
  }

  Graph(int vertices){
    this.vertices = vertices;
    adjList = new ArrayList<>();
    for(int i = 0; i < vertices; i++) {
      adjList.add(new ArrayList<>());
    }
  }

  public void addEdge(int source, int destination, int weight) {
    adjList.get(source).add(new Edge(destination, weight));
    // For undirected graph need both sides
    adjList.get(destination).add(new Edge(source, weight));
  }

  public void printGraph() {
    for(int i = 0; i < vertices; i++) {
      System.out.print("Vertex " + i + " connects to: ");
      for (Edge e : adjList.get(i)) {
        System.out.print("[" + e.destination + " with weight " + e.weight + "] ");
      }
      System.out.println();

    }
  }

  public static void main(String[] args) {
    Graph graph = new Graph(5);
    graph.addEdge(0, 1, 10);
    graph.addEdge(0, 4, 5);
    graph.addEdge(1, 2, 30);
    graph.addEdge(1, 3, 40);
    graph.addEdge(3, 4, 50);

    graph.printGraph();
  }
}
Output
Vertex 0 connects to: [1 with weight 10] [4 with weight 5] 
Vertex 1 connects to: [0 with weight 10] [2 with weight 30] [3 with weight 40] 
Vertex 2 connects to: [1 with weight 30] 
Vertex 3 connects to: [1 with weight 40] [4 with weight 50] 
Vertex 4 connects to: [0 with weight 5] [3 with weight 50] 

As you can see, here the adjacency list is declared as an ArrayList of ArrayLists, which should be clear from the visual representation of the adjacency list shown above. There is a static inner class Edge with destination and weight fields.

Weighted graph Java implementation using adjacency matrix

public class GraphMatrix {
  private int vertices;
  private int[][] adjMatrix;
  GraphMatrix(int vertices){
    this.vertices = vertices;
    adjMatrix = new int[vertices][vertices];
  }

  public void addEdge(int source, int destination, int weight) {
    if((source < 0 || source >= vertices) || (destination < 0 || destination >= vertices)) {
      System.out.println("Invalid edge addition");
      return;
    }
    adjMatrix[source][destination] = weight;
    // For undirected graph reverse setting also required
    adjMatrix[destination][source] = weight;
  }

  public void printGraph() {
    for(int i = 0; i < adjMatrix.length; i++) {
      for(int j = 0; j < adjMatrix[0].length; j++) {
        System.out.print(adjMatrix[i][j] + "\t");
      }
      System.out.println();
    }
  }
  public static void main(String[] args) {
    GraphMatrix graph = new GraphMatrix(5);
    graph.addEdge(0, 1, 10);
    graph.addEdge(0, 4, 5);
    graph.addEdge(1, 2, 30);
    graph.addEdge(1, 3, 40);
    graph.addEdge(3, 4, 50);

    graph.printGraph();

  }
}
Output
0	10	0	0	5	
10	0	30	40	0	
0	30	0	0	0	
0	40	0	0	50	
5	0	0	50	0

That's all for the topic Weighted Graph Java Program. If something is missing or you have something to share about the topic please write a comment.


You may also like

September 8, 2025

AVL Tree Java Program - Insertion, Traversal, Search

In this post we'll see how to write an AVL tree Java program. Operations covered in this post are-

  • Inserting a node in AVL tree
  • Searching a node in AVL tree
  • AVL tree traversal

AVL Binary Tree

An AVL tree (named after inventors Adelson-Velsky and Landis) is a self-balancing binary search tree. In an AVL tree, the heights of the left subtree and right subtree of any node differ by at most one. If any time that height difference goes beyond one, rebalancing is done so that height difference is not more than one.

Because of this self-balancing property, an AVL tree will not get skewed the way a plain Binary Search Tree can. Since the height of an AVL tree is kept to a minimum, searching, inserting and deleting nodes all have a time complexity of O(log n). It never degrades to O(n), which is the worst-case time complexity of a BST.

For example, suppose you are inserting {10, 20, 30, 40} in a BST then the BST is skewed towards right.

Skewed BST

Which means searching for 40 becomes an O(n) operation.
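
To see the skew concretely, here is a minimal sketch of a plain BST insert with no balancing (the Node class and method names are illustrative, mirroring the code used later in this post). Inserting 10, 20, 30, 40 produces a right-leaning chain whose height equals the number of nodes:

```java
public class SkewDemo {
  static class Node {
    int data;
    Node left, right;
    Node(int data) { this.data = data; }
  }

  // Plain BST insert, no rebalancing
  static Node insert(Node node, int data) {
    if (node == null) return new Node(data);
    if (data < node.data) node.left = insert(node.left, data);
    else if (data > node.data) node.right = insert(node.right, data);
    return node;
  }

  // Height counted in nodes, matching the convention used later in this post
  static int height(Node node) {
    return node == null ? 0 : 1 + Math.max(height(node.left), height(node.right));
  }

  public static void main(String[] args) {
    Node root = null;
    for (int d : new int[]{10, 20, 30, 40}) {
      root = insert(root, d);
    }
    // Every insert went right, so the tree is a chain of 4 nodes
    System.out.println("Height: " + height(root)); // prints Height: 4
  }
}
```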

Same nodes with AVL tree.

Balanced AVL Tree

As you can see, because of self-balancing the AVL tree keeps the height to a minimum; in the AVL tree above the difference between the left subtree height and the right subtree height at the root node is 1.

How does self-balancing work

In an AVL tree, the heights of the left subtree and right subtree of any node differ by at most one. Whenever insertion and deletion operations are performed on the tree, balance factor is calculated for the nodes to check if height difference is going beyond the permissible limit for any node. If yes, then rebalancing is done.

The height of a subtree is the length of the longest path from the root node of that subtree down to its farthest leaf. Note that in this program height is counted in nodes rather than edges, so a leaf node has height 1 and the height of any node is 1 + max(height of left child, height of right child).

Equation for Balance factor for any node n can be given as-

Balance Factor(n) = height of left subtree(n) - height of right subtree(n)

For a tree to be considered balanced, for every node balance factor should be in the range

-1 <= balance factor <= 1

Which can be explained as-

  • BF = 0 means binary tree is perfectly balanced at that node
  • BF = +1 means left-heavy at that node but permissible
  • BF = -1 means right-heavy at that node but permissible
  • BF < -1 or BF > +1 means binary tree is imbalanced and rotation is required to balance it.
AVL Tree with Balance Factor

Since heights are needed again and again while constructing the AVL tree, the height of each node is stored in the node itself to save recomputation.

class Node{
  int data;
  int height;
  Node left;
  Node right;
  Node(int data){
    this.data = data;
    this.height = 1;
  }
  public void displayData(){
    System.out.print(data + " ");
  }
}

Rotation cases

Following are the scenarios in which the AVL tree becomes imbalanced, and each of these scenarios requires a different rotation operation.

  1. Left-Left scenario- When the balance factor of node n is more than 1 and its left child is also left heavy, that is the LL scenario. In that case a right rotation is needed to make it balanced. For example, consider the following tree. Here node 30 has a BF of 2 (left height is 2 and right height is 0).
    AVL Tree LL Scenario
    So, it has to be right rotated around that node (node with data as 30) so that its left child becomes the parent node and any right child of that left child becomes the left child of that node.
    AVL Tree LL Balanced
    If the node around which right rotation is done is called node and we have to write that same thing in programming logic, it'll be as given below.
    Node temp1 = node.left;
    Node temp2 = temp1.right;
    // from where rotation started (origin node) should become right child
    temp1.right = node;
    // Right child of the left child of origin node should become left child
    // of origin node
    node.left = temp2;
    
  2. Right-Right scenario- When the balance factor of node n is less than -1 and its right child is also right heavy, that is the RR scenario. In that case a left rotation is needed to make it balanced. For example, consider the following tree. Here node 10 has a BF of -2 (left height is 0 and right height is 2).
    AVL Tree RR Scenario
    So, it has to be left rotated around that node (node with data as 10) so that its right child becomes the parent node and any left child of that right child becomes the right child of that original root node.
    AVL Tree RR Balanced
    If the node around which left rotation is done is called node and we have to write that same thing in programming logic, it'll be as given below.
    Node temp1 = node.right;
    Node temp2 = temp1.left;
    // from where rotation started (origin node) should become left child
    temp1.left = node;
    // left child of the right child of origin node should become right child
    // of origin node
    node.right = temp2;
    
  3. Left-Right Scenario- This scenario occurs when the parent node is imbalanced with a balance factor of more than +1 (left side imbalance) but its left child is right heavy. This results in a zig-zag kind of structure which can't be balanced by a right rotation alone. For example, consider the following tree.
    AVL Tree LR Scenario
    A right rotation around 30 alone would produce a wrong structure: 10 as the root node with 20 as its left child violates the BST property (20 is greater than 10 so it should be on the right side).
    What you need is left rotation around 10 making 20 the parent of 10.
    That makes it a LL scenario needing a right rotation around 30 to finally balance it.
    AVL Tree LR Balanced
  4. Right-Left Scenario- This scenario occurs when the parent node is imbalanced with a balance factor of less than -1 (right side imbalance) but its right child is left heavy. This results in a zig-zag kind of structure which can't be balanced by a left rotation alone. For example, consider the following tree.
    AVL Tree RL Scenario
    Single left rotation around 10 won't make it right. What you need is right rotation around 30 making 20 the parent of 30.
    That makes it a RR scenario needing a left rotation around 10 to finally balance it.
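
As a quick check of the LL case above, here is a self-contained sketch (with a hypothetical Node class mirroring the one used later) that right-rotates the chain 30 → 20 → 10 and confirms 20 becomes the new subtree root:

```java
public class RotationDemo {
  static class Node {
    int data;
    Node left, right;
    Node(int data) { this.data = data; }
  }

  // Right rotation: exactly the pointer moves described in the LL scenario
  static Node rightRotate(Node node) {
    Node temp1 = node.left;
    Node temp2 = temp1.right;
    temp1.right = node; // origin node becomes right child
    node.left = temp2;  // left child's right subtree moves over
    return temp1;       // new subtree root
  }

  public static void main(String[] args) {
    // Build the LL chain: 30 with left child 20, which has left child 10
    Node root = new Node(30);
    root.left = new Node(20);
    root.left.left = new Node(10);

    root = rightRotate(root);
    // After rotation 20 is the root with 10 and 30 as its children
    System.out.println(root.data + " " + root.left.data + " " + root.right.data); // prints 20 10 30
  }
}
```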

AVL Tree Traversal

When you traverse an AVL tree you visit each node in a specified order. Orderings which are used for traversal are-

  • Inorder traversal
  • Preorder traversal
  • Postorder traversal

Note that traversals of AVL trees are same as binary search trees.

AVL tree Inorder traversal in Java

Logic for Inorder traversal of AVL tree is as follows-

Recursively traverse the left subtree
Visit the root node
Recursively traverse the right subtree
// For traversing in order
public void inOrder(Node node){
  if(node != null){
    inOrder(node.left);
    node.displayData();
    inOrder(node.right);
  }
}

AVL tree Preorder traversal in Java

Logic for preorder traversal of AVL tree is as follows-

Visit the root node
Recursively traverse the left subtree
Recursively traverse the right subtree
// Preorder traversal
public void preOrder(Node node){
  if(node != null){
    node.displayData();
    preOrder(node.left);           
    preOrder(node.right);
  }
}

AVL tree Postorder traversal in Java

Logic for postorder traversal of AVL tree is as follows-

Recursively traverse the left subtree
Recursively traverse the right subtree
Visit the root node
// Postorder traversal
public void postOrder(Node node){
  if(node != null){
    postOrder(node.left);
    postOrder(node.right);
    node.displayData();          
  }
}

Complete AVL tree Java program

Putting together the Node class, insertion with rotations, traversal and search, the complete program is given below.
public class AVLTree {
  // first node
  Node root;
  AVLTree(){
    root = null;
  }
  // Class for defining each node in the tree
  static class Node{
    int data;
    int height;
    Node left;
    Node right;
    Node(int data){
      this.data = data;
      // New node has height 1 because it has no children
      // height of any node is calculated as
      //height = 1 + max(height of left child, height of right child)
      this.height = 1;
    }
    public void displayData(){
      System.out.print(data + " ");
    }
  }
  
  public void insert(int data){
    root = insert(root, data);
  }
  
  public Node insert(Node node, int data) {
    if(node == null) {
      return new Node(data);
    }
    if(data < node.data) {
      node.left = insert(node.left, data);
    }else if(data > node.data) {
      node.right = insert(node.right, data);
    }else {
      return node;
    }
    
    //update height of each node traversed
    node.height = 1 + Math.max(getHeight(node.left), getHeight(node.right));
    int balance = getBalance(node);
    
    // left left
    if(balance > 1 && data < node.left.data)
      return rightRotate(node);
      
    //right right
    if(balance < -1 && data > node.right.data)
      return leftRotate(node);
    //left right
    if(balance > 1 && data > node.left.data) {
      node.left = leftRotate(node.left);
      return rightRotate(node);
    }
    //right left
    if(balance < -1 && data < node.right.data) {
      node.right = rightRotate(node.right);
      return leftRotate(node);
    }
    return node;
  }
  
  // method to get height of node
  int getHeight(Node node) {
    return node == null ? 0 : node.height;
  }
  
  // method to get balance factor of node
  int getBalance(Node node) {
    return node == null ? 0 : (getHeight(node.left) - getHeight(node.right));
  }
  
  Node rightRotate(Node node) {
    System.out.println("Rotate right on node " + node.data);
    Node temp1 = node.left;
    Node temp2 = temp1.right;
    // from where rotation started (origin node) should become right child
    temp1.right = node;
    // Right child of the left child of origin node should become left child
    // of origin node
    node.left = temp2;

    // update heights
    node.height = 1 + Math.max(getHeight(node.left), getHeight(node.right));
    temp1.height = 1 + Math.max(getHeight(temp1.left), getHeight(temp1.right));
    return temp1;
     
  }
  
  Node leftRotate(Node node) {
    System.out.println("Rotate left on node " + node.data);
    Node temp1 = node.right;
    Node temp2 = temp1.left;
    // from where rotation started (origin node) should become left child
    temp1.left = node;
    // left child of the right child of origin node should become right child
    // of origin node
    node.right = temp2;

    // update heights
    node.height = 1 + Math.max(getHeight(node.left), getHeight(node.right));
    temp1.height = 1 + Math.max(getHeight(temp1.left), getHeight(temp1.right));
    return temp1;
     
  }
  
  // Search a node in the AVL tree
  public Node find(int searchedValue){
    Node current = root;
    // Guard against an empty tree and stop when the value is found
    while(current != null && current.data != searchedValue){
      if(searchedValue < current.data)
        // Move to the left if searched value is less
        current = current.left;
      else
        // Move to the right if searched value is greater
        current = current.right;
    }
    return current;
  }
  
  // For traversing in order
  public void inOrder(Node node){
    if(node != null){
      inOrder(node.left);
      node.displayData();
      inOrder(node.right);
    }
  }
  
  // Preorder traversal
  public void preOrder(Node node){
    if(node != null){
      node.displayData();
      preOrder(node.left);           
      preOrder(node.right);
    }
  }
    
  // Postorder traversal
  public void postOrder(Node node){
    if(node != null){
      postOrder(node.left);
      postOrder(node.right);
      node.displayData();          
    }
  }
    
  public static void main(String[] args) {
    AVLTree avlTree = new AVLTree();
    int[] data = {10, 15, 20, 30, 40, 25};

    for(int d : data) {
      avlTree.insert(d);
    }
    
    System.out.println("Inorder traversal of binary tree");
    avlTree.inOrder(avlTree.root);
    System.out.println();
    
    System.out.println("Post order traversal of binary tree");
    avlTree.postOrder(avlTree.root);
    System.out.println();
    
    System.out.println("Pre order traversal of binary tree");
    avlTree.preOrder(avlTree.root);
    System.out.println();
    
    Node node = avlTree.find(21);
    System.out.println((node == null)? "Node not found" : String.valueOf(node.data));      
  }
}
Output
Rotate left on node 10
Rotate left on node 20
Rotate right on node 30
Rotate left on node 15
Inorder traversal of binary tree
10 15 20 25 30 40 
Post order traversal of binary tree
10 15 25 40 30 20 
Pre order traversal of binary tree
20 15 10 30 25 40 
Node not found

The constructed AVL tree would look something like this-

That's all for the topic AVL Tree Java Program - Insertion, Traversal, Search. If something is missing or you have something to share about the topic please write a comment.

