from typing import List

class Solution:
    def multiply(self, mat1: List[List[int]], mat2: List[List[int]]) -> List[List[int]]:
        m, n, p = len(mat1), len(mat1[0]), len(mat2[0])
        result = [[0] * p for _ in range(m)]
        for i in range(m):
            for k in range(n):
                if mat1[i][k] != 0:  # skip zero entries of mat1
                    for j in range(p):
                        if mat2[k][j] != 0:  # skip zero entries of mat2
                            result[i][j] += mat1[i][k] * mat2[k][j]
        return result
class Solution {
public:
    vector<vector<int>> multiply(vector<vector<int>>& mat1, vector<vector<int>>& mat2) {
        int m = mat1.size(), n = mat1[0].size(), p = mat2[0].size();
        vector<vector<int>> result(m, vector<int>(p, 0));
        for (int i = 0; i < m; ++i) {
            for (int k = 0; k < n; ++k) {
                if (mat1[i][k] != 0) {
                    for (int j = 0; j < p; ++j) {
                        if (mat2[k][j] != 0) {
                            result[i][j] += mat1[i][k] * mat2[k][j];
                        }
                    }
                }
            }
        }
        return result;
    }
};
class Solution {
    public int[][] multiply(int[][] mat1, int[][] mat2) {
        int m = mat1.length, n = mat1[0].length, p = mat2[0].length;
        int[][] result = new int[m][p];
        for (int i = 0; i < m; i++) {
            for (int k = 0; k < n; k++) {
                if (mat1[i][k] != 0) {
                    for (int j = 0; j < p; j++) {
                        if (mat2[k][j] != 0) {
                            result[i][j] += mat1[i][k] * mat2[k][j];
                        }
                    }
                }
            }
        }
        return result;
    }
}
var multiply = function(mat1, mat2) {
    const m = mat1.length, n = mat1[0].length, p = mat2[0].length;
    const result = Array.from({length: m}, () => Array(p).fill(0));
    for (let i = 0; i < m; i++) {
        for (let k = 0; k < n; k++) {
            if (mat1[i][k] !== 0) {
                for (let j = 0; j < p; j++) {
                    if (mat2[k][j] !== 0) {
                        result[i][j] += mat1[i][k] * mat2[k][j];
                    }
                }
            }
        }
    }
    return result;
};
You are given two 2D matrices, mat1 and mat2, where the number of columns in mat1 is equal to the number of rows in mat2. Both matrices are sparse, meaning most of their elements are zero. Your task is to compute the matrix product mat1 x mat2 and return the resulting matrix as a 2D array.
At first glance, matrix multiplication is straightforward: for each position in the resulting matrix, sum the products of corresponding elements from a row in mat1 and a column in mat2. However, if we naively iterate through all elements, we waste time multiplying and adding zeros, which do not affect the result.
Since both matrices are sparse, most elements are zeros. Multiplying by zero always results in zero, so we can skip these computations entirely. This realization leads us to look for non-zero elements only, which can drastically reduce the number of operations and improve efficiency.
The key is to iterate only over the elements of mat1 and mat2 that are non-zero. For each row i in mat1, and for each column k in that row, if mat1[i][k] is non-zero, we then look at the k-th row in mat2. For each column j in mat2, if mat2[k][j] is also non-zero, we compute the product and add it to the result at result[i][j].
This approach ensures that we only perform multiplications and additions when both operands are non-zero, making the solution much more efficient for sparse matrices.
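A common refinement of the same skip-zeros idea (sketched here as a standalone function; the name multiply_with_row_index is illustrative, not from the solutions above) is to gather the non-zero (column, value) pairs of each mat2 row once up front, so the inner loop touches only columns that can actually contribute:

```python
def multiply_with_row_index(mat1, mat2):
    """Sparse multiply; mat2's non-zero entries are gathered once per row."""
    m, n, p = len(mat1), len(mat1[0]), len(mat2[0])
    # For each row k of mat2, keep only its non-zero (column, value) pairs.
    nonzero_rows = [
        [(j, v) for j, v in enumerate(row) if v != 0] for row in mat2
    ]
    result = [[0] * p for _ in range(m)]
    for i in range(m):
        for k in range(n):
            a = mat1[i][k]
            if a != 0:
                # Iterate only over the columns where mat2[k] is non-zero.
                for j, b in nonzero_rows[k]:
                    result[i][j] += a * b
    return result
```

This trades O(total non-zeros in mat2) extra memory for skipping the inner `if` check on every column, which can help when mat2's rows are very sparse.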
Suppose mat1 and mat2 are:

mat1 = [ [1, 0, 0], [-1, 0, 3] ]
mat2 = [ [7, 0, 0], [0, 0, 0], [0, 0, 1] ]
Let's compute the result step by step:
- result[0][0]: Only mat1[0][0] = 1 and mat2[0][0] = 7 are non-zero, so 1 * 7 = 7. All other products are zero.
- result[0][1] and result[0][2]: All relevant products involve multiplying by zero, so they remain zero.
- result[1][0]: mat1[1][0] = -1 and mat2[0][0] = 7 are non-zero, so -1 * 7 = -7.
- result[1][2]: mat1[1][2] = 3 and mat2[2][2] = 1 are non-zero, so 3 * 1 = 3.
The final result is:
[ [7, 0, 0], [-7, 0, 3] ]
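The walkthrough above can be replayed directly with the skip-zeros loop from the Python solution:

```python
# Replay the worked example with the skip-zeros triple loop.
mat1 = [[1, 0, 0], [-1, 0, 3]]
mat2 = [[7, 0, 0], [0, 0, 0], [0, 0, 1]]

m, n, p = len(mat1), len(mat1[0]), len(mat2[0])
result = [[0] * p for _ in range(m)]
for i in range(m):
    for k in range(n):
        if mat1[i][k] != 0:
            for j in range(p):
                if mat2[k][j] != 0:
                    result[i][j] += mat1[i][k] * mat2[k][j]

print(result)  # [[7, 0, 0], [-7, 0, 3]]
```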
For m x n and n x p matrix multiplication, the naive approach takes O(m * n * p) time, since each of the m * p entries is computed by summing n products.

With the skip-zeros approach, let numNonZero1 be the number of non-zero elements in mat1; for each such element, you check up to p columns in mat2. Since only non-zero elements of mat2 trigger a multiplication, the actual number of operations is proportional to the number of non-zero pairs, which can be far below the brute-force bound when the matrices are sparse.

The algorithm uses O(m * p) space for the result matrix.
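To make the operation-count difference concrete, here is a small instrumentation (not part of the solution code) that compares the m * n * p triples the naive method would visit on the example matrices against the non-zero pairs the sparse method actually multiplies:

```python
# Count multiplications on the example matrices: naive vs. skip-zeros.
mat1 = [[1, 0, 0], [-1, 0, 3]]
mat2 = [[7, 0, 0], [0, 0, 0], [0, 0, 1]]
m, n, p = len(mat1), len(mat1[0]), len(mat2[0])

naive_ops = m * n * p  # every (i, k, j) triple performs a multiply
sparse_ops = sum(
    1
    for i in range(m)
    for k in range(n)
    if mat1[i][k] != 0
    for j in range(p)
    if mat2[k][j] != 0
)
print(naive_ops, sparse_ops)  # 18 3
```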
By recognizing that most elements in both matrices are zero, we optimized matrix multiplication by skipping computations involving zeros. This avoids unnecessary work and makes the solution efficient for large, sparse matrices. The key insight is to only multiply and sum when both operands are non-zero, leading to significant performance gains over the brute-force approach.