class Solution:
    def maxMatrixSum(self, matrix):
        total = 0                # sum of absolute values of every element
        min_abs = float('inf')   # smallest absolute value seen so far
        neg_count = 0            # how many elements are negative
        for row in matrix:
            for val in row:
                if val < 0:
                    neg_count += 1
                abs_val = abs(val)
                total += abs_val
                min_abs = min(min_abs, abs_val)
        if neg_count % 2 == 0:
            # An even number of negatives can all be flipped to positive.
            return total
        else:
            # One element must stay negative; sacrifice the smallest absolute value.
            return total - 2 * min_abs
class Solution {
public:
    long long maxMatrixSum(vector<vector<int>>& matrix) {
        long long total = 0;
        int minAbs = INT_MAX;
        int negCount = 0;
        for (const auto& row : matrix) {
            for (int val : row) {
                if (val < 0) negCount++;
                int absVal = abs(val);
                total += absVal;
                if (absVal < minAbs) minAbs = absVal;
            }
        }
        if (negCount % 2 == 0)
            return total;
        else
            return total - 2LL * minAbs;
    }
};
class Solution {
    public long maxMatrixSum(int[][] matrix) {
        long total = 0;
        int minAbs = Integer.MAX_VALUE;
        int negCount = 0;
        for (int[] row : matrix) {
            for (int val : row) {
                if (val < 0) negCount++;
                int absVal = Math.abs(val);
                total += absVal;
                if (absVal < minAbs) minAbs = absVal;
            }
        }
        if (negCount % 2 == 0)
            return total;
        else
            return total - 2L * minAbs;
    }
}
var maxMatrixSum = function(matrix) {
    let total = 0;
    let minAbs = Infinity;
    let negCount = 0;
    for (let row of matrix) {
        for (let val of row) {
            if (val < 0) negCount++;
            let absVal = Math.abs(val);
            total += absVal;
            if (absVal < minAbs) minAbs = absVal;
        }
    }
    if (negCount % 2 === 0) {
        return total;
    } else {
        return total - 2 * minAbs;
    }
};
Given a square matrix matrix of size n x n, you are allowed to perform any number of operations where, in each operation, you may choose any two adjacent elements (sharing an edge) and multiply both of them by -1 (i.e., flip their signs). The goal is to maximize the sum of all elements in the matrix after performing any number of such operations.
Constraints: 1 <= n <= 300, -10^5 <= matrix[i][j] <= 10^5.
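A practical note on these bounds: the sum of absolute values can reach n^2 * 10^5 = 300 * 300 * 10^5 = 9 * 10^9, which exceeds the 32-bit integer range; this is why the C++ and Java solutions above accumulate into long long / long. A quick sanity check (plain Python; the variable names here are only for illustration):

n_max, max_abs = 300, 10 ** 5
worst_total = n_max * n_max * max_abs   # 9,000,000,000 in the worst case
print(worst_total > 2 ** 31 - 1)        # True -> a 64-bit accumulator is required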
At first glance, it may seem that the only way to maximize the matrix sum is to brute-force all possible combinations of sign flips. However, given the size of the matrix, this is computationally infeasible.
Instead, let's think about the operation itself: flipping the signs of two adjacent elements. Flipping two negatives makes both positive (the negative count drops by 2); flipping two positives makes both negative (the count rises by 2); flipping one negative and one positive leaves the count unchanged. In every case, the number of negatives changes by 0 or ±2.
Therefore, the parity of the number of negative values is invariant. If there is an even number of negatives, they can always be eliminated in pairs, because a negative sign can be walked across the grid through a chain of adjacent flips until two negatives cancel. If there is an odd number, exactly one negative will remain no matter how you flip.
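To make the invariant concrete, here is a minimal sketch (not part of the solution; the helper names neg_parity and random_adjacent_flip are hypothetical) that applies many random adjacent flips and checks that the parity of the negative count never changes:

import random

def neg_parity(mat):
    # Parity (0 or 1) of the number of negative entries.
    return sum(v < 0 for row in mat for v in row) % 2

def random_adjacent_flip(mat):
    # Multiply a random edge-adjacent pair of entries by -1 (the allowed operation).
    n = len(mat)
    if random.random() < 0.5:                      # horizontal neighbors
        r, c = random.randrange(n), random.randrange(n - 1)
        mat[r][c] *= -1
        mat[r][c + 1] *= -1
    else:                                          # vertical neighbors
        r, c = random.randrange(n - 1), random.randrange(n)
        mat[r][c] *= -1
        mat[r + 1][c] *= -1

mat = [[1, -1], [-1, 1]]
start = neg_parity(mat)
for _ in range(1000):
    random_adjacent_flip(mat)
    assert neg_parity(mat) == start                # parity is preserved every time
print("parity preserved across 1000 random operations")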
This insight leads us to an optimal approach: try to make all values positive, but if you end up with an odd count of negatives, one must stay negative, and to minimize the penalty, it should be the element with the smallest absolute value.
In a single pass over the matrix we compute:
- the sum of absolute values of all elements (total),
- the number of negative elements (neg_count),
- the smallest absolute value (min_abs).
If neg_count is even, the answer is total. If it is odd, the answer is total - 2 * min_abs (since we counted that element as positive, but it must be negative in the end). This approach ensures we only need a single pass over the matrix, and no extra data structures are needed, making it efficient and elegant.
Example Input: matrix = [[1, -1], [-1, 1]]
Absolute sum: 1 + 1 + 1 + 1 = 4
neg_count = 2 (even).
min_abs = 1.
Since neg_count is even, we can flip all negatives to positives. The maximum sum is 4.
Another Example: matrix = [[1, 2], [-3, 4]]
Absolute sum: 1 + 2 + 3 + 4 = 10
Negative count: 1
Minimum absolute value: 1
Since negative count is odd, the answer is 10 - 2*1 = 8.
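As a quick check, calling the Python solution defined above on both examples reproduces these values:

s = Solution()
print(s.maxMatrixSum([[1, -1], [-1, 1]]))   # 4
print(s.maxMatrixSum([[1, 2], [-3, 4]]))    # 8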
Brute-force approach: would involve generating all possible sign configurations reachable via the allowed operations, which is exponential in the number of elements (O(2^(n^2))) and thus infeasible for large matrices.
Optimized approach: only requires a single pass over the matrix, i.e. O(n^2) time and O(1) extra space.
The Maximum Matrix Sum problem leverages a clever parity observation: the parity of the number of negative entries is invariant under the allowed operation. By summing absolute values and counting negatives, we can quickly determine whether all elements can be made positive or whether one must remain negative. The final answer is either the total of all absolute values (if the negative count is even) or that total minus twice the smallest absolute value (if it is odd). This approach is both elegant and highly efficient, requiring only a single pass and no extra memory.