class Solution:
    def reconstructMatrix(self, upper, lower, colsum):
        n = len(colsum)
        row1 = [0] * n
        row2 = [0] * n
        # First pass: a column sum of 2 forces a 1 into both rows.
        for i in range(n):
            if colsum[i] == 2:
                row1[i] = 1
                row2[i] = 1
                upper -= 1
                lower -= 1
        # Second pass: a column sum of 1 goes to whichever row still needs 1's,
        # preferring the upper row.
        for i in range(n):
            if colsum[i] == 1:
                if upper > 0:
                    row1[i] = 1
                    upper -= 1
                elif lower > 0:
                    row2[i] = 1
                    lower -= 1
                else:
                    return []
        # Both counters must be used up exactly, otherwise no valid matrix exists.
        if upper != 0 or lower != 0:
            return []
        return [row1, row2]
class Solution {
public:
    vector<vector<int>> reconstructMatrix(int upper, int lower, vector<int>& colsum) {
        int n = colsum.size();
        vector<int> row1(n, 0), row2(n, 0);
        for (int i = 0; i < n; ++i) {
            if (colsum[i] == 2) {
                row1[i] = 1;
                row2[i] = 1;
                --upper;
                --lower;
            }
        }
        for (int i = 0; i < n; ++i) {
            if (colsum[i] == 1) {
                if (upper > 0) {
                    row1[i] = 1;
                    --upper;
                } else if (lower > 0) {
                    row2[i] = 1;
                    --lower;
                } else {
                    return {};
                }
            }
        }
        if (upper != 0 || lower != 0) return {};
        return {row1, row2};
    }
};
class Solution {
    public List<List<Integer>> reconstructMatrix(int upper, int lower, List<Integer> colsum) {
        int n = colsum.size();
        List<Integer> row1 = new ArrayList<>(Collections.nCopies(n, 0));
        List<Integer> row2 = new ArrayList<>(Collections.nCopies(n, 0));
        for (int i = 0; i < n; ++i) {
            if (colsum.get(i) == 2) {
                row1.set(i, 1);
                row2.set(i, 1);
                upper--;
                lower--;
            }
        }
        for (int i = 0; i < n; ++i) {
            if (colsum.get(i) == 1) {
                if (upper > 0) {
                    row1.set(i, 1);
                    upper--;
                } else if (lower > 0) {
                    row2.set(i, 1);
                    lower--;
                } else {
                    return new ArrayList<>();
                }
            }
        }
        if (upper != 0 || lower != 0) return new ArrayList<>();
        List<List<Integer>> result = new ArrayList<>();
        result.add(row1);
        result.add(row2);
        return result;
    }
}
var reconstructMatrix = function(upper, lower, colsum) {
    let n = colsum.length;
    let row1 = new Array(n).fill(0);
    let row2 = new Array(n).fill(0);
    for (let i = 0; i < n; ++i) {
        if (colsum[i] === 2) {
            row1[i] = 1;
            row2[i] = 1;
            upper--;
            lower--;
        }
    }
    for (let i = 0; i < n; ++i) {
        if (colsum[i] === 1) {
            if (upper > 0) {
                row1[i] = 1;
                upper--;
            } else if (lower > 0) {
                row2[i] = 1;
                lower--;
            } else {
                return [];
            }
        }
    }
    if (upper !== 0 || lower !== 0) return [];
    return [row1, row2];
};
You are given three inputs:

- upper: the total number of 1's that must appear in the first row of a 2-row binary matrix.
- lower: the total number of 1's that must appear in the second row.
- colsum: an array where colsum[i] is the sum of the elements in column i (so colsum[i] is 0, 1, or 2).

The task is to reconstruct any 2-row binary matrix such that:

- The sum of the first row is upper.
- The sum of the second row is lower.
- For each column i, the sum of the two rows at column i is colsum[i].

At first glance, the problem seems to require trying all possible binary combinations for each column, but that would be very inefficient, especially for large inputs. Instead, we can use the constraints to guide our assignments.
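To make the contract concrete, here is a small hand-made instance (the numbers are illustrative, not taken from the problem statement above); the asserts simply restate the three requirements in Python:

upper, lower, colsum = 2, 2, [2, 1, 0, 1]
row1, row2 = [1, 1, 0, 0], [1, 0, 0, 1]   # one valid reconstruction

assert sum(row1) == upper                               # first row holds `upper` ones
assert sum(row2) == lower                               # second row holds `lower` ones
assert [a + b for a, b in zip(row1, row2)] == colsum    # every column sum matches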
For each column, we know exactly how many 1's must be present. If colsum[i] == 2, both rows must have a 1 in that column. If colsum[i] == 0, both must have a 0. If colsum[i] == 1, one of the rows must have a 1, but we need to decide which one based on the remaining upper and lower counts.

Instead of brute force, we can process the columns in order, greedily assigning 1's where required and tracking how many 1's remain to be assigned in each row.
The solution is built around a greedy, two-pass assignment process:

- First pass: if colsum[i] == 2, both rows must have a 1 in this column. Assign 1 to both rows in column i and decrease upper and lower by 1.
- Second pass: if colsum[i] == 1, assign the 1 to the row (upper or lower) that still needs more 1's. Prefer the first row if upper is still positive, otherwise assign to the second row. After assignment, decrease the corresponding counter.
- If neither upper nor lower is positive when a 1 is needed, it's impossible, so return an empty array/list.
- Final check: if upper != 0 or lower != 0 after both passes, the assignment is invalid, so return an empty array/list.

This approach ensures we never violate the row sum constraints and always respect the column sums.
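As a quick sanity check of these rules, the Python class from the top of this article can be exercised on one feasible and one infeasible input (the inputs here are illustrative):

solver = Solution()

# Feasible: the first column forces a 1 into both rows, and the remaining
# single 1 is given to the upper row.
print(solver.reconstructMatrix(2, 1, [2, 1, 0]))   # [[1, 1, 0], [1, 0, 0]]

# Infeasible: the columns demand four 1's but upper + lower is only three,
# so the final check fails and an empty list is returned.
print(solver.reconstructMatrix(2, 1, [2, 2]))      # []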
Let's use the following example: upper = 2, lower = 1, colsum = [1,1,1].

- Initialize row1 = [0,0,0], row2 = [0,0,0].
- First pass: no column has colsum[i] == 2, so nothing changes.
- Second pass, for each i where colsum[i] == 1:
  - i = 0: upper = 2, assign 1 to row1[0], upper = 1
  - i = 1: upper = 1, assign 1 to row1[1], upper = 0
  - i = 2: upper = 0, so assign 1 to row2[2], lower = 0
- Final check: upper = 0, lower = 0 (all constraints satisfied).
- Result: [[1,1,0], [0,0,1]]

This matches all the constraints: both row sums and column sums.
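The same walkthrough can be reproduced mechanically. The helper below is a hypothetical, cut-down sketch (second pass only, no validity checks) whose only purpose is to print the counters after each assignment for this particular input:

def trace_second_pass(upper, lower, colsum):
    # Assumes colsum contains only 0's and 1's, as in the example above.
    n = len(colsum)
    row1, row2 = [0] * n, [0] * n
    for i in range(n):
        if colsum[i] == 1:
            if upper > 0:
                row1[i] = 1
                upper -= 1
            elif lower > 0:
                row2[i] = 1
                lower -= 1
            print(f"i = {i}: upper = {upper}, lower = {lower}")
    return [row1, row2]

print(trace_second_pass(2, 1, [1, 1, 1]))
# i = 0: upper = 1, lower = 1
# i = 1: upper = 0, lower = 1
# i = 2: upper = 0, lower = 0
# [[1, 1, 0], [0, 0, 1]]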
Brute-force approach: would try all 2^n possible assignments for each row, which is exponential and infeasible for a large colsum.

Optimized approach (as above): runs in O(n) time, where n is the length of colsum. Each column is processed at most twice (once for 2's, once for 1's).
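The two passes are easy to reason about, but the same greedy idea can also be folded into a single pass by giving each colsum[i] == 1 to whichever row has more remaining capacity. The function below is a sketch of that variant, not the two-pass approach implemented above:

def reconstruct_single_pass(upper, lower, colsum):
    # Single pass: a 2 forces both rows; a 1 goes to the row with more capacity left.
    n = len(colsum)
    row1, row2 = [0] * n, [0] * n
    for i, c in enumerate(colsum):
        if c == 2:
            row1[i], row2[i] = 1, 1
            upper -= 1
            lower -= 1
        elif c == 1:
            if upper >= lower:
                row1[i] = 1
                upper -= 1
            else:
                row2[i] = 1
                lower -= 1
    # Same final test as before: both counters must land on exactly zero.
    return [row1, row2] if upper == 0 and lower == 0 else []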
The key insight is that the problem's constraints allow for a greedy, linear solution. By assigning mandatory 1's for colsum[i] == 2 first, and then distributing the remaining 1's for colsum[i] == 1 based on the available upper and lower counts, we ensure all requirements are met. The final check for leftover 1's ensures validity.
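Equivalently, feasibility can be decided before any assignment is made: a reconstruction exists exactly when the 1's demanded by colsum add up to upper + lower and every column with sum 2 can draw a 1 from both rows. The check below is a sketch of that counting argument (it is not part of the implementations above) and should accept and reject the same inputs as the greedy's final check:

def is_reconstructible(upper, lower, colsum):
    # Total demand must match total supply, and each colsum == 2 column
    # consumes one 1 from the upper row and one from the lower row.
    twos = colsum.count(2)
    return upper + lower == sum(colsum) and upper >= twos and lower >= twos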
This approach is elegant because it avoids brute-force enumeration and leverages the problem's structure for an efficient O(n) solution.