We need code that combines pose estimates from multiple cameras. We shouldn't simply average them; we should use an approach verified at the world level.
The overall strategy that world-level teams use is:
- get the estimates from all the cameras
- throw out obviously bad ones: poses outside the field, poses beyond a given distance, or poses above a given ambiguity threshold
- feed the survivors into a Kalman filter. A Kalman filter continuously updates an estimate of some quantity, weighting each new measurement by how much we 'trust' it.
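The steps above can be sketched as plain Java. This is a hypothetical sketch, not Mechanical Advantage's actual code: the class and field names, the field dimensions, and the distance/ambiguity cutoffs are all assumptions, and the fusion step is a scalar Kalman-style update to show the "trust-weighted" idea rather than WPILib's full multi-dimensional filter.

```java
import java.util.List;

public class VisionFusionSketch {
    // Assumed constants: 2023 FRC field is roughly 16.54 m x 8.21 m;
    // the distance and ambiguity cutoffs below are illustrative, not tuned.
    static final double FIELD_LENGTH_M = 16.54;
    static final double FIELD_WIDTH_M = 8.21;
    static final double MAX_TAG_DISTANCE_M = 4.0;
    static final double MAX_AMBIGUITY = 0.2;

    // Minimal stand-in for one camera's pose estimate.
    public record Estimate(double x, double y, double tagDistance, double ambiguity) {}

    /** Step 1: reject estimates that are off the field, too far from the
     *  tag, or too ambiguous. */
    public static boolean accept(Estimate e) {
        boolean onField = e.x() >= 0 && e.x() <= FIELD_LENGTH_M
                       && e.y() >= 0 && e.y() <= FIELD_WIDTH_M;
        return onField
            && e.tagDistance() <= MAX_TAG_DISTANCE_M
            && e.ambiguity() <= MAX_AMBIGUITY;
    }

    /** Step 2: scalar Kalman-style measurement update. The state moves toward
     *  the measurement by a gain that is large when we trust the measurement
     *  (small measVar) and small when we don't. Returns {newState, newVar}. */
    public static double[] fuse(double state, double stateVar,
                                double measurement, double measVar) {
        double gain = stateVar / (stateVar + measVar);
        double newState = state + gain * (measurement - state);
        double newVar = (1 - gain) * stateVar;
        return new double[] { newState, newVar };
    }
}
```

With equal variances the gain is 0.5, so the state moves halfway toward the measurement; a noisy camera (large measVar) moves it much less. That is the weighting the real filter applies per axis.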
Here's a chat discussion for reference
https://chatgpt.com/c/697a07a9-b058-8330-abe1-e31b311a6b90
Starting point to find mechanical advantage code:
https://github.com/Mechanical-Advantage/RobotCode2023/blob/main/src/main/java/org/littletonrobotics/frc2023/subsystems/apriltagvision/AprilTagVisionIONorthstar.java?utm_source=chatgpt.com
For clarity: this class should be separate from the code that grabs the camera sources and feeds the robot, so that it can be unit tested.
We should use WPILib's Kalman filters, found in:
edu.wpi.first.math.estimator
edu.wpi.first.math.system
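A sketch of that class boundary, with all names assumed: the filtering logic lives in a plain class with no hardware or NetworkTables dependencies, so a unit test can drive it directly. On the robot, the accepted results would be forwarded to WPILib's pose estimator (e.g. `SwerveDrivePoseEstimator` in `edu.wpi.first.math.estimator`, via `addVisionMeasurement`); that glue stays outside this class.

```java
import java.util.ArrayList;
import java.util.List;

public class CameraFusion {
    /** Timestamped single-camera estimate; a stand-in for WPILib's Pose2d
     *  plus metadata. */
    public record TimedEstimate(double x, double y, double timestampSeconds,
                                double ambiguity) {}

    private final double maxAmbiguity;

    public CameraFusion(double maxAmbiguity) {
        this.maxAmbiguity = maxAmbiguity;
    }

    /** Returns only the estimates the robot code should pass on to the pose
     *  estimator. No I/O happens here, which is what makes this unit-testable:
     *  a test constructs TimedEstimates directly instead of mocking cameras. */
    public List<TimedEstimate> acceptedEstimates(List<TimedEstimate> raw) {
        List<TimedEstimate> out = new ArrayList<>();
        for (TimedEstimate e : raw) {
            if (e.ambiguity() <= maxAmbiguity) {
                out.add(e);
            }
        }
        return out;
    }
}
```

The robot-side subsystem would own the `SwerveDrivePoseEstimator` and call `addVisionMeasurement` for each accepted estimate; only this class's filtering thresholds need test coverage.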