Commit 62e26f4
1 parent (9cf8b28), 50 changed files with 15,321 additions and 169 deletions.
* create `MediapipeGraphStore`, `TrackingComponent` and `AvatarController`
* fix graph encoding
* get landmarks
* update Akihabara to latest version
* check if camera is ready
* Revert "check if camera is ready"
  This reverts commit 5e8f82a.
* use bitmap instead
* null check
* fix annoying crash -yet again-
* install linux package and begin BGR24 to BGRA32 conversion
* convert pixel format
* full encoding
* Improved vscode launch and tasks
* Fix color encoding
* upgrade akihabara windows runtime
* remove debugging code
* Improve image encoding speed by 30x
* check if Mat is empty
* Ignore empty mat
* Remove duplicate if
* add TryGetRawFrame
* cleanup useless garbage
* Ignore mediapipe folder for VSCode testing
* Adding a FaceControlPoints class
* Enclose it in correct namespace
* Math.
* Adding a FaceControlPoints to the TrackingComponent
* refactor and use `FaceData`
* Keep memory under control and check for mediapipe errors
* add custom timestamp counter
* make image conversion better
* check if it's byte array
* Add CONSTANT_CASE parameters for other avatars
* Fix "data cannot be null" bug
* fix memory leak
* remove because i forgor
* Use exceptions instead of booleans
* Improve TryGetRawFrame()
* Add an OutputFrame to facilitate camera preview
* Do not expose the ImageFrame to avoid AccessViolation
* update camera preview to show tracking
* fix colors and stuff poggers
* properly dispose image
* Add a ConvertRaw util function
* Remove conversion
* Remove unnecessary code and use ConvertRaw util function
* Bloating FaceData with names
* First implementation of 3d angles
* improve gitignore ffs
* Refactor distance functions
* Re-refactor distance functions
* Add angle movement
* Bypass compositor on Linux while debugging
* Refactor angle formulas
* Refactor AvatarController
* OPEN THE EYES
* Flip X angle
* move away from update thread and only send data to mediapipe when a camera has a new frame
* - add hardcoded model
  - add disclaimer
  - send frame to mediapipe every camera tick
  - refactor `TrackingComponent`
* Nitrous forgor 💀
* Figured out the math for additive breathing
* Fixed CubismBreathController
* Add body angle control
* Make the breath controller absolute instead of additive
  We'll implement a safe way to combine them at some point.
* Set default arm opacity
* Apply smoothing
* Add Calibrate feature
* Remove hard-coded offset
* use an easeInOutQuint function for the eyes
  Seems slightly better, but still janky.
* fix tracker not running on camera tick and remove camera preview (for now)
* final steps!
  - move Calibrate to `RecognitionSection` and add hint text
  - prevent movement jittering

Co-authored-by: Nathan Alo <nathan.alo2000@gmail.com>
Co-authored-by: Adryzz <46694241+adryzz@users.noreply.github.com>
Co-authored-by: Speykious <speykious@gmail.com>
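Several of the items above ("install linux package and begin BGR24 to BGRA32 conversion", "convert pixel format", "Fix color encoding") revolve around repacking camera frames for upload as textures. Below is a minimal C# sketch of what such a conversion involves; the class, method name, and signature are hypothetical, not the code from this commit.

using System;

public static class PixelFormats
{
    // Hypothetical helper (not the commit's actual code): repack a tightly
    // packed BGR24 camera frame into BGRA32 by copying the three colour
    // channels and appending an opaque alpha byte per pixel.
    public static byte[] ConvertBgr24ToBgra32(ReadOnlySpan<byte> bgr, int width, int height)
    {
        var bgra = new byte[width * height * 4];

        for (int src = 0, dst = 0; dst < bgra.Length; src += 3, dst += 4)
        {
            bgra[dst + 0] = bgr[src + 0]; // blue
            bgra[dst + 1] = bgr[src + 1]; // green
            bgra[dst + 2] = bgr[src + 2]; // red
            bgra[dst + 3] = 0xFF;         // opaque alpha
        }

        return bgra;
    }
}

A tight single-pass copy like this, with no per-pixel allocations, is the kind of change that could plausibly account for the "Improve image encoding speed by 30x" item, though the commit message itself does not say how the speedup was achieved.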
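"First implementation of 3d angles" and the later angle-formula refactors concern estimating head rotation from the tracked landmarks. As one illustration (not the commit's actual formulas), head roll can be read off the line through the two outer eye corners:

using System;
using System.Numerics;

public static class FaceAngles
{
    // Illustrative only: roll is the angle of the line between the outer
    // eye corners relative to horizontal. Landmarks are assumed to be
    // normalised [0..1] image coordinates, as MediaPipe emits them.
    public static float RollDegrees(Vector2 leftEyeOuter, Vector2 rightEyeOuter)
    {
        Vector2 delta = rightEyeOuter - leftEyeOuter;
        return MathF.Atan2(delta.Y, delta.X) * (180f / MathF.PI);
    }
}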
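The "Add Calibrate feature" / "Remove hard-coded offset" pair suggests capturing the user's neutral pose on demand and subtracting it from later readings instead of baking in a fixed offset. A minimal sketch of that idea, with hypothetical types and names:

using System.Numerics;

// Hypothetical sketch: store the current head angles as the neutral pose
// when the user calibrates, then report subsequent angles relative to it.
public class PoseCalibrator
{
    private Vector3 neutral;

    // Invoked by the Calibrate action.
    public void Calibrate(Vector3 currentAngles) => neutral = currentAngles;

    // Applied to every tracked frame afterwards.
    public Vector3 Apply(Vector3 rawAngles) => rawAngles - neutral;
}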
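"use an easeInOutQuint function for the eyes" refers to the standard quintic easing curve, which flattens small fluctuations near the fully open and fully closed positions while keeping the middle of a blink fast. A sketch, assuming a normalised eye-openness value in [0, 1]:

using System;

public static class Easing
{
    // Standard ease-in-out-quint curve; t is a normalised eye-openness
    // value in [0, 1]. Values near 0 and 1 are flattened, damping jitter
    // at the extremes while the middle of a blink stays responsive.
    public static float EaseInOutQuint(float t)
    {
        t = Math.Clamp(t, 0f, 1f);
        return t < 0.5f
            ? 16f * t * t * t * t * t
            : 1f - MathF.Pow(-2f * t + 2f, 5f) / 2f;
    }
}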
Vignette.Game.Resources/Graphs/face_mesh_desktop_live.pbtxt (63 additions, 0 deletions)
@@ -0,0 +1,63 @@
# MediaPipe graph that performs face mesh with TensorFlow Lite on CPU.

# Input image. (ImageFrame)
input_stream: "input_video"

# Output image with rendered results. (ImageFrame)
output_stream: "output_video"
# Collection of detected/processed faces, each represented as a list of
# landmarks. (std::vector<NormalizedLandmarkList>)
output_stream: "multi_face_landmarks"

# Throttles the images flowing downstream for flow control. It passes through
# the very first incoming image unaltered, and waits for downstream nodes
# (calculators and subgraphs) in the graph to finish their tasks before it
# passes through another image. All images that come in while waiting are
# dropped, limiting the number of in-flight images in most part of the graph to
# 1. This prevents the downstream nodes from queuing up incoming images and data
# excessively, which leads to increased latency and memory usage, unwanted in
# real-time mobile applications. It also eliminates unnecessary computation,
# e.g., the output produced by a node may get dropped downstream if the
# subsequent nodes are still busy processing previous inputs.
node {
  calculator: "FlowLimiterCalculator"
  input_stream: "input_video"
  input_stream: "FINISHED:output_video"
  input_stream_info: {
    tag_index: "FINISHED"
    back_edge: true
  }
  output_stream: "throttled_input_video"
}

# Defines side packets for further use in the graph.
node {
  calculator: "ConstantSidePacketCalculator"
  output_side_packet: "PACKET:num_faces"
  node_options: {
    [type.googleapis.com/mediapipe.ConstantSidePacketCalculatorOptions]: {
      packet { int_value: 1 }
    }
  }
}

# Subgraph that detects faces and corresponding landmarks.
node {
  calculator: "FaceLandmarkFrontCpu"
  input_stream: "IMAGE:throttled_input_video"
  input_side_packet: "NUM_FACES:num_faces"
  output_stream: "LANDMARKS:multi_face_landmarks"
  output_stream: "ROIS_FROM_LANDMARKS:face_rects_from_landmarks"
  output_stream: "DETECTIONS:face_detections"
  output_stream: "ROIS_FROM_DETECTIONS:face_rects_from_detections"
}

# Subgraph that renders face-landmark annotation onto the input image.
node {
  calculator: "FaceRendererCpu"
  input_stream: "IMAGE:throttled_input_video"
  input_stream: "LANDMARKS:multi_face_landmarks"
  input_stream: "NORM_RECTS:face_rects_from_landmarks"
  input_stream: "DETECTIONS:face_detections"
  output_stream: "IMAGE:output_video"
}