mirror of
https://repo.dec05eba.com/gpu-screen-recorder
synced 2026-01-31 01:13:06 +09:00
Add support for camera (yuyv, mjpeg) and multiple capture sources
@@ -63,6 +63,7 @@ These are the dependencies needed to build GPU Screen Recorder:
* libdrm
* libcap
* vulkan-headers
* linux-api-headers

## Optional dependencies

When building GPU Screen Recorder with portal support (`-Dportal=true` meson option, which is enabled by default) these dependencies are also needed:
@@ -71,6 +72,7 @@ When building GPU Screen Recorder with portal support (`-Dportal=true` meson opt

## Runtime dependencies

* libglvnd (which provides libgl, libglx and libegl) is needed. Your system needs to support at least OpenGL ES 3.0 (released in 2012)
* libturbojpeg (aka libjpeg-turbo) is needed when capturing a camera with the mjpeg pixel format option

There are also additional dependencies needed at runtime depending on your GPU vendor:
@@ -252,4 +254,7 @@ This issue hasn't been observed on X11 yet, but if you do observe it you can eit

## Password prompt shows up when I try to record my screen

If GPU Screen Recorder is installed with -Dcapabilities=true (which is the default option) then `gsr-kms-server` is installed with admin capabilities.
This removes the password prompt when recording a monitor with the `-w monitor` option (for example `-w screen`). However, if the root user is disabled on the system then the password prompt will show up anyway.
If the root user is disabled on your system then you can instead record with `-w focused` or `-w window_id` on X11 or `-w portal` on Wayland.

## GPU usage is high on my laptop

GPU usage on battery powered devices is misleading. For example, Intel iGPUs have multiple performance levels and the GPU usage reported by the system is the GPU usage at the current performance level.
The performance level changes depending on the GPU load, so it may say that GPU usage is 80%, but the actual GPU usage may be 5%.
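This rescaling can be sketched numerically. Assuming usage scales linearly with the GPU clock (a simplification for illustration; the numbers below are hypothetical, not measurements), the reported percentage can be converted to usage relative to the maximum performance level:

```c
#include <assert.h>

/* Rough illustration of why reported GPU usage is misleading: the reported
   percentage is relative to the current performance level (clock), not the
   maximum one. Linear scaling with clock speed is a simplifying assumption. */
static double effective_gpu_usage(double reported_percent, double current_mhz, double max_mhz) {
    return reported_percent * (current_mhz / max_mhz);
}
```

A GPU reporting 80% usage while clocked at 100 MHz out of a 1600 MHz maximum is doing roughly the same work as 5% usage at full clock.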
32
TODO
@@ -4,12 +4,8 @@ See https://trac.ffmpeg.org/wiki/EncodingForStreamingSites for optimizing stream
Look at VK_EXT_external_memory_dma_buf.
Use mov+faststart.
Allow recording all monitors/selected monitor without nvfbc by recording the compositor proxy window and only recording the part that matches the monitor(s).
Support amf and qsv.
Disable flipping on nvidia? This might fix some stuttering issues on some setups. See NvCtrlGetAttribute/NvCtrlSetAttributeAndGetStatus NV_CTRL_SYNC_TO_VBLANK https://github.com/NVIDIA/nvidia-settings/blob/d5f022976368cbceb2f20b838ddb0bf992f0cfb9/src/gtk%2B-2.x/ctkopengl.c.
Replays seem to have some issues with audio/video. Why?
Cleanup unused gl/egl functions, macros, etc.
Set audio track name to the audio device name (if not a merge of multiple audio devices).
Add support for webcam, but only really for amd/intel because amd/intel can get drm fd access to the webcam, nvidia can't. This allows us to create an opengl texture directly from the webcam fd for optimal performance.
Reverse engineer nvapi so we can disable "force p2 state" on linux too (nvapi profile api with the settings id 0x50166c5e).
Support yuv444p on amd/intel.
Fix yuv444 for hevc.
@@ -94,7 +90,7 @@ Support vfr matching games exact fps all the time. On x11 use damage tracking, o
Support selecting which gpu to use. This can be done in egl with eglQueryDevicesEXT and then eglGetPlatformDisplayEXT. This will automatically work on AMD and Intel as vaapi uses the same device. On nvidia we need to use eglQueryDeviceAttribEXT with EGL_CUDA_DEVICE_NV.
Maybe on glx (nvidia x11 nvfbc) we need to use __NV_PRIME_RENDER_OFFLOAD, __NV_PRIME_RENDER_OFFLOAD_PROVIDER, __GLX_VENDOR_LIBRARY_NAME, __VK_LAYER_NV_optimus and VK_ICD_FILENAMES instead. Just look at /usr/bin/prime-run.

When adding support for steam deck, add option to send video to another computer.
Add option to send video to another computer.

New gpu screen recorder gui should have the option to cut the video directly, maybe by running an ffmpeg command or implementing that ourselves. Only support gpu screen recorder video files.

Check if is software renderer by using eglQueryDisplayAttribEXT(egl_display, EGL_DEVICE_EXT..) eglQueryDeviceStringEXT(egl_device, EGL_EXTENSIONS) and check for "EGL_MESA_device_software".
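The extension check itself needs whole-token matching rather than a plain substring search, since one extension name can be a prefix of another. A hypothetical helper for the string comparison (the eglQueryDisplayAttribEXT/eglQueryDeviceStringEXT calls themselves are omitted):

```c
#include <assert.h>
#include <string.h>

/* Returns 1 if |ext| appears as a whole space-separated token in the EGL
   extension string |extensions|. Hypothetical helper for illustration; the
   actual EGL device query calls are not shown here. */
static int egl_extension_supported(const char *extensions, const char *ext) {
    const size_t ext_len = strlen(ext);
    const char *p = extensions;
    while((p = strstr(p, ext)) != NULL) {
        const int starts_at_token = (p == extensions || p[-1] == ' ');
        const int ends_at_token = (p[ext_len] == '\0' || p[ext_len] == ' ');
        if(starts_at_token && ends_at_token)
            return 1;
        p += ext_len;
    }
    return 0;
}
```

With this helper, `egl_extension_supported(device_extensions, "EGL_MESA_device_software")` distinguishes the real extension from any longer name that merely contains it.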
@@ -131,8 +127,6 @@ Enable 2-pass encoding.

Restart replay/update video resolution if monitor resolution changes.

Fix pure vaapi copy on intel.

Use nvidia low latency options for better encoding times.

Test ideal async_depth value. Increasing async_depth also increases gpu memory usage a lot (from 100mb to 500mb when moving from async_depth 2 to 16) at 4k resolution. Setting it to 8 increases it by 200mb which might be ok.
@@ -354,3 +348,27 @@ There is a leak in nvfbc. When a monitor is turned off and then on there will be
Right now a mitigation has been added to not try to recreate the nvfbc session if the capture target (monitor) isn't connected (predict if nvfbc session creation will fail).
One possible reason this happens is because bExternallyManagedContext is set to true.
This also means that nvfbc leaks the connection when destroying nvfbc, even if the monitor is connected (this is not an issue right now because exit is done, but if gsr was turned into a library it would be).

Add option to set audio source volume, maybe by doing for example: -a "default_input|app:firefox;volume=50"

Optimize v4l2 mjpeg by decompressing to yuv (tjDecompressToYUV) instead of rgb. This would allow removing the yuv to rgb conversion on cpu step (and rgb to yuv on the gpu as well) as well as reducing the cpu->gpu image data bandwidth as yuv is compressed.
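The bandwidth saving is easy to quantify: packed 8-bit RGB is 3 bytes per pixel, while planar yuv420 (4:2:0 subsampling, assumed here; a camera's mjpeg stream may also use 4:2:2) is 1.5 bytes per pixel. A quick sketch of the per-frame sizes:

```c
#include <assert.h>

/* Frame size in bytes for packed 8-bit RGB: 3 bytes per pixel. */
static long rgb_frame_bytes(long width, long height) {
    return width * height * 3;
}

/* Frame size in bytes for planar yuv420 (4:2:0): a full-resolution Y plane
   plus quarter-resolution U and V planes, i.e. 1.5 bytes per pixel. */
static long yuv420_frame_bytes(long width, long height) {
    return width * height + 2 * ((width / 2) * (height / 2));
}
```

For a 1920x1080 frame that is 6,220,800 bytes of RGB versus 3,110,400 bytes of yuv420, so the cpu->gpu upload is halved before even counting the skipped yuv->rgb->yuv round trip.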

Do jpeg decoding on the gpu for v4l2 mjpeg capture (see https://github.com/negge/jpeg_gpu). Or use jpeg decoding from vaapi/nvdec.

Support other v4l2 pixel formats, such as h264 and rgb/bgr. High-end cameras use h264 for the high resolution high framerate option (4k 60fps or higher).

AV1 medium quality on nvidia seems to be very high quality. The quality needs to be rescaled. Need to test on an nvidia av1 gpu, but I don't have any such gpu.

Support multiple -w, to record different sources to different video tracks. For example gameplay to video track 1 and webcam to video track 2.

Update man page with info about v4l2 and the new region capture format. Mention that -region is deprecated.

Make multiple -w portal work with -restore-portal-session.

Test if webcam capture works on intel and nvidia. Especially nvidia on x11 because it uses glx there for nvfbc while v4l2 needs egl.

Play with DRM_FORMAT_MOD_LINEAR or other linear formats to avoid the cuda copy. If it's already in linear format then instead check the format of the target texture and use the same format.

Give an early error when using an invalid v4l2 path, like with monitor. Use gsr_capture_v4l2_list_devices for the query. Or maybe remove the early error for monitor to simplify the code.

Support camera capture with pipewire to support multiple applications recording the camera at the same time. Might not be as efficient as V4L2.
@@ -3,9 +3,8 @@
gpu-screen-recorder \- The fastest screen recording tool for Linux
.SH SYNOPSIS
.B gpu-screen-recorder
.RI [ options ]
.B \-w
.I window_id|monitor|focused|portal|region
.I window_id|monitor|focused|portal|region|v4l2_device_path
.RI [ options ]
.B \-o
.I output_file
@@ -50,8 +49,9 @@ Minimal performance impact compared to traditional screen recorders
.SH OPTIONS
.SS Capture Options
.TP
.BI \-w " window_id|monitor|focused|portal|region"
Specify what to record. Valid values are:
.BI \-w " window_id|monitor|focused|portal|region|v4l2_device_path"
Specify what to record.
Formats:
.RS
.IP \(bu 3
.B window
@@ -77,10 +77,99 @@ option)
Monitor name (e.g.,
.BR DP\-1 )
- Record specific monitor
.IP \(bu 3
V4L2 device path (e.g.,
.BR /dev/video0 )
- Record camera device (V4L2).

Other applications can't use the camera while GPU Screen Recorder is using it, and GPU Screen Recorder may not be able to use the camera if another application is already using it.
.IP \(bu 3
Combine sources with | (e.g.,
.BR "monitor:screen|v4l2:/dev/video0" )
.RE
.PP
Run
.B \-\-list\-capture\-options
to see available options.
to list available capture sources.
.PP
Run
.B \-\-list\-v4l2\-devices
to list available camera devices (V4L2).
.PP
Additional options can be passed to each capture source by splitting the capture source with
.B ;
for example
.BR "screen;x=50;y=50".
.br
These are the available options for capture sources:
.RS
.IP \(bu 3
.B x
- The X position in pixels. If the number ends with % and is a number between 0 and 100 then it's a position relative to the video size
.IP \(bu 3
.B y
- The Y position in pixels. If the number ends with % and is a number between 0 and 100 then it's a position relative to the video size
.IP \(bu 3
.B width
- The width in pixels. If the number ends with % and is a number between 0 and 100 then it's a size relative to the video size
.IP \(bu 3
.B height
- The height in pixels. If the number ends with % and is a number between 0 and 100 then it's a size relative to the video size
.IP \(bu 3
.B halign
- The horizontal alignment, should be either
.BR "start",
.B center
or
.BR "end".

Set to
.B center
by default, except for camera (V4L2) when capturing the camera above something else in which case this is set to
.B start
by default
.IP \(bu 3
.B valign
- The vertical alignment, should be either
.BR "start",
.B center
or
.BR "end".

Set to
.B center
by default, except for camera (V4L2) when capturing the camera above something else in which case this is set to
.B end
by default
.IP \(bu 3
.B hflip
- If the source should be flipped horizontally, should be either
.B "true"
or
.BR "false".
Set to
.B false
by default
.IP \(bu 3
.B vflip
- If the source should be flipped vertically, should be either
.B "true"
or
.BR "false".
Set to
.B false
by default
.IP \(bu 3
.B pixfmt
- The pixel format for cameras (V4L2), should be either
.BR "auto",
.B yuyv
or
.BR "mjpeg".
Set to
.B auto
by default
.RE
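The percentage semantics described above can be illustrated with a small value-resolution sketch (a hypothetical helper for illustration, not the parser gpu-screen-recorder actually uses):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Resolve one capture-source option value such as "50" or "50%".
   A value ending in % (between 0 and 100) is relative to |video_size|,
   otherwise it is an absolute pixel count. Hypothetical helper; not the
   actual gpu-screen-recorder option parser. */
static int resolve_option_value(const char *value, int video_size) {
    const size_t len = strlen(value);
    if(len > 0 && value[len - 1] == '%') {
        const int percent = atoi(value);
        if(percent >= 0 && percent <= 100)
            return video_size * percent / 100;
    }
    return atoi(value);
}
```

So `x=50%` on a 1920-pixel-wide video resolves to pixel 960, while `x=50` stays at pixel 50.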
.TP
.BI \-region " WxH+X+Y"
Specify region to capture when using
@@ -132,6 +221,14 @@ Formats:
Combine sources with | (e.g.,
.BR "default_output|app:firefox" )
.RE
.PP
Run
.B \-\-list\-audio\-devices
to list available audio devices.
.br
Run
.B \-\-list\-application\-audio
to list available applications for audio capture.
.TP
.BI \-ac " aac|opus|flac"
Audio codec (default: opus for mp4/mkv, aac otherwise). FLAC is temporarily disabled.
@@ -224,13 +321,16 @@ Show version (5.10.2).
Show system info (codecs, capture options).
.TP
.B \-\-list\-capture\-options
List available windows and monitors.
List available capture sources (window, monitors, portal, v4l2 device path).
.TP
.B \-\-list\-v4l2\-devices
List available camera devices (V4L2).
.TP
.B \-\-list\-audio\-devices
List PulseAudio devices.
List available audio devices.
.TP
.B \-\-list\-application\-audio
List applications for audio capture.
List available applications for audio capture.
.SH SIGNALS
GPU Screen Recorder can be controlled via signals:
.TP
@@ -305,7 +405,7 @@ Record region using slop:
.PP
.nf
.RS
gpu-screen-recorder -w region -region $(slop) -o video.mp4
gpu-screen-recorder -w $(slop) -o video.mp4
.RE
.fi
.PP
@@ -313,11 +413,11 @@ Record region using slurp:
.PP
.nf
.RS
gpu-screen-recorder -w region -region $(slurp -f "%wx%h+%x+%y") -o video.mp4
gpu-screen-recorder -w $(slurp -f "%wx%h+%x+%y") -o video.mp4
.RE
.fi
.PP
Instant replay and launch a script when saving replay
Instant replay and launch a script when saving replay:
.PP
.nf
.RS
@@ -338,6 +438,22 @@ Take screenshot:
.nf
.RS
gpu-screen-recorder -w screen -o screenshot.jpg
.PP
.RE
.fi
Record screen and camera:
.PP
.nf
.RS
gpu-screen-recorder -w "screen|/dev/video0" -o video.mp4
.PP
.RE
.fi
Record screen and camera. The camera is located at the bottom right, flipped horizontally:
.PP
.nf
.RS
gpu-screen-recorder -w "monitor:screen|v4l2:/dev/video0;halign=end;valign=end;hflip=true" -o video.mp4
.RE
.fi
.SH FILES
@@ -10,6 +10,15 @@ typedef struct gsr_egl gsr_egl;
#define NUM_ARGS 32

typedef enum {
    GSR_CAPTURE_SOURCE_TYPE_WINDOW,
    GSR_CAPTURE_SOURCE_TYPE_FOCUSED_WINDOW,
    GSR_CAPTURE_SOURCE_TYPE_MONITOR,
    GSR_CAPTURE_SOURCE_TYPE_REGION,
    GSR_CAPTURE_SOURCE_TYPE_PORTAL,
    GSR_CAPTURE_SOURCE_TYPE_V4L2
} CaptureSourceType;

typedef enum {
    ARG_TYPE_STRING,
    ARG_TYPE_BOOLEAN,
@@ -52,6 +61,7 @@ typedef struct {
    void (*info)(void *userdata);
    void (*list_audio_devices)(void *userdata);
    void (*list_application_audio)(void *userdata);
    void (*list_v4l2_devices)(void *userdata);
    void (*list_capture_options)(const char *card_path, void *userdata);
} args_handlers;

@@ -68,7 +78,8 @@ typedef struct {
    gsr_bitrate_mode bitrate_mode;
    gsr_video_quality video_quality;
    gsr_replay_storage replay_storage;
    char window[64];

    const char *capture_source;
    const char *container_format;
    const char *filename;
    const char *replay_recording_directory;
@@ -12,24 +12,34 @@ typedef struct AVFrame AVFrame;
typedef struct AVMasteringDisplayMetadata AVMasteringDisplayMetadata;
typedef struct AVContentLightMetadata AVContentLightMetadata;
typedef struct gsr_capture gsr_capture;
typedef struct gsr_capture_metadata gsr_capture_metadata;

typedef struct {
    // Width and height of the video
    int video_width;
    int video_height;
    // Width and height of the frame at the start of capture, the target size
    int recording_width;
    int recording_height;
typedef enum {
    GSR_CAPTURE_ALIGN_START,
    GSR_CAPTURE_ALIGN_CENTER,
    GSR_CAPTURE_ALIGN_END
} gsr_capture_alignment;
struct gsr_capture_metadata {
    // Size of the video
    vec2i video_size;
    // The captured output gets scaled to this size. By default this will be the same size as the captured target
    vec2i recording_size;
    vec2i position;
    int fps;
} gsr_capture_metadata;
    gsr_capture_alignment halign;
    gsr_capture_alignment valign;
    gsr_flip flip;
};
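The halign/valign fields resolve to a pixel offset inside the video. A sketch of how alignment could map to a position (assumed semantics for illustration, mirroring the enum above; the real placement logic lives elsewhere in gsr):

```c
#include <assert.h>

/* Mirrors gsr_capture_alignment from the header above. */
typedef enum {
    GSR_CAPTURE_ALIGN_START,
    GSR_CAPTURE_ALIGN_CENTER,
    GSR_CAPTURE_ALIGN_END
} gsr_capture_alignment;

/* Resolve an alignment to a 1D offset of a |size|-wide source inside a
   |container_size|-wide video. Assumed semantics for illustration. */
static int align_offset(gsr_capture_alignment align, int container_size, int size) {
    switch(align) {
        case GSR_CAPTURE_ALIGN_START:  return 0;
        case GSR_CAPTURE_ALIGN_CENTER: return (container_size - size) / 2;
        case GSR_CAPTURE_ALIGN_END:    return container_size - size;
    }
    return 0;
}
```

Applied per axis, halign=end plus valign=end places a camera overlay at the bottom right, matching the man page example.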

struct gsr_capture {
    /* These methods should not be called manually. Call gsr_capture_* instead. |capture_metadata->video_width| and |capture_metadata->video_height| should be set by this function */
    /* These methods should not be called manually. Call gsr_capture_* instead. |capture_metadata->video_size| should be set by this function */
    int (*start)(gsr_capture *cap, gsr_capture_metadata *capture_metadata);
    void (*on_event)(gsr_capture *cap, gsr_egl *egl); /* can be NULL */
    void (*tick)(gsr_capture *cap); /* can be NULL. If there is an event then |on_event| is called before this */
    bool (*should_stop)(gsr_capture *cap, bool *err); /* can be NULL. If NULL, return false */
    bool (*capture_has_synchronous_task)(gsr_capture *cap); /* can be NULL. If this returns true then the time spent in |capture| is ignored for video/audio (capture is paused while the synchronous task happens) */
    void (*pre_capture)(gsr_capture *cap, gsr_capture_metadata *capture_metadata, gsr_color_conversion *color_conversion); /* can be NULL */
    int (*capture)(gsr_capture *cap, gsr_capture_metadata *capture_metadata, gsr_color_conversion *color_conversion); /* Return 0 if the frame was captured */
    bool (*uses_external_image)(gsr_capture *cap); /* can be NULL. If NULL, return false */
    bool (*set_hdr_metadata)(gsr_capture *cap, AVMasteringDisplayMetadata *mastering_display_metadata, AVContentLightMetadata *light_metadata); /* can be NULL. If NULL, return false */
@@ -2,9 +2,13 @@
#define GSR_CAPTURE_KMS_H

#include "capture.h"
#include "../cursor.h"
#include "../../kms/kms_shared.h"

typedef struct {
    gsr_egl *egl;
    gsr_cursor *x11_cursor;
    gsr_kms_response *kms_response;
    const char *display_to_capture; /* A copy is made of this */
    bool hdr;
    bool record_cursor;
30
include/capture/v4l2.h
Normal file
@@ -0,0 +1,30 @@
#ifndef GSR_CAPTURE_V4L2_H
#define GSR_CAPTURE_V4L2_H

#include "capture.h"

typedef enum {
    GSR_CAPTURE_V4L2_PIXFMT_AUTO,
    GSR_CAPTURE_V4L2_PIXFMT_YUYV,
    GSR_CAPTURE_V4L2_PIXFMT_MJPEG
} gsr_capture_v4l2_pixfmt;

typedef struct {
    bool yuyv;
    bool mjpeg;
} gsr_capture_v4l2_supported_pixfmts;

typedef struct {
    gsr_egl *egl;
    vec2i output_resolution;
    const char *device_path;
    gsr_capture_v4l2_pixfmt pixfmt;
    int fps;
} gsr_capture_v4l2_params;

gsr_capture* gsr_capture_v4l2_create(const gsr_capture_v4l2_params *params);

typedef void (*v4l2_devices_query_callback)(const char *path, gsr_capture_v4l2_supported_pixfmts supported_pixfmts, vec2i size, void *userdata);
void gsr_capture_v4l2_list_devices(v4l2_devices_query_callback callback, void *userdata);

#endif /* GSR_CAPTURE_V4L2_H */
@@ -3,9 +3,11 @@
#include "capture.h"
#include "../vec2.h"
#include "../cursor.h"

typedef struct {
    gsr_egl *egl;
    gsr_cursor *cursor;
    unsigned long window;
    bool follow_focused; /* If this is set then |window| is ignored */
    bool record_cursor;

@@ -3,9 +3,11 @@

#include "capture.h"
#include "../vec2.h"
#include "../cursor.h"

typedef struct {
    gsr_egl *egl;
    gsr_cursor *cursor;
    const char *display_to_capture; /* A copy is made of this */
    bool record_cursor;
    vec2i output_resolution;
@@ -27,6 +27,12 @@ typedef enum {
    GSR_ROT_270
} gsr_rotation;

typedef enum {
    GSR_FLIP_NONE = 0,
    GSR_FLIP_HORIZONTAL = (1 << 0),
    GSR_FLIP_VERTICAL = (1 << 1)
} gsr_flip;

typedef struct {
    int rotation_matrix;
    int offset;
@@ -55,12 +61,14 @@ typedef struct {

    unsigned int vertex_array_object_id;
    unsigned int vertex_buffer_object_id;

    bool schedule_clear;
} gsr_color_conversion;

int gsr_color_conversion_init(gsr_color_conversion *self, const gsr_color_conversion_params *params);
void gsr_color_conversion_deinit(gsr_color_conversion *self);

void gsr_color_conversion_draw(gsr_color_conversion *self, unsigned int texture_id, vec2i destination_pos, vec2i destination_size, vec2i source_pos, vec2i source_size, vec2i texture_size, gsr_rotation rotation, gsr_source_color source_color, bool external_texture);
void gsr_color_conversion_draw(gsr_color_conversion *self, unsigned int texture_id, vec2i destination_pos, vec2i destination_size, vec2i source_pos, vec2i source_size, vec2i texture_size, gsr_rotation rotation, gsr_flip flip, gsr_source_color source_color, bool external_texture);
void gsr_color_conversion_clear(gsr_color_conversion *self);
void gsr_color_conversion_read_destination_texture(gsr_color_conversion *self, int destination_texture_index, int x, int y, int width, int height, unsigned int color_format, unsigned int data_format, void *pixels);
@@ -9,40 +9,62 @@
typedef struct _XDisplay Display;
typedef union _XEvent XEvent;

typedef enum {
    GSR_DAMAGE_TRACK_NONE,
    GSR_DAMAGE_TRACK_WINDOW,
    GSR_DAMAGE_TRACK_MONITOR
} gsr_damage_track_type;
#define GSR_DAMAGE_MAX_MONITORS 32
#define GSR_DAMAGE_MAX_TRACKED_TARGETS 12

typedef struct {
    int64_t window_id;
    vec2i window_pos;
    vec2i window_size;
    uint64_t damage;
    int refcount;
} gsr_damage_window;

typedef struct {
    char *monitor_name;
    gsr_monitor *monitor;
    int refcount;
} gsr_damage_monitor;

typedef struct {
    gsr_egl *egl;
    Display *display;
    bool track_cursor;
    gsr_damage_track_type track_type;

    int damage_event;
    int damage_error;
    uint64_t damage;
    bool damaged;

    uint64_t monitor_damage;

    int randr_event;
    int randr_error;

    uint64_t window;
    //vec2i window_pos;
    vec2i window_size;
    gsr_cursor *cursor;
    gsr_monitor monitors[GSR_DAMAGE_MAX_MONITORS];
    int num_monitors;

    gsr_cursor cursor; /* Relative to |window| */
    gsr_monitor monitor;
    char monitor_name[32];
    gsr_damage_window windows_tracked[GSR_DAMAGE_MAX_TRACKED_TARGETS];
    int num_windows_tracked;

    gsr_damage_monitor monitors_tracked[GSR_DAMAGE_MAX_TRACKED_TARGETS];
    int num_monitors_tracked;

    int all_monitors_tracked_refcount;
    vec2i cursor_pos;
} gsr_damage;

bool gsr_damage_init(gsr_damage *self, gsr_egl *egl, bool track_cursor);
bool gsr_damage_init(gsr_damage *self, gsr_egl *egl, gsr_cursor *cursor, bool track_cursor);
void gsr_damage_deinit(gsr_damage *self);

bool gsr_damage_set_target_window(gsr_damage *self, uint64_t window);
bool gsr_damage_set_target_monitor(gsr_damage *self, const char *monitor_name);
/* This is reference counted */
bool gsr_damage_start_tracking_window(gsr_damage *self, int64_t window);
void gsr_damage_stop_tracking_window(gsr_damage *self, int64_t window);

/* This is reference counted. If |monitor_name| is NULL then all monitors are tracked */
bool gsr_damage_start_tracking_monitor(gsr_damage *self, const char *monitor_name);
void gsr_damage_stop_tracking_monitor(gsr_damage *self, const char *monitor_name);

void gsr_damage_on_event(gsr_damage *self, XEvent *xev);
void gsr_damage_tick(gsr_damage *self);
/* Also returns true if damage tracking is not available */
@@ -140,6 +140,7 @@ typedef void(*__GLXextFuncPtr)(void);
#define GL_TEXTURE1 0x84C1
#define GL_SHADER_IMAGE_ACCESS_BARRIER_BIT 0x00000020
#define GL_ALL_BARRIER_BITS 0xFFFFFFFF
#define GL_PIXEL_UNPACK_BUFFER 0x88EC

#define GL_VENDOR 0x1F00
#define GL_RENDERER 0x1F01
@@ -9,6 +9,7 @@

typedef struct AVCodecContext AVCodecContext;
typedef struct AVFrame AVFrame;
typedef struct gsr_capture_metadata gsr_capture_metadata;

typedef struct {
    const char *name;
@@ -55,6 +56,7 @@ int create_directory_recursive(char *path);
void setup_dma_buf_attrs(intptr_t *img_attr, uint32_t format, uint32_t width, uint32_t height, const int *fds, const uint32_t *offsets, const uint32_t *pitches, const uint64_t *modifiers, int num_planes, bool use_modifier);

vec2i scale_keep_aspect_ratio(vec2i from, vec2i to);
vec2i gsr_capture_get_target_position(vec2i output_size, gsr_capture_metadata *capture_metadata);

unsigned int gl_create_texture(gsr_egl *egl, int width, int height, int internal_format, unsigned int format, int filter);
@@ -470,6 +470,7 @@ int gsr_kms_client_get_kms(gsr_kms_client *self, gsr_kms_response *response) {
    response->version = 0;
    response->result = KMS_RESULT_FAILED_TO_SEND;
    response->err_msg[0] = '\0';
    response->num_items = 0;

    gsr_kms_request request;
    request.version = GSR_KMS_PROTOCOL_VERSION;
@@ -7,7 +7,6 @@
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <stdlib.h>
#include <locale.h>

#include <unistd.h>
@@ -14,6 +14,7 @@ src = [
    'src/capture/xcomposite.c',
    'src/capture/ximage.c',
    'src/capture/kms.c',
    'src/capture/v4l2.c',
    'src/encoder/encoder.c',
    'src/encoder/video/video.c',
    'src/encoder/video/nvenc.c',
@@ -33,4 +33,4 @@ wayland-client = ">=1"
dbus-1 = ">=1"
libpipewire-0.3 = ">=1"
libspa-0.2 = ">=0"
vulkan = ">=1"
vulkan = ">=1"
@@ -9,6 +9,7 @@
#include <inttypes.h>
#include <limits.h>
#include <assert.h>
#include <errno.h>
#include <libgen.h>
#include <sys/stat.h>
@@ -187,20 +188,22 @@ static double args_get_double_by_key(Arg *args, int num_args, const char *key, d
    }
}

static void usage_header() {
static void usage_header(void) {
    const bool inside_flatpak = getenv("FLATPAK_ID") != NULL;
    const char *program_name = inside_flatpak ? "flatpak run --command=gpu-screen-recorder com.dec05eba.gpu_screen_recorder" : "gpu-screen-recorder";
    printf("usage: %s -w <window_id|monitor|focused|portal|region> [-c <container_format>] [-s WxH] [-region WxH+X+Y] [-f <fps>] [-a <audio_input>] "
    printf("usage: %s -w <window_id|monitor|focused|portal|region|v4l2_device_path> [-c <container_format>] [-s WxH] [-region WxH+X+Y] [-f <fps>] [-a <audio_input>] "
           "[-q <quality>] [-r <replay_buffer_size_sec>] [-replay-storage ram|disk] [-restart-replay-on-save yes|no] "
           "[-k h264|hevc|av1|vp8|vp9|hevc_hdr|av1_hdr|hevc_10bit|av1_10bit] [-ac aac|opus|flac] [-ab <bitrate>] [-oc yes|no] [-fm cfr|vfr|content] "
           "[-bm auto|qp|vbr|cbr] [-cr limited|full] [-tune performance|quality] [-df yes|no] [-sc <script_path>] [-p <plugin_path>] "
           "[-cursor yes|no] [-keyint <value>] [-restore-portal-session yes|no] [-portal-session-token-filepath filepath] [-encoder gpu|cpu] "
           "[-fallback-cpu-encoding yes|no] [-o <output_file>] [-ro <output_directory>] [--list-capture-options [card_path]] [--list-audio-devices] [--list-application-audio] "
           "[-v yes|no] [-gl-debug yes|no] [--version] [-h|--help]\n", program_name);
           "[-fallback-cpu-encoding yes|no] [-o <output_file>] [-ro <output_directory>] [--list-capture-options [card_path]] [--list-audio-devices] "
           "[--list-application-audio] [--list-v4l2-devices] [-v yes|no] [-gl-debug yes|no] [--version] [-h|--help]\n", program_name);
    fflush(stdout);
}

static void usage_full() {
// TODO: Add --list-v4l2-devices option

static void usage_full(void) {
    const bool inside_flatpak = getenv("FLATPAK_ID") != NULL;
    usage_header();
    printf("\n");
@@ -212,7 +215,7 @@ static void usage_full() {
    fflush(stdout);
}

static void usage() {
static void usage(void) {
    usage_header();
}
@@ -246,8 +249,7 @@ static bool args_parser_set_values(args_parser *self) {
    self->bitrate_mode = (gsr_bitrate_mode)args_get_enum_by_key(self->args, NUM_ARGS, "-bm", GSR_BITRATE_MODE_AUTO);
    self->replay_storage = (gsr_replay_storage)args_get_enum_by_key(self->args, NUM_ARGS, "-replay-storage", GSR_REPLAY_STORAGE_RAM);

    const char *window = args_get_value_by_key(self->args, NUM_ARGS, "-w");
    snprintf(self->window, sizeof(self->window), "%s", window);
    self->capture_source = args_get_value_by_key(self->args, NUM_ARGS, "-w");
    self->verbose = args_get_boolean_by_key(self->args, NUM_ARGS, "-v", true);
    self->gl_debug = args_get_boolean_by_key(self->args, NUM_ARGS, "-gl-debug", false);
    self->record_cursor = args_get_boolean_by_key(self->args, NUM_ARGS, "-cursor", true);
@@ -335,14 +337,9 @@ static bool args_parser_set_values(args_parser *self) {
    }
}

    const char *output_resolution_str = args_get_value_by_key(self->args, NUM_ARGS, "-s");
    if(!output_resolution_str && strcmp(self->window, "focused") == 0) {
        fprintf(stderr, "gsr error: option -s is required when using '-w focused' option\n");
        usage();
        return false;
    }

    self->output_resolution = (vec2i){0, 0};

    const char *output_resolution_str = args_get_value_by_key(self->args, NUM_ARGS, "-s");
    if(output_resolution_str) {
        if(sscanf(output_resolution_str, "%dx%d", &self->output_resolution.x, &self->output_resolution.y) != 2) {
            fprintf(stderr, "gsr error: invalid value for option -s '%s', expected a value in format WxH\n", output_resolution_str);
@@ -361,12 +358,6 @@ static bool args_parser_set_values(args_parser *self) {
    self->region_position = (vec2i){0, 0};
    const char *region_str = args_get_value_by_key(self->args, NUM_ARGS, "-region");
    if(region_str) {
        if(strcmp(self->window, "region") != 0) {
            fprintf(stderr, "gsr error: option -region can only be used when option '-w region' is used\n");
            usage();
            return false;
        }

        if(sscanf(region_str, "%dx%d+%d+%d", &self->region_size.x, &self->region_size.y, &self->region_position.x, &self->region_position.y) != 4) {
            fprintf(stderr, "gsr error: invalid value for option -region '%s', expected a value in format WxH+X+Y\n", region_str);
            usage();
@@ -378,12 +369,6 @@ static bool args_parser_set_values(args_parser *self) {
            usage();
            return false;
        }
    } else {
        if(strcmp(self->window, "region") == 0) {
            fprintf(stderr, "gsr error: option -region is required when '-w region' is used\n");
            usage();
            return false;
        }
    }

    self->fps = args_get_i64_by_key(self->args, NUM_ARGS, "-f", 60);
@@ -452,10 +437,6 @@ static bool args_parser_set_values(args_parser *self) {
    self->replay_recording_directory = args_get_value_by_key(self->args, NUM_ARGS, "-ro");

    const bool is_portal_capture = strcmp(self->window, "portal") == 0;
    if(!self->restore_portal_session && is_portal_capture)
        fprintf(stderr, "gsr info: option '-w portal' was used without '-restore-portal-session yes'. The previous screencast session will be ignored\n");

    if(self->is_livestream && self->recording_saved_script) {
        fprintf(stderr, "gsr warning: live stream detected, -sc script is ignored\n");
        self->recording_saved_script = NULL;
@@ -493,6 +474,11 @@ bool args_parser_parse(args_parser *self, int argc, char **argv, const args_hand
|
||||
return true;
|
||||
}
|
||||
|
||||
if(argc == 2 && strcmp(argv[1], "--list-v4l2-devices") == 0) {
|
||||
arg_handlers->list_v4l2_devices(userdata);
|
||||
return true;
|
||||
}
|
||||
|
||||
if(strcmp(argv[1], "--list-capture-options") == 0) {
|
||||
if(argc == 2) {
|
||||
arg_handlers->list_capture_options(NULL, userdata);
|
||||
@@ -706,12 +692,6 @@ bool args_parser_validate_with_gl_info(args_parser *self, gsr_egl *egl) {
|
||||
return false;
|
||||
}
|
||||
|
||||
const bool is_portal_capture = strcmp(self->window, "portal") == 0;
|
||||
if(video_codec_is_hdr(self->video_codec) && is_portal_capture) {
|
||||
fprintf(stderr, "gsr warning: portal capture option doesn't support hdr yet (PipeWire doesn't support hdr), the video will be tonemapped from hdr to sdr\n");
|
||||
self->video_codec = hdr_video_codec_to_sdr_video_codec(self->video_codec);
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
|
||||
@@ -3,7 +3,6 @@
|
||||
#include "../../include/color_conversion.h"
|
||||
#include "../../include/cursor.h"
|
||||
#include "../../include/window/window.h"
|
||||
#include "../../kms/client/kms_client.h"
|
||||
|
||||
#include <stdlib.h>
|
||||
#include <string.h>
|
||||
@@ -30,9 +29,6 @@ typedef struct {
|
||||
|
||||
typedef struct {
|
||||
gsr_capture_kms_params params;
|
||||
|
||||
gsr_kms_client kms_client;
|
||||
gsr_kms_response kms_response;
|
||||
|
||||
vec2i capture_pos;
|
||||
vec2i capture_size;
|
||||
@@ -51,7 +47,6 @@ typedef struct {
|
||||
bool hdr_metadata_set;
|
||||
|
||||
bool is_x11;
|
||||
gsr_cursor x11_cursor;
|
||||
|
||||
//int drm_fd;
|
||||
//uint64_t prev_sequence;
|
||||
@@ -61,21 +56,12 @@ typedef struct {
|
||||
vec2i prev_plane_size;
|
||||
|
||||
double last_time_monitor_check;
|
||||
} gsr_capture_kms;
|
||||
|
||||
static void gsr_capture_kms_cleanup_kms_fds(gsr_capture_kms *self) {
|
||||
for(int i = 0; i < self->kms_response.num_items; ++i) {
|
||||
for(int j = 0; j < self->kms_response.items[i].num_dma_bufs; ++j) {
|
||||
gsr_kms_response_dma_buf *dma_buf = &self->kms_response.items[i].dma_buf[j];
|
||||
if(dma_buf->fd > 0) {
|
||||
close(dma_buf->fd);
|
||||
dma_buf->fd = -1;
|
||||
}
|
||||
}
|
||||
self->kms_response.items[i].num_dma_bufs = 0;
|
||||
}
|
||||
self->kms_response.num_items = 0;
|
||||
}
|
||||
bool capture_is_combined_plane;
|
||||
gsr_kms_response_item *drm_fd;
|
||||
vec2i output_size;
|
||||
vec2i target_pos;
|
||||
} gsr_capture_kms;
|
||||
|
||||
static void gsr_capture_kms_stop(gsr_capture_kms *self) {
|
||||
if(self->input_texture_id) {
|
||||
@@ -97,10 +83,6 @@ static void gsr_capture_kms_stop(gsr_capture_kms *self) {
|
||||
// close(self->drm_fd);
|
||||
// self->drm_fd = -1;
|
||||
// }
|
||||
|
||||
gsr_capture_kms_cleanup_kms_fds(self);
|
||||
gsr_kms_client_deinit(&self->kms_client);
|
||||
gsr_cursor_deinit(&self->x11_cursor);
|
||||
}
|
||||
|
||||
static int max_int(int a, int b) {
|
||||
@@ -172,16 +154,8 @@ static int gsr_capture_kms_start(gsr_capture *cap, gsr_capture_metadata *capture
|
||||
gsr_monitor monitor;
|
||||
self->monitor_id.num_connector_ids = 0;
|
||||
|
||||
int kms_init_res = gsr_kms_client_init(&self->kms_client, self->params.egl->card_path);
|
||||
if(kms_init_res != 0)
|
||||
return kms_init_res;
|
||||
|
||||
self->is_x11 = gsr_window_get_display_server(self->params.egl->window) == GSR_DISPLAY_SERVER_X11;
|
||||
const gsr_connection_type connection_type = self->is_x11 ? GSR_CONNECTION_X11 : GSR_CONNECTION_DRM;
|
||||
if(self->is_x11) {
|
||||
Display *display = gsr_window_get_display(self->params.egl->window);
|
||||
gsr_cursor_init(&self->x11_cursor, self->params.egl, display);
|
||||
}
|
||||
|
||||
MonitorCallbackUserdata monitor_callback_userdata = {
|
||||
&self->monitor_id,
|
||||
@@ -209,29 +183,17 @@ static int gsr_capture_kms_start(gsr_capture *cap, gsr_capture_metadata *capture
|
||||
|
||||
if(self->params.output_resolution.x > 0 && self->params.output_resolution.y > 0) {
|
||||
self->params.output_resolution = scale_keep_aspect_ratio(self->capture_size, self->params.output_resolution);
|
||||
capture_metadata->video_width = self->params.output_resolution.x;
|
||||
capture_metadata->video_height = self->params.output_resolution.y;
|
||||
capture_metadata->video_size = self->params.output_resolution;
|
||||
} else if(self->params.region_size.x > 0 && self->params.region_size.y > 0) {
|
||||
capture_metadata->video_width = self->params.region_size.x;
|
||||
capture_metadata->video_height = self->params.region_size.y;
|
||||
capture_metadata->video_size = self->params.region_size;
|
||||
} else {
|
||||
capture_metadata->video_width = self->capture_size.x;
|
||||
capture_metadata->video_height = self->capture_size.y;
|
||||
capture_metadata->video_size = self->capture_size;
|
||||
}
|
||||
|
||||
self->last_time_monitor_check = clock_get_monotonic_seconds();
|
||||
return 0;
|
||||
}
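`scale_keep_aspect_ratio` itself is not shown in this diff. A plausible sketch of the scaling and centering logic, assuming the helper letterbox-fits the source size inside the destination size (the centering follows the old inline `max_int` formula; `center_target_position` is a hypothetical name for what the new `gsr_capture_get_target_position` presumably does):

```c
typedef struct { int x, y; } vec2i;

static int max_int(int a, int b) {
    return a > b ? a : b;
}

// Fit source_size inside dest_size while preserving aspect ratio
// (letterbox fit): scale by the smaller of the two axis ratios.
static vec2i scale_keep_aspect_ratio(vec2i source_size, vec2i dest_size) {
    if(source_size.x == 0 || source_size.y == 0)
        return (vec2i){0, 0};

    const double scale_x = (double)dest_size.x / (double)source_size.x;
    const double scale_y = (double)dest_size.y / (double)source_size.y;
    const double scale = scale_x < scale_y ? scale_x : scale_y;
    return (vec2i){ (int)(source_size.x * scale + 0.5), (int)(source_size.y * scale + 0.5) };
}

// Center the scaled image inside the output video frame.
static vec2i center_target_position(vec2i output_size, vec2i video_size) {
    return (vec2i){
        max_int(0, video_size.x / 2 - output_size.x / 2),
        max_int(0, video_size.y / 2 - output_size.y / 2)
    };
}
```

For example, fitting a 1920x1080 capture into a 1280x1280 output yields a 1280x720 image centered with a 280-pixel vertical offset.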

static void gsr_capture_kms_on_event(gsr_capture *cap, gsr_egl *egl) {
    gsr_capture_kms *self = cap->priv;
    if(!self->is_x11)
        return;

    XEvent *xev = gsr_window_get_event_data(egl->window);
    gsr_cursor_on_event(&self->x11_cursor, xev);
}

// TODO: This is disabled for now because we want to be able to record at a framerate higher than the monitor framerate
// static void gsr_capture_kms_tick(gsr_capture *cap) {
//     gsr_capture_kms *self = cap->priv;
@@ -397,14 +359,14 @@ static gsr_kms_response_item* find_monitor_drm(gsr_capture_kms *self, bool *capt
    gsr_kms_response_item *drm_fd = NULL;

    for(int i = 0; i < self->monitor_id.num_connector_ids; ++i) {
        drm_fd = find_drm_by_connector_id(&self->kms_response, self->monitor_id.connector_ids[i]);
        drm_fd = find_drm_by_connector_id(self->params.kms_response, self->monitor_id.connector_ids[i]);
        if(drm_fd)
            break;
    }

    // Will never happen on wayland unless the target monitor has been disconnected
    if(!drm_fd && self->is_x11) {
        drm_fd = find_largest_drm(&self->kms_response);
        drm_fd = find_largest_drm(self->params.kms_response);
        *capture_is_combined_plane = true;
    }

@@ -412,7 +374,7 @@ static gsr_kms_response_item* find_monitor_drm(gsr_capture_kms *self, bool *capt
}

static gsr_kms_response_item* find_cursor_drm_if_on_monitor(gsr_capture_kms *self, uint32_t monitor_connector_id, bool capture_is_combined_plane) {
    gsr_kms_response_item *cursor_drm_fd = find_cursor_drm(&self->kms_response, monitor_connector_id);
    gsr_kms_response_item *cursor_drm_fd = find_cursor_drm(self->params.kms_response, monitor_connector_id);
    if(!capture_is_combined_plane && cursor_drm_fd && cursor_drm_fd->connector_id != monitor_connector_id)
        cursor_drm_fd = NULL;
    return cursor_drm_fd;
@@ -431,7 +393,7 @@ static gsr_monitor_rotation sub_rotations(gsr_monitor_rotation rot1, gsr_monitor
    return remainder_int(rot1 - rot2, 4);
}
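`sub_rotations` needs a remainder that stays non-negative even when `rot1 - rot2` is negative, which C's `%` operator does not guarantee (it truncates toward zero). `remainder_int` is not shown in this diff; a sketch of what it presumably does:

```c
// Euclidean remainder: result is always in [0, m) for positive m,
// unlike C's % operator which returns a negative result for negative a.
static int remainder_int(int a, int m) {
    int r = a % m;
    return r < 0 ? r + m : r;
}

// Difference between two rotations expressed as 0..3 (multiples of 90 degrees),
// mirroring sub_rotations above with plain ints instead of the enum.
static int sub_rotations_demo(int rot1, int rot2) {
    return remainder_int(rot1 - rot2, 4);
}
```

For instance, subtracting a 270-degree plane rotation (3) from a 0-degree monitor rotation (0) wraps to 90 degrees (1) rather than -3.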

static void render_drm_cursor(gsr_capture_kms *self, gsr_color_conversion *color_conversion, const gsr_kms_response_item *cursor_drm_fd, vec2i target_pos, vec2i output_size, vec2i framebuffer_size) {
static void render_drm_cursor(gsr_capture_kms *self, gsr_color_conversion *color_conversion, gsr_capture_metadata *capture_metadata, const gsr_kms_response_item *cursor_drm_fd, vec2i target_pos, vec2i output_size, vec2i framebuffer_size) {
    const vec2d scale = {
        self->capture_size.x == 0 ? 0 : (double)output_size.x / (double)self->capture_size.x,
        self->capture_size.y == 0 ? 0 : (double)output_size.y / (double)self->capture_size.y
@@ -508,13 +470,13 @@ static void render_drm_cursor(gsr_capture_kms *self, gsr_color_conversion *color
    gsr_color_conversion_draw(color_conversion, self->cursor_texture_id,
        cursor_pos, (vec2i){cursor_size.x * scale.x, cursor_size.y * scale.y},
        (vec2i){0, 0}, cursor_size, cursor_size,
        gsr_monitor_rotation_to_rotation(rotation), GSR_SOURCE_COLOR_RGB, cursor_texture_id_is_external);
        gsr_monitor_rotation_to_rotation(rotation), capture_metadata->flip, GSR_SOURCE_COLOR_RGB, cursor_texture_id_is_external);

    self->params.egl->glDisable(GL_SCISSOR_TEST);
}

static void render_x11_cursor(gsr_capture_kms *self, gsr_color_conversion *color_conversion, vec2i capture_pos, vec2i target_pos, vec2i output_size) {
    if(!self->x11_cursor.visible)
static void render_x11_cursor(gsr_capture_kms *self, gsr_color_conversion *color_conversion, gsr_capture_metadata *capture_metadata, vec2i capture_pos, vec2i target_pos, vec2i output_size) {
    if(!self->params.x11_cursor->visible)
        return;

    const vec2d scale = {
@@ -522,21 +484,18 @@ static void render_x11_cursor(gsr_capture_kms *self, gsr_color_conversion *color
        self->capture_size.y == 0 ? 0 : (double)output_size.y / (double)self->capture_size.y
    };

    Display *display = gsr_window_get_display(self->params.egl->window);
    gsr_cursor_tick(&self->x11_cursor, DefaultRootWindow(display));

    const vec2i cursor_pos = {
        target_pos.x + (self->x11_cursor.position.x - self->x11_cursor.hotspot.x - capture_pos.x) * scale.x,
        target_pos.y + (self->x11_cursor.position.y - self->x11_cursor.hotspot.y - capture_pos.y) * scale.y
        target_pos.x + (self->params.x11_cursor->position.x - self->params.x11_cursor->hotspot.x - capture_pos.x) * scale.x,
        target_pos.y + (self->params.x11_cursor->position.y - self->params.x11_cursor->hotspot.y - capture_pos.y) * scale.y
    };

    self->params.egl->glEnable(GL_SCISSOR_TEST);
    self->params.egl->glScissor(target_pos.x, target_pos.y, output_size.x, output_size.y);

    gsr_color_conversion_draw(color_conversion, self->x11_cursor.texture_id,
        cursor_pos, (vec2i){self->x11_cursor.size.x * scale.x, self->x11_cursor.size.y * scale.y},
        (vec2i){0, 0}, self->x11_cursor.size, self->x11_cursor.size,
        GSR_ROT_0, GSR_SOURCE_COLOR_RGB, false);
    gsr_color_conversion_draw(color_conversion, self->params.x11_cursor->texture_id,
        cursor_pos, (vec2i){self->params.x11_cursor->size.x * scale.x, self->params.x11_cursor->size.y * scale.y},
        (vec2i){0, 0}, self->params.x11_cursor->size, self->params.x11_cursor->size,
        GSR_ROT_0, capture_metadata->flip, GSR_SOURCE_COLOR_RGB, false);

    self->params.egl->glDisable(GL_SCISSOR_TEST);
}
@@ -545,7 +504,7 @@ static void gsr_capture_kms_update_capture_size_change(gsr_capture_kms *self, gs
    if(target_pos.x != self->prev_target_pos.x || target_pos.y != self->prev_target_pos.y || drm_fd->src_w != self->prev_plane_size.x || drm_fd->src_h != self->prev_plane_size.y) {
        self->prev_target_pos = target_pos;
        self->prev_plane_size = self->capture_size;
        gsr_color_conversion_clear(color_conversion);
        color_conversion->schedule_clear = true;
    }
}

@@ -590,48 +549,47 @@ static void gsr_capture_kms_update_connector_ids(gsr_capture_kms *self) {
    self->capture_size = rotate_capture_size_if_rotated(self, monitor.size);
}

static int gsr_capture_kms_capture(gsr_capture *cap, gsr_capture_metadata *capture_metadata, gsr_color_conversion *color_conversion) {
static void gsr_capture_kms_pre_capture(gsr_capture *cap, gsr_capture_metadata *capture_metadata, gsr_color_conversion *color_conversion) {
    gsr_capture_kms *self = cap->priv;

    gsr_capture_kms_cleanup_kms_fds(self);

    if(gsr_kms_client_get_kms(&self->kms_client, &self->kms_response) != 0) {
        fprintf(stderr, "gsr error: gsr_capture_kms_capture: failed to get kms, error: %d (%s)\n", self->kms_response.result, self->kms_response.err_msg);
        return -1;
    }

    if(self->kms_response.num_items == 0) {
    if(self->params.kms_response->num_items == 0) {
        static bool error_shown = false;
        if(!error_shown) {
            error_shown = true;
            fprintf(stderr, "gsr error: no drm found, capture will fail\n");
            fprintf(stderr, "gsr error: gsr_capture_kms_pre_capture: no drm found, capture will fail\n");
        }
        return -1;
        return;
    }

    gsr_capture_kms_update_connector_ids(self);

    bool capture_is_combined_plane = false;
    const gsr_kms_response_item *drm_fd = find_monitor_drm(self, &capture_is_combined_plane);
    if(!drm_fd) {
        gsr_capture_kms_cleanup_kms_fds(self);
        return -1;
    }
    self->capture_is_combined_plane = false;
    self->drm_fd = find_monitor_drm(self, &self->capture_is_combined_plane);
    if(!self->drm_fd)
        return;

    if(drm_fd->has_hdr_metadata && self->params.hdr && hdr_metadata_is_supported_format(&drm_fd->hdr_metadata))
        gsr_kms_set_hdr_metadata(self, drm_fd);
    if(self->drm_fd->has_hdr_metadata && self->params.hdr && hdr_metadata_is_supported_format(&self->drm_fd->hdr_metadata))
        gsr_kms_set_hdr_metadata(self, self->drm_fd);

    self->capture_size = rotate_capture_size_if_rotated(self, (vec2i){ drm_fd->src_w, drm_fd->src_h });
    self->capture_size = rotate_capture_size_if_rotated(self, (vec2i){ self->drm_fd->src_w, self->drm_fd->src_h });
    if(self->params.region_size.x > 0 && self->params.region_size.y > 0)
        self->capture_size = self->params.region_size;

    const vec2i output_size = scale_keep_aspect_ratio(self->capture_size, (vec2i){capture_metadata->recording_width, capture_metadata->recording_height});
    const vec2i target_pos = { max_int(0, capture_metadata->video_width / 2 - output_size.x / 2), max_int(0, capture_metadata->video_height / 2 - output_size.y / 2) };
    gsr_capture_kms_update_capture_size_change(self, color_conversion, target_pos, drm_fd);
    self->output_size = scale_keep_aspect_ratio(self->capture_size, capture_metadata->recording_size);
    self->target_pos = gsr_capture_get_target_position(self->output_size, capture_metadata);
    gsr_capture_kms_update_capture_size_change(self, color_conversion, self->target_pos, self->drm_fd);
}

static int gsr_capture_kms_capture(gsr_capture *cap, gsr_capture_metadata *capture_metadata, gsr_color_conversion *color_conversion) {
    (void)capture_metadata;
    gsr_capture_kms *self = cap->priv;

    if(self->params.kms_response->num_items == 0)
        return -1;

    vec2i capture_pos = self->capture_pos;
    if(!capture_is_combined_plane)
        capture_pos = (vec2i){drm_fd->x, drm_fd->y};
    if(!self->capture_is_combined_plane)
        capture_pos = (vec2i){self->drm_fd->x, self->drm_fd->y};

    capture_pos.x += self->params.region_position.x;
    capture_pos.y += self->params.region_position.y;
@@ -639,22 +597,22 @@ static int gsr_capture_kms_capture(gsr_capture *cap, gsr_capture_metadata *captu
    //self->params.egl->glFlush();
    //self->params.egl->glFinish();

    EGLImage image = gsr_capture_kms_create_egl_image_with_fallback(self, drm_fd);
    EGLImage image = gsr_capture_kms_create_egl_image_with_fallback(self, self->drm_fd);
    if(image) {
        gsr_capture_kms_bind_image_to_input_texture_with_fallback(self, image);
        self->params.egl->eglDestroyImage(self->params.egl->egl_display, image);
    }

    const gsr_monitor_rotation plane_rotation = kms_rotation_to_gsr_monitor_rotation(drm_fd->rotation);
    const gsr_monitor_rotation rotation = capture_is_combined_plane ? GSR_MONITOR_ROT_0 : sub_rotations(self->monitor_rotation, plane_rotation);
    const gsr_monitor_rotation plane_rotation = kms_rotation_to_gsr_monitor_rotation(self->drm_fd->rotation);
    const gsr_monitor_rotation rotation = self->capture_is_combined_plane ? GSR_MONITOR_ROT_0 : sub_rotations(self->monitor_rotation, plane_rotation);

    gsr_color_conversion_draw(color_conversion, self->external_texture_fallback ? self->external_input_texture_id : self->input_texture_id,
        target_pos, output_size,
        capture_pos, self->capture_size, (vec2i){ drm_fd->width, drm_fd->height },
        gsr_monitor_rotation_to_rotation(rotation), GSR_SOURCE_COLOR_RGB, self->external_texture_fallback);
        self->target_pos, self->output_size,
        capture_pos, self->capture_size, (vec2i){ self->drm_fd->width, self->drm_fd->height },
        gsr_monitor_rotation_to_rotation(rotation), capture_metadata->flip, GSR_SOURCE_COLOR_RGB, self->external_texture_fallback);

    if(self->params.record_cursor) {
        gsr_kms_response_item *cursor_drm_fd = find_cursor_drm_if_on_monitor(self, drm_fd->connector_id, capture_is_combined_plane);
        gsr_kms_response_item *cursor_drm_fd = find_cursor_drm_if_on_monitor(self, self->drm_fd->connector_id, self->capture_is_combined_plane);
        // The cursor is handled by x11 on x11 instead of using the cursor drm plane because on prime systems with a dedicated nvidia gpu
        // the cursor plane is not available when the cursor is on the monitor controlled by the nvidia device.
        // TODO: This doesn't work properly with software cursor on x11 since it will draw the x11 cursor on top of the cursor already in the framebuffer.
@@ -663,18 +621,16 @@ static int gsr_capture_kms_capture(gsr_capture *cap, gsr_capture_metadata *captu
        vec2i cursor_monitor_offset = self->capture_pos;
        cursor_monitor_offset.x += self->params.region_position.x;
        cursor_monitor_offset.y += self->params.region_position.y;
        render_x11_cursor(self, color_conversion, cursor_monitor_offset, target_pos, output_size);
        render_x11_cursor(self, color_conversion, capture_metadata, cursor_monitor_offset, self->target_pos, self->output_size);
    } else if(cursor_drm_fd) {
        const vec2i framebuffer_size = rotate_capture_size_if_rotated(self, (vec2i){ drm_fd->src_w, drm_fd->src_h });
        render_drm_cursor(self, color_conversion, cursor_drm_fd, target_pos, output_size, framebuffer_size);
        const vec2i framebuffer_size = rotate_capture_size_if_rotated(self, (vec2i){ self->drm_fd->src_w, self->drm_fd->src_h });
        render_drm_cursor(self, color_conversion, capture_metadata, cursor_drm_fd, self->target_pos, self->output_size, framebuffer_size);
    }
}

    //self->params.egl->glFlush();
    //self->params.egl->glFinish();

    gsr_capture_kms_cleanup_kms_fds(self);

    return 0;
}

@@ -766,9 +722,9 @@ gsr_capture* gsr_capture_kms_create(const gsr_capture_kms_params *params) {

    *cap = (gsr_capture) {
        .start = gsr_capture_kms_start,
        .on_event = gsr_capture_kms_on_event,
        //.tick = gsr_capture_kms_tick,
        .should_stop = gsr_capture_kms_should_stop,
        .pre_capture = gsr_capture_kms_pre_capture,
        .capture = gsr_capture_kms_capture,
        .uses_external_image = gsr_capture_kms_uses_external_image,
        .set_hdr_metadata = gsr_capture_kms_set_hdr_metadata,

@@ -284,16 +284,14 @@ static int gsr_capture_nvfbc_start(gsr_capture *cap, gsr_capture_metadata *captu
        goto error_cleanup;
    }

    capture_metadata->video_width = self->tracking_width;
    capture_metadata->video_height = self->tracking_height;
    capture_metadata->video_size.x = self->tracking_width;
    capture_metadata->video_size.y = self->tracking_height;

    if(self->params.output_resolution.x > 0 && self->params.output_resolution.y > 0) {
        self->params.output_resolution = scale_keep_aspect_ratio((vec2i){capture_metadata->video_width, capture_metadata->video_height}, self->params.output_resolution);
        capture_metadata->video_width = self->params.output_resolution.x;
        capture_metadata->video_height = self->params.output_resolution.y;
        self->params.output_resolution = scale_keep_aspect_ratio(capture_metadata->video_size, self->params.output_resolution);
        capture_metadata->video_size = self->params.output_resolution;
    } else if(self->params.region_size.x > 0 && self->params.region_size.y > 0) {
        capture_metadata->video_width = self->params.region_size.x;
        capture_metadata->video_height = self->params.region_size.y;
        capture_metadata->video_size = self->params.region_size;
    }

    return 0;
@@ -371,8 +369,8 @@ static int gsr_capture_nvfbc_capture(gsr_capture *cap, gsr_capture_metadata *cap
    if(self->params.region_size.x > 0 && self->params.region_size.y > 0)
        frame_size = self->params.region_size;

    const vec2i output_size = scale_keep_aspect_ratio(frame_size, (vec2i){capture_metadata->recording_width, capture_metadata->recording_height});
    const vec2i target_pos = { max_int(0, capture_metadata->video_width / 2 - output_size.x / 2), max_int(0, capture_metadata->video_height / 2 - output_size.y / 2) };
    const vec2i output_size = scale_keep_aspect_ratio(frame_size, capture_metadata->recording_size);
    const vec2i target_pos = gsr_capture_get_target_position(output_size, capture_metadata);

    NVFBC_FRAME_GRAB_INFO frame_info;
    memset(&frame_info, 0, sizeof(frame_info));
@@ -398,7 +396,7 @@ static int gsr_capture_nvfbc_capture(gsr_capture *cap, gsr_capture_metadata *cap
    gsr_color_conversion_draw(color_conversion, self->setup_params.dwTextures[grab_params.dwTextureIndex],
        target_pos, (vec2i){output_size.x, output_size.y},
        self->params.region_position, frame_size, original_frame_size,
        GSR_ROT_0, GSR_SOURCE_COLOR_BGR, false);
        GSR_ROT_0, capture_metadata->flip, GSR_SOURCE_COLOR_BGR, false);

    //self->params.egl->glFlush();
    //self->params.egl->glFinish();

@@ -35,6 +35,7 @@ typedef struct {

    bool should_stop;
    bool stop_is_error;
    bool do_capture;
} gsr_capture_portal;

static void gsr_capture_portal_cleanup_plane_fds(gsr_capture_portal *self) {
@@ -293,12 +294,10 @@ static int gsr_capture_portal_start(gsr_capture *cap, gsr_capture_metadata *capt
    }

    if(self->params.output_resolution.x == 0 && self->params.output_resolution.y == 0) {
        capture_metadata->video_width = self->capture_size.x;
        capture_metadata->video_height = self->capture_size.y;
        capture_metadata->video_size = self->capture_size;
    } else {
        self->params.output_resolution = scale_keep_aspect_ratio(self->capture_size, self->params.output_resolution);
        capture_metadata->video_width = self->params.output_resolution.x;
        capture_metadata->video_height = self->params.output_resolution.y;
        capture_metadata->video_size = self->params.output_resolution;
    }

    return 0;
@@ -322,22 +321,22 @@ static bool fourcc_has_alpha(uint32_t fourcc) {
    return false;
}
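`fourcc_has_alpha` presumably compares the PipeWire frame format against packed DRM fourcc codes. A sketch of how such a code is packed, following the same byte layout as the `fourcc_code()` macro in `drm_fourcc.h` (the helper name `gsr_fourcc` is hypothetical):

```c
#include <stdint.h>

// Pack four characters into a little-endian fourcc code,
// the layout used by drm_fourcc.h's fourcc_code() macro.
static uint32_t gsr_fourcc(char a, char b, char c, char d) {
    return (uint32_t)a | ((uint32_t)b << 8) | ((uint32_t)c << 16) | ((uint32_t)d << 24);
}
```

With this packing, `'A','R','2','4'` (DRM_FORMAT_ARGB8888) yields `0x34325241`, which is what an alpha check would compare against.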

static int gsr_capture_portal_capture(gsr_capture *cap, gsr_capture_metadata *capture_metadata, gsr_color_conversion *color_conversion) {
    (void)color_conversion;
static void gsr_capture_portal_pre_capture(gsr_capture *cap, gsr_capture_metadata *capture_metadata, gsr_color_conversion *color_conversion) {
    gsr_capture_portal *self = cap->priv;
    self->do_capture = false;

    if(self->should_stop)
        return -1;
        return;

    if(gsr_pipewire_video_should_restart(&self->pipewire)) {
        fprintf(stderr, "gsr info: gsr_capture_portal_capture: pipewire capture was paused, trying to start capture again\n");
        fprintf(stderr, "gsr info: gsr_capture_portal_pre_capture: pipewire capture was paused, trying to start capture again\n");
        gsr_capture_portal_stop(self);
        const int result = gsr_capture_portal_setup(self, capture_metadata->fps);
        if(result != 0) {
            self->stop_is_error = result != PORTAL_CAPTURE_CANCELED_BY_USER_EXIT_CODE;
            self->should_stop = true;
        }
        return -1;
        return;
    }

    /* TODO: Handle formats other than RGB(A) */
@@ -346,15 +345,29 @@ static int gsr_capture_portal_capture(gsr_capture *cap, gsr_capture_metadata *ca
        if(self->pipewire_data.region.width != self->capture_size.x || self->pipewire_data.region.height != self->capture_size.y) {
            self->capture_size.x = self->pipewire_data.region.width;
            self->capture_size.y = self->pipewire_data.region.height;
            gsr_color_conversion_clear(color_conversion);
            color_conversion->schedule_clear = true;
        }
    } else {
        return -1;
        return;
    }
    }

    const vec2i output_size = scale_keep_aspect_ratio(self->capture_size, (vec2i){capture_metadata->recording_width, capture_metadata->recording_height});
    const vec2i target_pos = { max_int(0, capture_metadata->video_width / 2 - output_size.x / 2), max_int(0, capture_metadata->video_height / 2 - output_size.y / 2) };
    const bool fourcc_alpha = fourcc_has_alpha(self->pipewire_data.fourcc);
    if(fourcc_alpha)
        color_conversion->schedule_clear = true;

    self->do_capture = true;
}
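This commit splits portal capture into a `pre_capture` callback that prepares state and a `capture` callback that only draws when `do_capture` was set, both registered as function pointers on the `gsr_capture` struct. A minimal sketch of that vtable pattern (all names here are illustrative, not the project's actual types):

```c
#include <stdbool.h>

// Minimal sketch of a capture vtable: pre_capture prepares per-frame state,
// capture only runs if pre_capture marked the frame as ready.
typedef struct capture capture;
struct capture {
    void (*pre_capture)(capture *cap);
    int (*capture)(capture *cap);
    bool do_capture; // set by pre_capture, checked by capture
};

static void demo_pre_capture(capture *cap) {
    cap->do_capture = true; // e.g. a new frame arrived from the source
}

static int demo_capture(capture *cap) {
    if(!cap->do_capture)
        return -1; // nothing to draw this frame
    return 0;
}
```

Separating the two steps lets the caller run all sources' `pre_capture` first (which may clear the shared framebuffer) before any source draws.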

static int gsr_capture_portal_capture(gsr_capture *cap, gsr_capture_metadata *capture_metadata, gsr_color_conversion *color_conversion) {
    (void)color_conversion;
    gsr_capture_portal *self = cap->priv;

    if(self->should_stop || !self->do_capture)
        return -1;

    const vec2i output_size = scale_keep_aspect_ratio(self->capture_size, capture_metadata->recording_size);
    const vec2i target_pos = gsr_capture_get_target_position(output_size, capture_metadata);

    const vec2i actual_texture_size = {self->pipewire_data.texture_width, self->pipewire_data.texture_height};

@@ -363,14 +376,10 @@ static int gsr_capture_portal_capture(gsr_capture *cap, gsr_capture_metadata *ca

    // TODO: Handle region crop

    const bool fourcc_alpha = fourcc_has_alpha(self->pipewire_data.fourcc);
    if(fourcc_alpha)
        gsr_color_conversion_clear(color_conversion);

    gsr_color_conversion_draw(color_conversion, self->pipewire_data.using_external_image ? self->texture_map.external_texture_id : self->texture_map.texture_id,
        target_pos, output_size,
        (vec2i){self->pipewire_data.region.x, self->pipewire_data.region.y}, (vec2i){self->pipewire_data.region.width, self->pipewire_data.region.height}, actual_texture_size,
        gsr_monitor_rotation_to_rotation(self->pipewire_data.rotation), GSR_SOURCE_COLOR_RGB, self->pipewire_data.using_external_image);
        gsr_monitor_rotation_to_rotation(self->pipewire_data.rotation), capture_metadata->flip, GSR_SOURCE_COLOR_RGB, self->pipewire_data.using_external_image);

    if(self->params.record_cursor && self->texture_map.cursor_texture_id > 0 && self->pipewire_data.cursor_region.width > 0) {
        const vec2d scale = {
@@ -385,13 +394,15 @@ static int gsr_capture_portal_capture(gsr_capture *cap, gsr_capture_metadata *ca

        self->params.egl->glEnable(GL_SCISSOR_TEST);
        self->params.egl->glScissor(target_pos.x, target_pos.y, output_size.x, output_size.y);

        gsr_color_conversion_draw(color_conversion, self->texture_map.cursor_texture_id,
            (vec2i){cursor_pos.x, cursor_pos.y},
            (vec2i){self->pipewire_data.cursor_region.width * scale.x, self->pipewire_data.cursor_region.height * scale.y},
            (vec2i){0, 0},
            (vec2i){self->pipewire_data.cursor_region.width, self->pipewire_data.cursor_region.height},
            (vec2i){self->pipewire_data.cursor_region.width, self->pipewire_data.cursor_region.height},
            gsr_monitor_rotation_to_rotation(self->pipewire_data.rotation), GSR_SOURCE_COLOR_RGB, false);
            gsr_monitor_rotation_to_rotation(self->pipewire_data.rotation), capture_metadata->flip, GSR_SOURCE_COLOR_RGB, false);

        self->params.egl->glDisable(GL_SCISSOR_TEST);
    }

@@ -458,6 +469,7 @@ gsr_capture* gsr_capture_portal_create(const gsr_capture_portal_params *params)
        .tick = NULL,
        .should_stop = gsr_capture_portal_should_stop,
        .capture_has_synchronous_task = gsr_capture_portal_capture_has_synchronous_task,
        .pre_capture = gsr_capture_portal_pre_capture,
        .capture = gsr_capture_portal_capture,
        .uses_external_image = gsr_capture_portal_uses_external_image,
        .is_damaged = gsr_capture_portal_is_damaged,

656 src/capture/v4l2.c Normal file
@@ -0,0 +1,656 @@
#include "../../include/capture/v4l2.h"
#include "../../include/color_conversion.h"
#include "../../include/egl.h"
#include "../../include/utils.h"

#include <dlfcn.h>
#include <fcntl.h>
#include <unistd.h>
#include <poll.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>
#include <linux/dma-buf.h>
#include <drm_fourcc.h>

#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <errno.h>
#include <assert.h>

#define TJPF_RGB 0
#define TJPF_RGBA 7
#define TJFLAG_FASTDCT 2048

#define NUM_BUFFERS 2
#define NUM_PBOS 2

typedef void* tjhandle;
typedef tjhandle (*FUNC_tjInitDecompress)(void);
typedef int (*FUNC_tjDestroy)(tjhandle handle);
typedef int (*FUNC_tjDecompressHeader2)(tjhandle handle,
    unsigned char *jpegBuf, unsigned long jpegSize,
    int *width, int *height, int *jpegSubsamp);
typedef int (*FUNC_tjDecompress2)(tjhandle handle, const unsigned char *jpegBuf,
    unsigned long jpegSize, unsigned char *dstBuf,
    int width, int pitch, int height, int pixelFormat,
    int flags);
typedef char* (*FUNC_tjGetErrorStr2)(tjhandle handle);
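The typedefs above exist so libturbojpeg can be loaded with `dlopen`/`dlsym` at runtime instead of being a hard link-time dependency (which is why it is only a runtime requirement for the mjpeg path). A minimal sketch of that pattern, demonstrated here with libm's `cos` as a stand-in library and symbol:

```c
#include <dlfcn.h>
#include <stddef.h>

typedef double (*FUNC_cos)(double);

// Resolve a symbol from a shared library at runtime. Returns NULL when the
// library or symbol is missing, so the caller can degrade gracefully, the
// same way mjpeg capture can be disabled when libturbojpeg.so is absent.
static FUNC_cos load_cos(void) {
    void *lib = dlopen("libm.so.6", RTLD_LAZY);
    if(!lib)
        return NULL; // library not installed (name is glibc-specific here)
    return (FUNC_cos)dlsym(lib, "cos");
}
```

On older systems linking may require `-ldl`; since glibc 2.34 the `dl*` functions live in libc itself.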

typedef enum {
    V4L2_BUFFER_TYPE_DMABUF,
    V4L2_BUFFER_TYPE_MMAP
} v4l2_buffer_type;

typedef struct {
    gsr_capture_v4l2_params params;
    vec2i capture_size;

    bool should_stop;
    bool stop_is_error;

    int fd;
    int dmabuf_fd[NUM_BUFFERS];
    EGLImage dma_image[NUM_BUFFERS];
    unsigned int texture_id;
    bool got_first_frame;

    void *dmabuf_map[NUM_BUFFERS];
    size_t dmabuf_size[NUM_BUFFERS];
    unsigned int pbos[NUM_PBOS];
    unsigned int pbo_index;

    v4l2_buffer_type buffer_type;

    void *libturbojpeg_lib;
    FUNC_tjInitDecompress tjInitDecompress;
    FUNC_tjDestroy tjDestroy;
    FUNC_tjDecompressHeader2 tjDecompressHeader2;
    FUNC_tjDecompress2 tjDecompress2;
    FUNC_tjGetErrorStr2 tjGetErrorStr2;
    tjhandle jpeg_decompressor;

    double capture_start_time;
} gsr_capture_v4l2;

static int xioctl(int fd, unsigned long request, void *arg) {
    int r;

    do {
        r = ioctl(fd, request, arg);
    } while (-1 == r && EINTR == errno);

    return r;
}
|
||||
|
||||
static void gsr_capture_v4l2_stop(gsr_capture_v4l2 *self) {
|
||||
self->params.egl->glDeleteBuffers(NUM_PBOS, self->pbos);
|
||||
for(int i = 0; i < NUM_PBOS; ++i) {
|
||||
self->pbos[i] = 0;
|
||||
}
|
||||
|
||||
if(self->texture_id) {
|
||||
self->params.egl->glDeleteTextures(1, &self->texture_id);
|
||||
self->texture_id = 0;
|
||||
}
|
||||
|
||||
for(int i = 0; i < NUM_BUFFERS; ++i) {
|
||||
if(self->dmabuf_map[i]) {
|
||||
munmap(self->dmabuf_map[i], self->dmabuf_size[i]);
|
||||
self->dmabuf_map[i] = NULL;
|
||||
}
|
||||
|
||||
if(self->dma_image[i]) {
|
||||
self->params.egl->eglDestroyImage(self->params.egl->egl_display, self->dma_image[i]);
|
||||
self->dma_image[i] = NULL;
|
||||
}
|
||||
|
||||
if(self->dmabuf_fd[i] > 0) {
|
||||
close(self->dmabuf_fd[i]);
|
||||
self->dmabuf_fd[i] = 0;
|
||||
}
|
||||
}
|
||||
|
||||
if(self->fd > 0) {
|
||||
xioctl(self->fd, VIDIOC_STREAMOFF, &(enum v4l2_buf_type){V4L2_BUF_TYPE_VIDEO_CAPTURE});
|
||||
close(self->fd);
|
||||
self->fd = 0;
|
||||
}
|
||||
|
||||
if(self->jpeg_decompressor) {
|
||||
self->tjDestroy(self->jpeg_decompressor);
|
||||
self->jpeg_decompressor = NULL;
|
||||
}
|
||||
|
||||
if(self->libturbojpeg_lib) {
|
||||
dlclose(self->libturbojpeg_lib);
|
||||
self->libturbojpeg_lib = NULL;
|
||||
}
|
||||
}
|
||||
|
||||
static void gsr_capture_v4l2_reset_cropping(gsr_capture_v4l2 *self) {
    struct v4l2_cropcap cropcap = {
        .type = V4L2_BUF_TYPE_VIDEO_CAPTURE
    };
    if(xioctl(self->fd, VIDIOC_CROPCAP, &cropcap) == 0) {
        struct v4l2_crop crop = {
            .type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
            .c = cropcap.defrect /* reset to default */
        };

        if(xioctl(self->fd, VIDIOC_S_CROP, &crop) == -1) {
            switch (errno) {
                case EINVAL:
                    /* Cropping not supported. */
                    break;
                default:
                    /* Errors ignored. */
                    break;
            }
        }
    } else {
        /* Errors ignored. */
    }
}

gsr_capture_v4l2_supported_pixfmts gsr_capture_v4l2_get_supported_pixfmts(int fd) {
    gsr_capture_v4l2_supported_pixfmts result = {0};

    struct v4l2_fmtdesc fmt = {
        .type = V4L2_BUF_TYPE_VIDEO_CAPTURE
    };
    while(xioctl(fd, VIDIOC_ENUM_FMT, &fmt) == 0) {
        //fprintf(stderr, "fmt: %d, desc: %s, flags: %d\n", fmt.pixelformat, fmt.description, fmt.flags);
        switch(fmt.pixelformat) {
            case V4L2_PIX_FMT_YUYV:
                result.yuyv = true;
                break;
            case V4L2_PIX_FMT_MJPEG:
                result.mjpeg = true;
                break;
        }
        ++fmt.index;
    }

    return result;
}

static uint32_t gsr_pixfmt_to_v4l2_pixfmt(gsr_capture_v4l2_pixfmt pixfmt) {
    switch(pixfmt) {
        case GSR_CAPTURE_V4L2_PIXFMT_AUTO:
            assert(false);
            break;
        case GSR_CAPTURE_V4L2_PIXFMT_YUYV:
            return V4L2_PIX_FMT_YUYV;
        case GSR_CAPTURE_V4L2_PIXFMT_MJPEG:
            return V4L2_PIX_FMT_MJPEG;
    }
    assert(false);
    return V4L2_PIX_FMT_YUYV;
}

static bool gsr_capture_v4l2_validate_pixfmt(gsr_capture_v4l2 *self, const gsr_capture_v4l2_supported_pixfmts supported_pixfmts) {
    switch(self->params.pixfmt) {
        case GSR_CAPTURE_V4L2_PIXFMT_AUTO: {
            if(supported_pixfmts.yuyv) {
                self->params.pixfmt = GSR_CAPTURE_V4L2_PIXFMT_YUYV;
            } else if(supported_pixfmts.mjpeg) {
                self->params.pixfmt = GSR_CAPTURE_V4L2_PIXFMT_MJPEG;
            } else {
                fprintf(stderr, "gsr error: gsr_capture_v4l2_create: %s doesn't support yuyv nor mjpeg. GPU Screen Recorder supports only yuyv and mjpeg at the moment. Report this as an issue, see: https://git.dec05eba.com/?p=about\n", self->params.device_path);
                return false;
            }
            break;
        }
        case GSR_CAPTURE_V4L2_PIXFMT_YUYV: {
            if(!supported_pixfmts.yuyv) {
                fprintf(stderr, "gsr error: gsr_capture_v4l2_create: %s doesn't support yuyv. Try recording with -pixfmt mjpeg or -pixfmt auto instead\n", self->params.device_path);
                return false;
            }
            break;
        }
        case GSR_CAPTURE_V4L2_PIXFMT_MJPEG: {
            if(!supported_pixfmts.mjpeg) {
                fprintf(stderr, "gsr error: gsr_capture_v4l2_create: %s doesn't support mjpeg. Try recording with -pixfmt yuyv or -pixfmt auto instead\n", self->params.device_path);
                return false;
            }
            break;
        }
    }
    return true;
}

static bool gsr_capture_v4l2_create_pbos(gsr_capture_v4l2 *self, int width, int height) {
    self->pbo_index = 0;

    self->params.egl->glGenBuffers(NUM_PBOS, self->pbos);
    for(int i = 0; i < NUM_PBOS; ++i) {
        if(self->pbos[i] == 0) {
            fprintf(stderr, "gsr error: gsr_capture_v4l2_create_pbos: failed to create pixel buffer objects\n");
            return false;
        }

        self->params.egl->glBindBuffer(GL_PIXEL_UNPACK_BUFFER, self->pbos[i]);
        self->params.egl->glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height * 4, 0, GL_DYNAMIC_DRAW);
    }

    self->params.egl->glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    return true;
}

static bool gsr_capture_v4l2_map_buffer(gsr_capture_v4l2 *self, const struct v4l2_format *fmt) {
    switch(self->params.pixfmt) {
        case GSR_CAPTURE_V4L2_PIXFMT_AUTO: {
            assert(false);
            return false;
        }
        case GSR_CAPTURE_V4L2_PIXFMT_YUYV: {
            for(int i = 0; i < NUM_BUFFERS; ++i) {
                self->dma_image[i] = self->params.egl->eglCreateImage(self->params.egl->egl_display, 0, EGL_LINUX_DMA_BUF_EXT, NULL, (intptr_t[]) {
                    EGL_WIDTH, fmt->fmt.pix.width,
                    EGL_HEIGHT, fmt->fmt.pix.height,
                    EGL_LINUX_DRM_FOURCC_EXT, DRM_FORMAT_YUYV,
                    EGL_DMA_BUF_PLANE0_FD_EXT, self->dmabuf_fd[i],
                    EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
                    EGL_DMA_BUF_PLANE0_PITCH_EXT, fmt->fmt.pix.bytesperline,
                    EGL_NONE
                });
                if(!self->dma_image[i]) {
                    fprintf(stderr, "gsr error: gsr_capture_v4l2_map_buffer: eglCreateImage failed, error: %d\n", self->params.egl->eglGetError());
                    return false;
                }
            }

            self->params.egl->glGenTextures(1, &self->texture_id);
            self->params.egl->glBindTexture(GL_TEXTURE_EXTERNAL_OES, self->texture_id);
            self->params.egl->glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            self->params.egl->glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            self->params.egl->glBindTexture(GL_TEXTURE_EXTERNAL_OES, 0);
            if(self->texture_id == 0) {
                fprintf(stderr, "gsr error: gsr_capture_v4l2_map_buffer: failed to create texture\n");
                return false;
            }

            self->buffer_type = V4L2_BUFFER_TYPE_DMABUF;
            break;
        }
        case GSR_CAPTURE_V4L2_PIXFMT_MJPEG: {
            for(int i = 0; i < NUM_BUFFERS; ++i) {
                self->dmabuf_size[i] = fmt->fmt.pix.sizeimage;
                self->dmabuf_map[i] = mmap(NULL, fmt->fmt.pix.sizeimage, PROT_READ, MAP_SHARED, self->dmabuf_fd[i], 0);
                if(self->dmabuf_map[i] == MAP_FAILED) {
                    fprintf(stderr, "gsr error: gsr_capture_v4l2_map_buffer: mmap failed, error: %s\n", strerror(errno));
                    return false;
                }
            }

            // GL_RGBA is intentionally used here instead of GL_RGB, because the performance is much better when using glTexSubImage2D (22% cpu usage compared to 38% cpu usage)
            self->texture_id = gl_create_texture(self->params.egl, fmt->fmt.pix.width, fmt->fmt.pix.height, GL_RGBA8, GL_RGBA, GL_LINEAR);
            if(self->texture_id == 0) {
                fprintf(stderr, "gsr error: gsr_capture_v4l2_map_buffer: failed to create texture\n");
                return false;
            }

            if(!gsr_capture_v4l2_create_pbos(self, fmt->fmt.pix.width, fmt->fmt.pix.height))
                return false;

            self->buffer_type = V4L2_BUFFER_TYPE_MMAP;
            break;
        }
    }
    return true;
}

static int gsr_capture_v4l2_setup(gsr_capture_v4l2 *self) {
    self->fd = open(self->params.device_path, O_RDWR | O_NONBLOCK);
    if(self->fd < 0) {
        fprintf(stderr, "gsr error: gsr_capture_v4l2_create: failed to open %s, error: %s\n", self->params.device_path, strerror(errno));
        return -1;
    }

    struct v4l2_capability cap = {0};
    if(xioctl(self->fd, VIDIOC_QUERYCAP, &cap) == -1) {
        if(EINVAL == errno) {
            fprintf(stderr, "gsr error: gsr_capture_v4l2_create: %s isn't a v4l2 device\n", self->params.device_path);
            return -1;
        } else {
            fprintf(stderr, "gsr error: gsr_capture_v4l2_create: VIDIOC_QUERYCAP failed, error: %s\n", strerror(errno));
            return -1;
        }
    }

    if(!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE)) {
        fprintf(stderr, "gsr error: gsr_capture_v4l2_create: %s isn't a video capture device\n", self->params.device_path);
        return -1;
    }

    if(!(cap.capabilities & V4L2_CAP_STREAMING)) {
        fprintf(stderr, "gsr error: gsr_capture_v4l2_create: %s doesn't support streaming i/o\n", self->params.device_path);
        return -1;
    }

    gsr_capture_v4l2_reset_cropping(self);

    const gsr_capture_v4l2_supported_pixfmts supported_pixfmts = gsr_capture_v4l2_get_supported_pixfmts(self->fd);
    if(!gsr_capture_v4l2_validate_pixfmt(self, supported_pixfmts))
        return -1;

    if(self->params.pixfmt == GSR_CAPTURE_V4L2_PIXFMT_MJPEG) {
        dlerror(); /* clear */
        self->libturbojpeg_lib = dlopen("libturbojpeg.so.0", RTLD_LAZY);
        if(!self->libturbojpeg_lib) {
            fprintf(stderr, "gsr error: gsr_capture_v4l2_create: failed to load libturbojpeg.so.0 which is required for camera mjpeg capture, error: %s\n", dlerror());
            return -1;
        }

        self->tjInitDecompress = (FUNC_tjInitDecompress)dlsym(self->libturbojpeg_lib, "tjInitDecompress");
        self->tjDestroy = (FUNC_tjDestroy)dlsym(self->libturbojpeg_lib, "tjDestroy");
        self->tjDecompressHeader2 = (FUNC_tjDecompressHeader2)dlsym(self->libturbojpeg_lib, "tjDecompressHeader2");
        self->tjDecompress2 = (FUNC_tjDecompress2)dlsym(self->libturbojpeg_lib, "tjDecompress2");
        self->tjGetErrorStr2 = (FUNC_tjGetErrorStr2)dlsym(self->libturbojpeg_lib, "tjGetErrorStr2");

        if(!self->tjInitDecompress || !self->tjDestroy || !self->tjDecompressHeader2 || !self->tjDecompress2 || !self->tjGetErrorStr2) {
            fprintf(stderr, "gsr error: gsr_capture_v4l2_create: libturbojpeg.so.0 is missing functions. The libturbojpeg version installed on your system might be outdated\n");
            return -1;
        }

        self->jpeg_decompressor = self->tjInitDecompress();
        if(!self->jpeg_decompressor) {
            fprintf(stderr, "gsr error: gsr_capture_v4l2_create: failed to create jpeg decompressor\n");
            return -1;
        }
    }

    const uint32_t v4l2_pixfmt = gsr_pixfmt_to_v4l2_pixfmt(self->params.pixfmt);
    struct v4l2_format fmt = {
        .type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
        .fmt.pix.pixelformat = v4l2_pixfmt
    };
    if(xioctl(self->fd, VIDIOC_S_FMT, &fmt) == -1) {
        fprintf(stderr, "gsr error: gsr_capture_v4l2_create: VIDIOC_S_FMT failed, error: %s\n", strerror(errno));
        return -1;
    }

    if(fmt.fmt.pix.pixelformat != v4l2_pixfmt) {
        fprintf(stderr, "gsr error: gsr_capture_v4l2_create: pixel format isn't as requested (got pixel format: %u, requested: %u), error: %s\n", fmt.fmt.pix.pixelformat, v4l2_pixfmt, strerror(errno));
        return -1;
    }

    /* Buggy driver paranoia */
    const uint32_t min_stride = fmt.fmt.pix.width * 2; // * 2 because the stride is width (Y) + width/2 (U) + width/2 (V)
    if(fmt.fmt.pix.bytesperline < min_stride)
        fmt.fmt.pix.bytesperline = min_stride;

    const uint32_t min_size = fmt.fmt.pix.bytesperline * fmt.fmt.pix.height;
    if(fmt.fmt.pix.sizeimage < min_size)
        fmt.fmt.pix.sizeimage = min_size;

    self->capture_size.x = fmt.fmt.pix.width;
    self->capture_size.y = fmt.fmt.pix.height;

    struct v4l2_requestbuffers reqbuf = {
        .type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
        .memory = V4L2_MEMORY_MMAP,
        .count = NUM_BUFFERS
    };
    if(xioctl(self->fd, VIDIOC_REQBUFS, &reqbuf) == -1) {
        fprintf(stderr, "gsr error: gsr_capture_v4l2_create: VIDIOC_REQBUFS failed, error: %s\n", strerror(errno));
        return -1;
    }

    for(int i = 0; i < NUM_BUFFERS; ++i) {
        struct v4l2_exportbuffer expbuf = {
            .type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
            .index = i,
            .flags = O_RDONLY
        };
        if(xioctl(self->fd, VIDIOC_EXPBUF, &expbuf) == -1) {
            fprintf(stderr, "gsr error: gsr_capture_v4l2_create: VIDIOC_EXPBUF failed, error: %s\n", strerror(errno));
            return -1;
        }
        self->dmabuf_fd[i] = expbuf.fd;
    }

    if(!gsr_capture_v4l2_map_buffer(self, &fmt))
        return -1;

    for(int i = 0; i < NUM_BUFFERS; ++i) {
        struct v4l2_buffer buf = {
            .type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
            .index = i,
            .memory = V4L2_MEMORY_MMAP
        };
        xioctl(self->fd, VIDIOC_QBUF, &buf);
    }

    if(xioctl(self->fd, VIDIOC_STREAMON, &(enum v4l2_buf_type){V4L2_BUF_TYPE_VIDEO_CAPTURE})) {
        fprintf(stderr, "gsr error: gsr_capture_v4l2_create: VIDIOC_STREAMON failed, error: %s\n", strerror(errno));
        return -1;
    }

    fprintf(stderr, "gsr info: gsr_capture_v4l2_create: waiting for camera %s to be ready\n", self->params.device_path);
    return 0;
}

static int gsr_capture_v4l2_start(gsr_capture *cap, gsr_capture_metadata *capture_metadata) {
    gsr_capture_v4l2 *self = cap->priv;

    const int result = gsr_capture_v4l2_setup(self);
    if(result != 0) {
        gsr_capture_v4l2_stop(self);
        return result;
    }

    if(self->params.output_resolution.x == 0 && self->params.output_resolution.y == 0) {
        capture_metadata->video_size = self->capture_size;
    } else {
        self->params.output_resolution = scale_keep_aspect_ratio(self->capture_size, self->params.output_resolution);
        capture_metadata->video_size = self->params.output_resolution;
    }

    self->capture_start_time = clock_get_monotonic_seconds();
    return 0;
}

static void gsr_capture_v4l2_tick(gsr_capture *cap) {
    gsr_capture_v4l2 *self = cap->priv;
    if(!self->got_first_frame && !self->should_stop) {
        const double timeout_sec = 5.0;
        if(clock_get_monotonic_seconds() - self->capture_start_time >= timeout_sec) {
            fprintf(stderr, "gsr error: gsr_capture_v4l2_capture: didn't receive camera data in %f seconds\n", timeout_sec);
            self->should_stop = true;
            self->stop_is_error = true;
        }
    }
}

static void gsr_capture_v4l2_decode_jpeg_to_texture(gsr_capture_v4l2 *self, const struct v4l2_buffer *buf) {
    int jpeg_subsamp = 0;
    int jpeg_width = 0;
    int jpeg_height = 0;
    if(self->tjDecompressHeader2(self->jpeg_decompressor, self->dmabuf_map[buf->index], buf->bytesused, &jpeg_width, &jpeg_height, &jpeg_subsamp) != 0) {
        fprintf(stderr, "gsr error: gsr_capture_v4l2_capture: failed to decompress camera jpeg header data, error: %s\n", self->tjGetErrorStr2(self->jpeg_decompressor));
        return;
    }

    if(jpeg_width != self->capture_size.x || jpeg_height != self->capture_size.y) {
        fprintf(stderr, "gsr error: gsr_capture_v4l2_capture: got jpeg data of incorrect dimensions. Expected %dx%d, got %dx%d\n", self->capture_size.x, self->capture_size.y, jpeg_width, jpeg_height);
        return;
    }

    self->params.egl->glBindTexture(GL_TEXTURE_2D, self->texture_id);

    // Ping-pong between the PBOs: upload the frame decoded in the previous call from the
    // current PBO while the new frame is decompressed into the next PBO, so the CPU jpeg
    // decode and the GPU texture upload can overlap.
    self->pbo_index = (self->pbo_index + 1) % NUM_PBOS;
    const unsigned int next_pbo_index = (self->pbo_index + 1) % NUM_PBOS;

    self->params.egl->glBindBuffer(GL_PIXEL_UNPACK_BUFFER, self->pbos[self->pbo_index]);
    self->params.egl->glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, self->capture_size.x, self->capture_size.y, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    // Orphan the next PBO before mapping it so the driver doesn't stall on a buffer that
    // may still be in use by a pending transfer.
    self->params.egl->glBindBuffer(GL_PIXEL_UNPACK_BUFFER, self->pbos[next_pbo_index]);
    self->params.egl->glBufferData(GL_PIXEL_UNPACK_BUFFER, self->capture_size.x * self->capture_size.y * 4, 0, GL_DYNAMIC_DRAW);

    void *mapped_buffer = self->params.egl->glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    if(mapped_buffer) {
        if(self->tjDecompress2(self->jpeg_decompressor, self->dmabuf_map[buf->index], buf->bytesused, mapped_buffer, jpeg_width, 0, jpeg_height, TJPF_RGBA, TJFLAG_FASTDCT) != 0)
            fprintf(stderr, "gsr error: gsr_capture_v4l2_capture: failed to decompress camera jpeg data, error: %s\n", self->tjGetErrorStr2(self->jpeg_decompressor));
        self->params.egl->glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    }

    self->params.egl->glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    self->params.egl->glBindTexture(GL_TEXTURE_2D, 0);
}

static int gsr_capture_v4l2_capture(gsr_capture *cap, gsr_capture_metadata *capture_metadata, gsr_color_conversion *color_conversion) {
    (void)color_conversion;
    gsr_capture_v4l2 *self = cap->priv;

    struct v4l2_buffer buf = {
        .type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
        .memory = V4L2_MEMORY_MMAP
    };

    xioctl(self->fd, VIDIOC_DQBUF, &buf);
    if(buf.bytesused > 0 && !(buf.flags & V4L2_BUF_FLAG_ERROR)) {
        if(!self->got_first_frame)
            fprintf(stderr, "gsr info: gsr_capture_v4l2_capture: camera %s is now ready\n", self->params.device_path);
        self->got_first_frame = true;

        switch(self->buffer_type) {
            case V4L2_BUFFER_TYPE_DMABUF: {
                self->params.egl->glBindTexture(GL_TEXTURE_EXTERNAL_OES, self->texture_id);
                self->params.egl->glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, self->dma_image[buf.index]);
                self->params.egl->glBindTexture(GL_TEXTURE_EXTERNAL_OES, 0);
                break;
            }
            case V4L2_BUFFER_TYPE_MMAP: {
                //xioctl(self->dmabuf_fd[buf.index], DMA_BUF_IOCTL_SYNC, &(struct dma_buf_sync){ .flags = DMA_BUF_SYNC_START });
                gsr_capture_v4l2_decode_jpeg_to_texture(self, &buf);
                //xioctl(self->dmabuf_fd[buf.index], DMA_BUF_IOCTL_SYNC, &(struct dma_buf_sync){ .flags = DMA_BUF_SYNC_END });
                break;
            }
        }
    }
    xioctl(self->fd, VIDIOC_QBUF, &buf);

    const vec2i output_size = scale_keep_aspect_ratio(self->capture_size, capture_metadata->recording_size);
    const vec2i target_pos = gsr_capture_get_target_position(output_size, capture_metadata);

    //if(self->got_first_frame) {
        gsr_color_conversion_draw(color_conversion, self->texture_id,
            target_pos, output_size,
            (vec2i){0, 0}, self->capture_size, self->capture_size,
            GSR_ROT_0, capture_metadata->flip, GSR_SOURCE_COLOR_RGB, self->buffer_type == V4L2_BUFFER_TYPE_DMABUF);
    //}

    return self->got_first_frame ? 0 : -1;
}

static bool gsr_capture_v4l2_uses_external_image(gsr_capture *cap) {
    (void)cap;
    return true;
}

static bool gsr_capture_v4l2_should_stop(gsr_capture *cap, bool *err) {
    gsr_capture_v4l2 *self = cap->priv;
    if(err)
        *err = self->stop_is_error;
    return self->should_stop;
}

static bool gsr_capture_v4l2_is_damaged(gsr_capture *cap) {
    gsr_capture_v4l2 *self = cap->priv;
    struct pollfd poll_data = {
        .fd = self->fd,
        .events = POLLIN,
        .revents = 0
    };
    return poll(&poll_data, 1, 0) > 0 && (poll_data.revents & POLLIN);
}

static void gsr_capture_v4l2_clear_damage(gsr_capture *cap) {
    gsr_capture_v4l2 *self = cap->priv;
    (void)self;
}

static void gsr_capture_v4l2_destroy(gsr_capture *cap) {
    gsr_capture_v4l2 *self = cap->priv;
    if(cap->priv) {
        gsr_capture_v4l2_stop(self);
        free(cap->priv);
        cap->priv = NULL;
    }
    free(cap);
}

gsr_capture* gsr_capture_v4l2_create(const gsr_capture_v4l2_params *params) {
    if(!params) {
        fprintf(stderr, "gsr error: gsr_capture_v4l2_create params is NULL\n");
        return NULL;
    }

    gsr_capture *cap = calloc(1, sizeof(gsr_capture));
    if(!cap)
        return NULL;

    gsr_capture_v4l2 *cap_camera = calloc(1, sizeof(gsr_capture_v4l2));
    if(!cap_camera) {
        free(cap);
        return NULL;
    }

    cap_camera->params = *params;

    *cap = (gsr_capture) {
        .start = gsr_capture_v4l2_start,
        .tick = gsr_capture_v4l2_tick,
        .should_stop = gsr_capture_v4l2_should_stop,
        .capture = gsr_capture_v4l2_capture,
        .uses_external_image = gsr_capture_v4l2_uses_external_image,
        .is_damaged = gsr_capture_v4l2_is_damaged,
        .clear_damage = gsr_capture_v4l2_clear_damage,
        .destroy = gsr_capture_v4l2_destroy,
        .priv = cap_camera
    };

    return cap;
}

void gsr_capture_v4l2_list_devices(v4l2_devices_query_callback callback, void *userdata) {
    void *libturbojpeg_lib = dlopen("libturbojpeg.so.0", RTLD_LAZY);
    const bool has_libturbojpeg_lib = libturbojpeg_lib != NULL;
    if(libturbojpeg_lib)
        dlclose(libturbojpeg_lib);

    char v4l2_device_path[128];
    for(int i = 0; i < 8; ++i) {
        snprintf(v4l2_device_path, sizeof(v4l2_device_path), "/dev/video%d", i);

        const int fd = open(v4l2_device_path, O_RDWR | O_NONBLOCK);
        if(fd < 0)
            continue;

        struct v4l2_capability cap = {0};
        if(xioctl(fd, VIDIOC_QUERYCAP, &cap) == -1)
            goto next;

        if(!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE))
            goto next;

        if(!(cap.capabilities & V4L2_CAP_STREAMING))
            goto next;

        struct v4l2_format fmt = {
            .type = V4L2_BUF_TYPE_VIDEO_CAPTURE
        };
        if(xioctl(fd, VIDIOC_G_FMT, &fmt) == -1)
            goto next;

        const gsr_capture_v4l2_supported_pixfmts supported_pixfmts = gsr_capture_v4l2_get_supported_pixfmts(fd);
        if(supported_pixfmts.yuyv || (supported_pixfmts.mjpeg && has_libturbojpeg_lib))
            callback(v4l2_device_path, supported_pixfmts, (vec2i){ fmt.fmt.pix.width, fmt.fmt.pix.height }, userdata);

        next:
        close(fd);
    }
}
@@ -23,6 +23,7 @@ typedef struct {
    bool init_new_window;

    Window window;
    vec2i window_pos;
    vec2i window_size;
    vec2i texture_size;
    double window_resize_timer;
@@ -31,14 +32,11 @@ typedef struct {

    Atom net_active_window_atom;

    gsr_cursor cursor;

    bool clear_background;
} gsr_capture_xcomposite;

static void gsr_capture_xcomposite_stop(gsr_capture_xcomposite *self) {
    window_texture_deinit(&self->window_texture);
    gsr_cursor_deinit(&self->cursor);
}

static int max_int(int a, int b) {
@@ -81,6 +79,9 @@ static int gsr_capture_xcomposite_start(gsr_capture *cap, gsr_capture_metadata *
        return -1;
    }

    self->window_pos.x = attr.x;
    self->window_pos.y = attr.y;

    self->window_size.x = max_int(attr.width, 0);
    self->window_size.y = max_int(attr.height, 0);

@@ -95,20 +96,13 @@ static int gsr_capture_xcomposite_start(gsr_capture *cap, gsr_capture_metadata *
        return -1;
    }

    if(gsr_cursor_init(&self->cursor, self->params.egl, self->display) != 0) {
        gsr_capture_xcomposite_stop(self);
        return -1;
    }

    self->texture_size.x = self->window_texture.window_width;
    self->texture_size.y = self->window_texture.window_height;

    if(self->params.output_resolution.x == 0 && self->params.output_resolution.y == 0) {
        capture_metadata->video_width = self->texture_size.x;
        capture_metadata->video_height = self->texture_size.y;
        capture_metadata->video_size = self->texture_size;
    } else {
        capture_metadata->video_width = self->params.output_resolution.x;
        capture_metadata->video_height = self->params.output_resolution.y;
        capture_metadata->video_size = self->params.output_resolution;
    }

    self->window_resize_timer = clock_get_monotonic_seconds();
@@ -137,6 +131,9 @@ static void gsr_capture_xcomposite_tick(gsr_capture *cap) {
    if(!XGetWindowAttributes(self->display, self->window, &attr))
        fprintf(stderr, "gsr error: gsr_capture_xcomposite_tick failed: invalid window id: %lu\n", self->window);

    self->window_pos.x = attr.x;
    self->window_pos.y = attr.y;

    self->window_size.x = max_int(attr.width, 0);
    self->window_size.y = max_int(attr.height, 0);

@@ -190,6 +187,9 @@ static void gsr_capture_xcomposite_on_event(gsr_capture *cap, gsr_egl *egl) {
            break;
        }
        case ConfigureNotify: {
            self->window_pos.x = xev->xconfigure.x;
            self->window_pos.y = xev->xconfigure.y;

            /* Window resized */
            if(xev->xconfigure.window == self->window && (xev->xconfigure.width != self->window_size.x || xev->xconfigure.height != self->window_size.y)) {
                self->window_size.x = max_int(xev->xconfigure.width, 0);
@@ -207,8 +207,6 @@ static void gsr_capture_xcomposite_on_event(gsr_capture *cap, gsr_egl *egl) {
            break;
        }
    }

    gsr_cursor_on_event(&self->cursor, xev);
}

static bool gsr_capture_xcomposite_should_stop(gsr_capture *cap, bool *err) {
@@ -224,16 +222,21 @@ static bool gsr_capture_xcomposite_should_stop(gsr_capture *cap, bool *err) {
    return false;
}

static int gsr_capture_xcomposite_capture(gsr_capture *cap, gsr_capture_metadata *capture_metadata, gsr_color_conversion *color_conversion) {
static void gsr_capture_xcomposite_pre_capture(gsr_capture *cap, gsr_capture_metadata *capture_metadata, gsr_color_conversion *color_conversion) {
    (void)capture_metadata;
    gsr_capture_xcomposite *self = cap->priv;

    if(self->clear_background) {
        self->clear_background = false;
        gsr_color_conversion_clear(color_conversion);
        color_conversion->schedule_clear = true;
    }
}

    const vec2i output_size = scale_keep_aspect_ratio(self->texture_size, (vec2i){capture_metadata->recording_width, capture_metadata->recording_height});
    const vec2i target_pos = { max_int(0, capture_metadata->video_width / 2 - output_size.x / 2), max_int(0, capture_metadata->video_height / 2 - output_size.y / 2) };
static int gsr_capture_xcomposite_capture(gsr_capture *cap, gsr_capture_metadata *capture_metadata, gsr_color_conversion *color_conversion) {
    gsr_capture_xcomposite *self = cap->priv;

    const vec2i output_size = scale_keep_aspect_ratio(self->texture_size, capture_metadata->recording_size);
    const vec2i target_pos = gsr_capture_get_target_position(output_size, capture_metadata);

    //self->params.egl->glFlush();
    //self->params.egl->glFinish();
@@ -241,28 +244,28 @@ static int gsr_capture_xcomposite_capture(gsr_capture *cap, gsr_capture_metadata
    gsr_color_conversion_draw(color_conversion, window_texture_get_opengl_texture_id(&self->window_texture),
        target_pos, output_size,
        (vec2i){0, 0}, self->texture_size, self->texture_size,
        GSR_ROT_0, GSR_SOURCE_COLOR_RGB, false);
        GSR_ROT_0, capture_metadata->flip, GSR_SOURCE_COLOR_RGB, false);

    if(self->params.record_cursor && self->cursor.visible) {
    if(self->params.record_cursor && self->params.cursor->visible) {
        const vec2d scale = {
            self->texture_size.x == 0 ? 0 : (double)output_size.x / (double)self->texture_size.x,
            self->texture_size.y == 0 ? 0 : (double)output_size.y / (double)self->texture_size.y
        };

        gsr_cursor_tick(&self->cursor, self->window);

        const vec2i cursor_pos = {
            target_pos.x + (self->cursor.position.x - self->cursor.hotspot.x) * scale.x,
            target_pos.y + (self->cursor.position.y - self->cursor.hotspot.y) * scale.y
            target_pos.x + (self->params.cursor->position.x - self->params.cursor->hotspot.x - self->window_pos.x) * scale.x,
            target_pos.y + (self->params.cursor->position.y - self->params.cursor->hotspot.y - self->window_pos.y) * scale.y
        };

        if(cursor_pos.x < target_pos.x || cursor_pos.x + self->cursor.size.x > target_pos.x + output_size.x || cursor_pos.y < target_pos.y || cursor_pos.y + self->cursor.size.y > target_pos.y + output_size.y)
            self->clear_background = true;
        self->params.egl->glEnable(GL_SCISSOR_TEST);
        self->params.egl->glScissor(target_pos.x, target_pos.y, output_size.x, output_size.y);

        gsr_color_conversion_draw(color_conversion, self->cursor.texture_id,
            cursor_pos, (vec2i){self->cursor.size.x * scale.x, self->cursor.size.y * scale.y},
            (vec2i){0, 0}, self->cursor.size, self->cursor.size,
            GSR_ROT_0, GSR_SOURCE_COLOR_RGB, false);
        gsr_color_conversion_draw(color_conversion, self->params.cursor->texture_id,
            cursor_pos, (vec2i){self->params.cursor->size.x * scale.x, self->params.cursor->size.y * scale.y},
            (vec2i){0, 0}, self->params.cursor->size, self->params.cursor->size,
            GSR_ROT_0, capture_metadata->flip, GSR_SOURCE_COLOR_RGB, false);

        self->params.egl->glDisable(GL_SCISSOR_TEST);
    }

    //self->params.egl->glFlush();
@@ -309,6 +312,7 @@ gsr_capture* gsr_capture_xcomposite_create(const gsr_capture_xcomposite_params *
        .on_event = gsr_capture_xcomposite_on_event,
        .tick = gsr_capture_xcomposite_tick,
        .should_stop = gsr_capture_xcomposite_should_stop,
        .pre_capture = gsr_capture_xcomposite_pre_capture,
        .capture = gsr_capture_xcomposite_capture,
        .uses_external_image = NULL,
        .get_window_id = gsr_capture_xcomposite_get_window_id,

@@ -16,7 +16,6 @@
 typedef struct {
     gsr_capture_ximage_params params;
     Display *display;
-    gsr_cursor cursor;
     gsr_monitor monitor;
     vec2i capture_pos;
     vec2i capture_size;
@@ -25,7 +24,6 @@ typedef struct {
 } gsr_capture_ximage;
 
 static void gsr_capture_ximage_stop(gsr_capture_ximage *self) {
-    gsr_cursor_deinit(&self->cursor);
     if(self->texture_id) {
         self->params.egl->glDeleteTextures(1, &self->texture_id);
         self->texture_id = 0;
@@ -40,11 +38,6 @@ static int gsr_capture_ximage_start(gsr_capture *cap, gsr_capture_metadata *capt
     gsr_capture_ximage *self = cap->priv;
     self->root_window = DefaultRootWindow(self->display);
 
-    if(gsr_cursor_init(&self->cursor, self->params.egl, self->display) != 0) {
-        gsr_capture_ximage_stop(self);
-        return -1;
-    }
-
     if(!get_monitor_by_name(self->params.egl, GSR_CONNECTION_X11, self->params.display_to_capture, &self->monitor)) {
         fprintf(stderr, "gsr error: gsr_capture_ximage_start: failed to find monitor by name \"%s\"\n", self->params.display_to_capture);
         gsr_capture_ximage_stop(self);
@@ -59,14 +52,11 @@ static int gsr_capture_ximage_start(gsr_capture *cap, gsr_capture_metadata *capt
 
     if(self->params.output_resolution.x > 0 && self->params.output_resolution.y > 0) {
         self->params.output_resolution = scale_keep_aspect_ratio(self->capture_size, self->params.output_resolution);
-        capture_metadata->video_width = self->params.output_resolution.x;
-        capture_metadata->video_height = self->params.output_resolution.y;
+        capture_metadata->video_size = self->params.output_resolution;
     } else if(self->params.region_size.x > 0 && self->params.region_size.y > 0) {
-        capture_metadata->video_width = self->params.region_size.x;
-        capture_metadata->video_height = self->params.region_size.y;
+        capture_metadata->video_size = self->params.region_size;
     } else {
-        capture_metadata->video_width = self->capture_size.x;
-        capture_metadata->video_height = self->capture_size.y;
+        capture_metadata->video_size = self->capture_size;
     }
 
     self->texture_id = gl_create_texture(self->params.egl, self->capture_size.x, self->capture_size.y, GL_RGB8, GL_RGB, GL_LINEAR);
@@ -79,12 +69,6 @@ static int gsr_capture_ximage_start(gsr_capture *cap, gsr_capture_metadata *capt
     return 0;
 }
 
-static void gsr_capture_ximage_on_event(gsr_capture *cap, gsr_egl *egl) {
-    gsr_capture_ximage *self = cap->priv;
-    XEvent *xev = gsr_window_get_event_data(egl->window);
-    gsr_cursor_on_event(&self->cursor, xev);
-}
-
 static bool gsr_capture_ximage_upload_to_texture(gsr_capture_ximage *self, int x, int y, int width, int height) {
     const int max_width = XWidthOfScreen(DefaultScreenOfDisplay(self->display));
     const int max_height = XHeightOfScreen(DefaultScreenOfDisplay(self->display));
@@ -137,6 +121,7 @@ static bool gsr_capture_ximage_upload_to_texture(gsr_capture_ximage *self, int x
     }
 
     self->params.egl->glBindTexture(GL_TEXTURE_2D, self->texture_id);
+    // TODO: Change to GL_RGBA for better performance? image_data needs alpha then as well
     self->params.egl->glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, image->width, image->height, GL_RGB, GL_UNSIGNED_BYTE, image_data);
     self->params.egl->glBindTexture(GL_TEXTURE_2D, 0);
     success = true;
@@ -150,35 +135,33 @@ static bool gsr_capture_ximage_upload_to_texture(gsr_capture_ximage *self, int x
 static int gsr_capture_ximage_capture(gsr_capture *cap, gsr_capture_metadata *capture_metadata, gsr_color_conversion *color_conversion) {
     gsr_capture_ximage *self = cap->priv;
 
-    const vec2i output_size = scale_keep_aspect_ratio(self->capture_size, (vec2i){capture_metadata->recording_width, capture_metadata->recording_height});
-    const vec2i target_pos = { max_int(0, capture_metadata->video_width / 2 - output_size.x / 2), max_int(0, capture_metadata->video_height / 2 - output_size.y / 2) };
+    const vec2i output_size = scale_keep_aspect_ratio(self->capture_size, capture_metadata->recording_size);
+    const vec2i target_pos = gsr_capture_get_target_position(output_size, capture_metadata);
     gsr_capture_ximage_upload_to_texture(self, self->capture_pos.x + self->params.region_position.x, self->capture_pos.y + self->params.region_position.y, self->capture_size.x, self->capture_size.y);
 
     gsr_color_conversion_draw(color_conversion, self->texture_id,
         target_pos, output_size,
         (vec2i){0, 0}, self->capture_size, self->capture_size,
-        GSR_ROT_0, GSR_SOURCE_COLOR_RGB, false);
+        GSR_ROT_0, capture_metadata->flip, GSR_SOURCE_COLOR_RGB, false);
 
-    if(self->params.record_cursor && self->cursor.visible) {
+    if(self->params.record_cursor && self->params.cursor->visible) {
         const vec2d scale = {
             self->capture_size.x == 0 ? 0 : (double)output_size.x / (double)self->capture_size.x,
             self->capture_size.y == 0 ? 0 : (double)output_size.y / (double)self->capture_size.y
         };
 
-        gsr_cursor_tick(&self->cursor, self->root_window);
-
         const vec2i cursor_pos = {
-            target_pos.x + (self->cursor.position.x - self->cursor.hotspot.x) * scale.x - self->capture_pos.x - self->params.region_position.x,
-            target_pos.y + (self->cursor.position.y - self->cursor.hotspot.y) * scale.y - self->capture_pos.y - self->params.region_position.y
+            target_pos.x + (self->params.cursor->position.x - self->params.cursor->hotspot.x) * scale.x - self->capture_pos.x - self->params.region_position.x,
+            target_pos.y + (self->params.cursor->position.y - self->params.cursor->hotspot.y) * scale.y - self->capture_pos.y - self->params.region_position.y
         };
 
         self->params.egl->glEnable(GL_SCISSOR_TEST);
         self->params.egl->glScissor(target_pos.x, target_pos.y, output_size.x, output_size.y);
 
-        gsr_color_conversion_draw(color_conversion, self->cursor.texture_id,
-            cursor_pos, (vec2i){self->cursor.size.x * scale.x, self->cursor.size.y * scale.y},
-            (vec2i){0, 0}, self->cursor.size, self->cursor.size,
-            GSR_ROT_0, GSR_SOURCE_COLOR_RGB, false);
+        gsr_color_conversion_draw(color_conversion, self->params.cursor->texture_id,
+            cursor_pos, (vec2i){self->params.cursor->size.x * scale.x, self->params.cursor->size.y * scale.y},
+            (vec2i){0, 0}, self->params.cursor->size, self->params.cursor->size,
+            GSR_ROT_0, capture_metadata->flip, GSR_SOURCE_COLOR_RGB, false);
 
         self->params.egl->glDisable(GL_SCISSOR_TEST);
     }
@@ -230,7 +213,6 @@ gsr_capture* gsr_capture_ximage_create(const gsr_capture_ximage_params *params)
 
     *cap = (gsr_capture) {
         .start = gsr_capture_ximage_start,
-        .on_event = gsr_capture_ximage_on_event,
         .tick = NULL,
         .should_stop = NULL,
         .capture = gsr_capture_ximage_capture,
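The centering expression removed above (`max_int(0, video_width / 2 - output_size.x / 2)`) is now hidden behind `gsr_capture_get_target_position`. A minimal standalone sketch of the letterbox math: `scale_keep_aspect_ratio_sketch` and `center_position` below are illustrative assumptions, not the repository's actual implementations.

```c
#include <assert.h>

typedef struct { int x, y; } vec2i;

static int max_int(int a, int b) { return a > b ? a : b; }

/* Shrink or grow |source| so it fits inside |target| without distortion
   (sketch of what scale_keep_aspect_ratio is assumed to do). */
static vec2i scale_keep_aspect_ratio_sketch(vec2i source, vec2i target) {
    if(source.x == 0 || source.y == 0)
        return (vec2i){0, 0};
    const double scale_x = (double)target.x / (double)source.x;
    const double scale_y = (double)target.y / (double)source.y;
    const double scale = scale_x < scale_y ? scale_x : scale_y;
    return (vec2i){(int)(source.x * scale + 0.5), (int)(source.y * scale + 0.5)};
}

/* Center the scaled image inside the video frame (the old inline expression). */
static vec2i center_position(vec2i output_size, vec2i video_size) {
    return (vec2i){
        max_int(0, video_size.x / 2 - output_size.x / 2),
        max_int(0, video_size.y / 2 - output_size.y / 2)
    };
}
```

For a 1920x1080 capture scaled into a 1280x1280 frame this yields a 1280x720 image with a 280-pixel vertical letterbox offset.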
@@ -472,7 +472,7 @@ static void gsr_color_conversion_swizzle_reset(gsr_color_conversion *self, gsr_s
     }
 }
 
-static void gsr_color_conversion_draw_graphics(gsr_color_conversion *self, unsigned int texture_id, bool external_texture, gsr_rotation rotation, float rotation_matrix[2][2], vec2i source_position, vec2i source_size, vec2i destination_pos, vec2i texture_size, vec2f scale, gsr_source_color source_color) {
+static void gsr_color_conversion_draw_graphics(gsr_color_conversion *self, unsigned int texture_id, bool external_texture, gsr_rotation rotation, gsr_flip flip, float rotation_matrix[2][2], vec2i source_position, vec2i source_size, vec2i destination_pos, vec2i texture_size, vec2f scale, gsr_source_color source_color) {
     if(source_size.x == 0 || source_size.y == 0)
         return;
 
@@ -508,7 +508,7 @@ static void gsr_color_conversion_draw_graphics(gsr_color_conversion *self, unsig
         (float)source_size.y / (texture_size.y == 0 ? 1.0f : (float)texture_size.y),
     };
 
-    const float vertices[] = {
+    float vertices[] = {
         -1.0f + 0.0f, -1.0f + 0.0f + size_norm.y, texture_pos_norm.x, texture_pos_norm.y + texture_size_norm.y,
         -1.0f + 0.0f, -1.0f + 0.0f, texture_pos_norm.x, texture_pos_norm.y,
         -1.0f + 0.0f + size_norm.x, -1.0f + 0.0f, texture_pos_norm.x + texture_size_norm.x, texture_pos_norm.y,
@@ -518,6 +518,20 @@ static void gsr_color_conversion_draw_graphics(gsr_color_conversion *self, unsig
         -1.0f + 0.0f + size_norm.x, -1.0f + 0.0f + size_norm.y, texture_pos_norm.x + texture_size_norm.x, texture_pos_norm.y + texture_size_norm.y
     };
 
+    if(flip & GSR_FLIP_HORIZONTAL) {
+        for(int i = 0; i < 6; ++i) {
+            const float prev_x = vertices[i*4 + 2];
+            vertices[i*4 + 2] = texture_pos_norm.x + texture_size_norm.x - prev_x;
+        }
+    }
+
+    if(flip & GSR_FLIP_VERTICAL) {
+        for(int i = 0; i < 6; ++i) {
+            const float prev_y = vertices[i*4 + 3];
+            vertices[i*4 + 3] = texture_pos_norm.y + texture_size_norm.y - prev_y;
+        }
+    }
+
     self->params.egl->glBindVertexArray(self->vertex_array_object_id);
     self->params.egl->glViewport(0, 0, dest_texture_size.x, dest_texture_size.y);
 
@@ -569,7 +583,7 @@ static void gsr_color_conversion_draw_graphics(gsr_color_conversion *self, unsig
     self->params.egl->glBindFramebuffer(GL_FRAMEBUFFER, 0);
 }
 
-void gsr_color_conversion_draw(gsr_color_conversion *self, unsigned int texture_id, vec2i destination_pos, vec2i destination_size, vec2i source_pos, vec2i source_size, vec2i texture_size, gsr_rotation rotation, gsr_source_color source_color, bool external_texture) {
+void gsr_color_conversion_draw(gsr_color_conversion *self, unsigned int texture_id, vec2i destination_pos, vec2i destination_size, vec2i source_pos, vec2i source_size, vec2i texture_size, gsr_rotation rotation, gsr_flip flip, gsr_source_color source_color, bool external_texture) {
     assert(!external_texture || self->params.load_external_image_shader);
     if(external_texture && !self->params.load_external_image_shader) {
         fprintf(stderr, "gsr error: gsr_color_conversion_draw: external texture not loaded\n");
@@ -590,7 +604,7 @@ void gsr_color_conversion_draw(gsr_color_conversion *self, unsigned int texture_
 
     source_position.x += source_pos.x;
     source_position.y += source_pos.y;
-    gsr_color_conversion_draw_graphics(self, texture_id, external_texture, rotation, rotation_matrix, source_position, source_size, destination_pos, texture_size, scale, source_color);
+    gsr_color_conversion_draw_graphics(self, texture_id, external_texture, rotation, flip, rotation_matrix, source_position, source_size, destination_pos, texture_size, scale, source_color);
 
     self->params.egl->glFlush();
     // TODO: Use the minimal barrier required
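The new flip handling mirrors the u/v texture coordinates around the sampled sub-rectangle, which is why `vertices` loses its `const`. A self-contained sketch of that loop; the `gsr_flip` bit values and the 6-vertex, 4-floats-per-vertex layout are assumptions inferred from the patch, not copied from the headers.

```c
#include <assert.h>
#include <math.h>

typedef enum {
    GSR_FLIP_NONE = 0,
    GSR_FLIP_HORIZONTAL = 1 << 0, /* assumed bit values */
    GSR_FLIP_VERTICAL = 1 << 1
} gsr_flip;

/* 6 vertices of 4 floats each (x, y, u, v). Flipping mirrors u and/or v
   around the sampled sub-rectangle [pos, pos + size], as in the patch. */
static void flip_texcoords(float vertices[24], gsr_flip flip,
                           float pos_u, float size_u, float pos_v, float size_v) {
    if(flip & GSR_FLIP_HORIZONTAL) {
        for(int i = 0; i < 6; ++i) {
            const float prev_u = vertices[i*4 + 2];
            vertices[i*4 + 2] = pos_u + size_u - prev_u;
        }
    }
    if(flip & GSR_FLIP_VERTICAL) {
        for(int i = 0; i < 6; ++i) {
            const float prev_v = vertices[i*4 + 3];
            vertices[i*4 + 3] = pos_v + size_v - prev_v;
        }
    }
}
```

Applying the same flip twice restores the original coordinates, so the operation is an involution on the vertex data.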
@@ -96,7 +96,7 @@ int gsr_cursor_init(gsr_cursor *self, gsr_egl *egl, Display *display) {
 }
 
 void gsr_cursor_deinit(gsr_cursor *self) {
-    if(!self->egl)
+    if(!self->display)
         return;
 
     if(self->texture_id) {
@@ -112,6 +112,9 @@ void gsr_cursor_deinit(gsr_cursor *self) {
 }
 
 bool gsr_cursor_on_event(gsr_cursor *self, XEvent *xev) {
+    if(!self->display)
+        return false;
+
     bool updated = false;
 
     if(xev->type == self->x_fixes_event_base + XFixesCursorNotify) {
@@ -131,6 +134,9 @@ bool gsr_cursor_on_event(gsr_cursor *self, XEvent *xev) {
 }
 
 void gsr_cursor_tick(gsr_cursor *self, Window relative_to) {
+    if(!self->display)
+        return;
+
     Window dummy_window;
     int dummy_i;
     unsigned int dummy_u;
src/damage.c
@@ -4,6 +4,7 @@
 
 #include <stdio.h>
+#include <string.h>
 #include <stdlib.h>
 #include <X11/extensions/Xdamage.h>
 #include <X11/extensions/Xrandr.h>
 
@@ -26,51 +27,115 @@ static bool xrandr_is_supported(Display *display) {
     return major_version > 1 || (major_version == 1 && minor_version >= 2);
 }
 
-bool gsr_damage_init(gsr_damage *self, gsr_egl *egl, bool track_cursor) {
+static int gsr_damage_get_tracked_monitor_index(const gsr_damage *self, const char *monitor_name) {
+    for(int i = 0; i < self->num_monitors_tracked; ++i) {
+        if(strcmp(self->monitors_tracked[i].monitor_name, monitor_name) == 0)
+            return i;
+    }
+    return -1;
+}
+
+static void add_monitor_callback(const gsr_monitor *monitor, void *userdata) {
+    gsr_damage *self = userdata;
+
+    const int damage_monitor_index = gsr_damage_get_tracked_monitor_index(self, monitor->name);
+    gsr_damage_monitor *damage_monitor = NULL;
+    if(damage_monitor_index != -1) {
+        damage_monitor = &self->monitors_tracked[damage_monitor_index];
+        damage_monitor->monitor = NULL;
+    }
+
+    if(self->num_monitors + 1 > GSR_DAMAGE_MAX_MONITORS) {
+        fprintf(stderr, "gsr error: gsr_damage_on_output_change: max monitors reached\n");
+        return;
+    }
+
+    char *monitor_name_copy = strdup(monitor->name);
+    if(!monitor_name_copy) {
+        fprintf(stderr, "gsr error: gsr_damage_on_output_change: strdup failed for monitor: %s\n", monitor->name);
+        return;
+    }
+
+    self->monitors[self->num_monitors] = *monitor;
+    self->monitors[self->num_monitors].name = monitor_name_copy;
+    ++self->num_monitors;
+
+    if(damage_monitor)
+        damage_monitor->monitor = &self->monitors[self->num_monitors - 1];
+}
+
+bool gsr_damage_init(gsr_damage *self, gsr_egl *egl, gsr_cursor *cursor, bool track_cursor) {
     memset(self, 0, sizeof(*self));
     self->egl = egl;
     self->track_cursor = track_cursor;
+    self->cursor = cursor;
 
     if(gsr_window_get_display_server(egl->window) != GSR_DISPLAY_SERVER_X11) {
-        fprintf(stderr, "gsr warning: gsr_damage_init: damage tracking is not supported on wayland\n");
+        fprintf(stderr, "gsr error: gsr_damage_init: damage tracking is not supported on wayland\n");
         return false;
     }
     self->display = gsr_window_get_display(egl->window);
 
     if(!XDamageQueryExtension(self->display, &self->damage_event, &self->damage_error)) {
-        fprintf(stderr, "gsr warning: gsr_damage_init: XDamage is not supported by your X11 server\n");
+        fprintf(stderr, "gsr error: gsr_damage_init: XDamage is not supported by your X11 server\n");
         gsr_damage_deinit(self);
         return false;
     }
 
     if(!XRRQueryExtension(self->display, &self->randr_event, &self->randr_error)) {
-        fprintf(stderr, "gsr warning: gsr_damage_init: XRandr is not supported by your X11 server\n");
+        fprintf(stderr, "gsr error: gsr_damage_init: XRandr is not supported by your X11 server\n");
         gsr_damage_deinit(self);
         return false;
     }
 
     if(!xrandr_is_supported(self->display)) {
-        fprintf(stderr, "gsr warning: gsr_damage_init: your X11 randr version is too old\n");
+        fprintf(stderr, "gsr error: gsr_damage_init: your X11 randr version is too old\n");
         gsr_damage_deinit(self);
         return false;
     }
 
-    if(self->track_cursor)
-        self->track_cursor = gsr_cursor_init(&self->cursor, self->egl, self->display) == 0;
-
     XRRSelectInput(self->display, DefaultRootWindow(self->display), RRScreenChangeNotifyMask | RRCrtcChangeNotifyMask | RROutputChangeNotifyMask);
 
+    self->monitor_damage = XDamageCreate(self->display, DefaultRootWindow(self->display), XDamageReportNonEmpty);
+    if(!self->monitor_damage) {
+        fprintf(stderr, "gsr error: gsr_damage_init: XDamageCreate failed\n");
+        gsr_damage_deinit(self);
+        return false;
+    }
+    XDamageSubtract(self->display, self->monitor_damage, None, None);
+
+    for_each_active_monitor_output_x11_not_cached(self->display, add_monitor_callback, self);
+
+    self->damaged = true;
     return true;
 }
+static void gsr_damage_deinit_monitors(gsr_damage *self) {
+    for(int i = 0; i < self->num_monitors; ++i) {
+        free((char*)self->monitors[i].name);
+    }
+    self->num_monitors = 0;
+}
+
 void gsr_damage_deinit(gsr_damage *self) {
-    if(self->damage) {
-        XDamageDestroy(self->display, self->damage);
-        self->damage = None;
+    if(self->monitor_damage) {
+        XDamageDestroy(self->display, self->monitor_damage);
+        self->monitor_damage = None;
     }
 
-    gsr_cursor_deinit(&self->cursor);
+    for(int i = 0; i < self->num_monitors_tracked; ++i) {
+        free(self->monitors_tracked[i].monitor_name);
+    }
+    self->num_monitors_tracked = 0;
+
+    for(int i = 0; i < self->num_windows_tracked; ++i) {
+        XSelectInput(self->display, self->windows_tracked[i].window_id, 0);
+        XDamageDestroy(self->display, self->windows_tracked[i].damage);
+    }
+    self->num_windows_tracked = 0;
+
+    self->all_monitors_tracked_refcount = 0;
+    gsr_damage_deinit_monitors(self);
 
     self->damage_event = 0;
     self->damage_error = 0;
@@ -79,139 +144,191 @@ void gsr_damage_deinit(gsr_damage *self) {
     self->randr_error = 0;
 }
-bool gsr_damage_set_target_window(gsr_damage *self, uint64_t window) {
-    if(self->damage_event == 0)
+static int gsr_damage_get_tracked_window_index(const gsr_damage *self, int64_t window) {
+    for(int i = 0; i < self->num_windows_tracked; ++i) {
+        if(self->windows_tracked[i].window_id == window)
+            return i;
+    }
+    return -1;
+}
+
+bool gsr_damage_start_tracking_window(gsr_damage *self, int64_t window) {
+    if(self->damage_event == 0 || window == None)
         return false;
 
-    if(window == self->window)
+    const int damage_window_index = gsr_damage_get_tracked_window_index(self, window);
+    if(damage_window_index != -1) {
+        ++self->windows_tracked[damage_window_index].refcount;
         return true;
+    }
 
-    if(self->damage) {
-        XDamageDestroy(self->display, self->damage);
-        self->damage = None;
-    }
-
-    if(self->window)
-        XSelectInput(self->display, self->window, 0);
-
-    self->window = window;
-    XSelectInput(self->display, self->window, StructureNotifyMask | ExposureMask);
+    if(self->num_windows_tracked + 1 > GSR_DAMAGE_MAX_TRACKED_TARGETS) {
+        fprintf(stderr, "gsr error: gsr_damage_start_tracking_window: max window targets reached\n");
+        return false;
+    }
 
     XWindowAttributes win_attr;
     win_attr.x = 0;
     win_attr.y = 0;
     win_attr.width = 0;
     win_attr.height = 0;
-    if(!XGetWindowAttributes(self->display, self->window, &win_attr))
-        fprintf(stderr, "gsr warning: gsr_damage_set_target_window failed: failed to get window attributes: %ld\n", (long)self->window);
+    if(!XGetWindowAttributes(self->display, window, &win_attr))
+        fprintf(stderr, "gsr warning: gsr_damage_start_tracking_window failed: failed to get window attributes: %ld\n", (long)window);
 
-    //self->window_pos.x = win_attr.x;
-    //self->window_pos.y = win_attr.y;
-
-    self->window_size.x = win_attr.width;
-    self->window_size.y = win_attr.height;
-
-    self->damage = XDamageCreate(self->display, window, XDamageReportNonEmpty);
-    if(self->damage) {
-        XDamageSubtract(self->display, self->damage, None, None);
-        self->damaged = true;
-        self->track_type = GSR_DAMAGE_TRACK_WINDOW;
-        return true;
-    } else {
-        fprintf(stderr, "gsr warning: gsr_damage_set_target_window: XDamageCreate failed\n");
-        self->track_type = GSR_DAMAGE_TRACK_NONE;
+    const Damage damage = XDamageCreate(self->display, window, XDamageReportNonEmpty);
+    if(!damage) {
+        fprintf(stderr, "gsr error: gsr_damage_start_tracking_window: XDamageCreate failed\n");
         return false;
     }
+    XDamageSubtract(self->display, damage, None, None);
+
+    XSelectInput(self->display, window, StructureNotifyMask | ExposureMask);
+
+    gsr_damage_window *damage_window = &self->windows_tracked[self->num_windows_tracked];
+    ++self->num_windows_tracked;
+
+    damage_window->window_id = window;
+    damage_window->window_pos.x = win_attr.x;
+    damage_window->window_pos.y = win_attr.y;
+    damage_window->window_size.x = win_attr.width;
+    damage_window->window_size.y = win_attr.height;
+    damage_window->damage = damage;
+    damage_window->refcount = 1;
+    return true;
 }
 
+void gsr_damage_stop_tracking_window(gsr_damage *self, int64_t window) {
+    if(window == None)
+        return;
+
+    const int damage_window_index = gsr_damage_get_tracked_window_index(self, window);
+    if(damage_window_index == -1)
+        return;
+
+    gsr_damage_window *damage_window = &self->windows_tracked[damage_window_index];
+    --damage_window->refcount;
+    if(damage_window->refcount <= 0) {
+        XSelectInput(self->display, damage_window->window_id, 0);
+        XDamageDestroy(self->display, damage_window->damage);
+        self->windows_tracked[damage_window_index] = self->windows_tracked[self->num_windows_tracked - 1];
+        --self->num_windows_tracked;
+    }
+}
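The start/stop pair above replaces the old single-target API with a refcounted table: X resources are only created on the 0 to 1 transition and destroyed on the 1 to 0 transition, and removal is a swap with the last element. A simplified model of that bookkeeping; the types and capacity below are illustrative stand-ins, not the repository's actual definitions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_TRACKED 16 /* stand-in for GSR_DAMAGE_MAX_TRACKED_TARGETS */

typedef struct {
    int64_t window_id;
    int refcount;
} tracked_window;

typedef struct {
    tracked_window windows[MAX_TRACKED];
    int num_windows;
} tracker;

static int find_window(const tracker *t, int64_t window_id) {
    for(int i = 0; i < t->num_windows; ++i) {
        if(t->windows[i].window_id == window_id)
            return i;
    }
    return -1;
}

static bool start_tracking(tracker *t, int64_t window_id) {
    const int index = find_window(t, window_id);
    if(index != -1) {
        /* already tracked: just bump the refcount */
        ++t->windows[index].refcount;
        return true;
    }
    if(t->num_windows + 1 > MAX_TRACKED)
        return false;
    /* 0 -> 1 transition: this is where the real code creates the XDamage */
    t->windows[t->num_windows++] = (tracked_window){ .window_id = window_id, .refcount = 1 };
    return true;
}

static void stop_tracking(tracker *t, int64_t window_id) {
    const int index = find_window(t, window_id);
    if(index == -1)
        return;
    if(--t->windows[index].refcount <= 0) {
        /* 1 -> 0 transition: real code destroys the XDamage; then swap-remove */
        t->windows[index] = t->windows[t->num_windows - 1];
        --t->num_windows;
    }
}
```

This lets multiple capture sources (the point of this commit) track the same window independently without double-creating or prematurely destroying the XDamage handle.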
-bool gsr_damage_set_target_monitor(gsr_damage *self, const char *monitor_name) {
+static gsr_monitor* gsr_damage_get_monitor_by_id(gsr_damage *self, RRCrtc id) {
+    for(int i = 0; i < self->num_monitors; ++i) {
+        if(self->monitors[i].monitor_identifier == id)
+            return &self->monitors[i];
+    }
+    return NULL;
+}
+
+static gsr_monitor* gsr_damage_get_monitor_by_name(gsr_damage *self, const char *name) {
+    for(int i = 0; i < self->num_monitors; ++i) {
+        if(strcmp(self->monitors[i].name, name) == 0)
+            return &self->monitors[i];
+    }
+    return NULL;
+}
+
+bool gsr_damage_start_tracking_monitor(gsr_damage *self, const char *monitor_name) {
     if(self->damage_event == 0)
         return false;
 
-    if(strcmp(self->monitor_name, monitor_name) == 0)
-        return true;
+    if(strcmp(monitor_name, "screen-direct") == 0 || strcmp(monitor_name, "screen-direct-force") == 0)
+        monitor_name = NULL;
 
-    if(self->damage) {
-        XDamageDestroy(self->display, self->damage);
-        self->damage = None;
+    if(!monitor_name) {
+        ++self->all_monitors_tracked_refcount;
+        return true;
     }
 
-    memset(&self->monitor, 0, sizeof(self->monitor));
-    if(strcmp(monitor_name, "screen-direct") != 0 && strcmp(monitor_name, "screen-direct-force") != 0) {
-        if(!get_monitor_by_name(self->egl, GSR_CONNECTION_X11, monitor_name, &self->monitor))
-            fprintf(stderr, "gsr warning: gsr_damage_set_target_monitor: failed to find monitor: %s\n", monitor_name);
+    const int damage_monitor_index = gsr_damage_get_tracked_monitor_index(self, monitor_name);
+    if(damage_monitor_index != -1) {
+        ++self->monitors_tracked[damage_monitor_index].refcount;
+        return true;
     }
 
-    if(self->window)
-        XSelectInput(self->display, self->window, 0);
-
-    self->window = DefaultRootWindow(self->display);
-    self->damage = XDamageCreate(self->display, self->window, XDamageReportNonEmpty);
-    if(self->damage) {
-        XDamageSubtract(self->display, self->damage, None, None);
-        self->damaged = true;
-        snprintf(self->monitor_name, sizeof(self->monitor_name), "%s", monitor_name);
-        self->track_type = GSR_DAMAGE_TRACK_MONITOR;
-        return true;
-    } else {
-        fprintf(stderr, "gsr warning: gsr_damage_set_target_monitor: XDamageCreate failed\n");
-        self->track_type = GSR_DAMAGE_TRACK_NONE;
+    if(self->num_monitors_tracked + 1 > GSR_DAMAGE_MAX_TRACKED_TARGETS) {
+        fprintf(stderr, "gsr error: gsr_damage_start_tracking_monitor: max monitor targets reached\n");
         return false;
     }
+
+    char *monitor_name_copy = strdup(monitor_name);
+    if(!monitor_name_copy) {
+        fprintf(stderr, "gsr error: gsr_damage_start_tracking_monitor: strdup failed for monitor: %s\n", monitor_name);
+        return false;
+    }
+
+    gsr_monitor *monitor = gsr_damage_get_monitor_by_name(self, monitor_name);
+    if(!monitor) {
+        fprintf(stderr, "gsr error: gsr_damage_start_tracking_monitor: failed to find monitor: %s\n", monitor_name);
+        free(monitor_name_copy);
+        return false;
+    }
+
+    gsr_damage_monitor *damage_monitor = &self->monitors_tracked[self->num_monitors_tracked];
+    ++self->num_monitors_tracked;
+
+    damage_monitor->monitor_name = monitor_name_copy;
+    damage_monitor->monitor = monitor;
+    damage_monitor->refcount = 1;
+    return true;
 }
 
+void gsr_damage_stop_tracking_monitor(gsr_damage *self, const char *monitor_name) {
+    if(strcmp(monitor_name, "screen-direct") == 0 || strcmp(monitor_name, "screen-direct-force") == 0)
+        monitor_name = NULL;
+
+    if(!monitor_name) {
+        --self->all_monitors_tracked_refcount;
+        if(self->all_monitors_tracked_refcount < 0)
+            self->all_monitors_tracked_refcount = 0;
+        return;
+    }
+
+    const int damage_monitor_index = gsr_damage_get_tracked_monitor_index(self, monitor_name);
+    if(damage_monitor_index == -1)
+        return;
+
+    gsr_damage_monitor *damage_monitor = &self->monitors_tracked[damage_monitor_index];
+    --damage_monitor->refcount;
+    if(damage_monitor->refcount <= 0) {
+        free(damage_monitor->monitor_name);
+        self->monitors_tracked[damage_monitor_index] = self->monitors_tracked[self->num_monitors_tracked - 1];
+        --self->num_monitors_tracked;
+    }
+}
 static void gsr_damage_on_crtc_change(gsr_damage *self, XEvent *xev) {
     const XRRCrtcChangeNotifyEvent *rr_crtc_change_event = (XRRCrtcChangeNotifyEvent*)xev;
-    if(rr_crtc_change_event->crtc == 0 || self->monitor.monitor_identifier == 0)
-        return;
-
-    if(rr_crtc_change_event->crtc != self->monitor.monitor_identifier)
+    if(rr_crtc_change_event->crtc == 0)
         return;
 
     if(rr_crtc_change_event->width == 0 || rr_crtc_change_event->height == 0)
         return;
 
-    if(rr_crtc_change_event->x != self->monitor.pos.x || rr_crtc_change_event->y != self->monitor.pos.y ||
-        (int)rr_crtc_change_event->width != self->monitor.size.x || (int)rr_crtc_change_event->height != self->monitor.size.y) {
-        self->monitor.pos.x = rr_crtc_change_event->x;
-        self->monitor.pos.y = rr_crtc_change_event->y;
+    gsr_monitor *monitor = gsr_damage_get_monitor_by_id(self, rr_crtc_change_event->crtc);
+    if(!monitor)
+        return;
 
-        self->monitor.size.x = rr_crtc_change_event->width;
-        self->monitor.size.y = rr_crtc_change_event->height;
+    if(rr_crtc_change_event->x != monitor->pos.x || rr_crtc_change_event->y != monitor->pos.y ||
+        (int)rr_crtc_change_event->width != monitor->size.x || (int)rr_crtc_change_event->height != monitor->size.y) {
+        monitor->pos.x = rr_crtc_change_event->x;
+        monitor->pos.y = rr_crtc_change_event->y;
+
+        monitor->size.x = rr_crtc_change_event->width;
+        monitor->size.y = rr_crtc_change_event->height;
     }
 }
 
 static void gsr_damage_on_output_change(gsr_damage *self, XEvent *xev) {
     const XRROutputChangeNotifyEvent *rr_output_change_event = (XRROutputChangeNotifyEvent*)xev;
-    if(!rr_output_change_event->output || self->monitor.monitor_identifier == 0)
+    if(!rr_output_change_event->output)
         return;
 
-    XRRScreenResources *screen_res = XRRGetScreenResources(self->display, DefaultRootWindow(self->display));
-    if(!screen_res)
-        return;
-
-    // TODO: What about scaled output? look at for_each_active_monitor_output_x11_not_cached
-    XRROutputInfo *out_info = XRRGetOutputInfo(self->display, screen_res, rr_output_change_event->output);
-    if(out_info && out_info->crtc && out_info->crtc == self->monitor.monitor_identifier) {
-        XRRCrtcInfo *crtc_info = XRRGetCrtcInfo(self->display, screen_res, out_info->crtc);
-        if(crtc_info && (crtc_info->x != self->monitor.pos.x || crtc_info->y != self->monitor.pos.y ||
-            (int)crtc_info->width != self->monitor.size.x || (int)crtc_info->height != self->monitor.size.y))
-        {
-            self->monitor.pos.x = crtc_info->x;
-            self->monitor.pos.y = crtc_info->y;
-
-            self->monitor.size.x = crtc_info->width;
-            self->monitor.size.y = crtc_info->height;
-        }
-
-        if(crtc_info)
-            XRRFreeCrtcInfo(crtc_info);
-    }
-
-    if(out_info)
-        XRRFreeOutputInfo(out_info);
-
-    XRRFreeScreenResources(screen_res);
+    gsr_damage_deinit_monitors(self);
+    for_each_active_monitor_output_x11_not_cached(self->display, add_monitor_callback, self);
 }
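On an output change the new code simply rebuilds the whole monitor array, and `add_monitor_callback` re-links each tracked entry to its fresh `gsr_monitor` slot by name, so no stale pointer survives the rebuild. A toy model of that relink step; every type and field here is a hypothetical stand-in for illustration only.

```c
#include <assert.h>
#include <string.h>

#define MAX_MONITORS 8

/* Hypothetical simplified monitor record (the real gsr_monitor has more fields). */
typedef struct { char name[32]; int width; } monitor;

typedef struct {
    monitor monitors[MAX_MONITORS];
    int num_monitors;
    const monitor *tracked;  /* single tracked entry, for brevity */
    char tracked_name[32];   /* tracking is by name, not by pointer */
} damage_model;

static void rebuild_monitors(damage_model *self, const monitor *new_monitors, int count) {
    self->tracked = NULL; /* invalidate before the rebuild, as the patch does */
    self->num_monitors = 0;
    for(int i = 0; i < count && i < MAX_MONITORS; ++i) {
        self->monitors[self->num_monitors++] = new_monitors[i];
        /* re-link the tracked entry if its monitor reappears in the new list */
        if(strcmp(new_monitors[i].name, self->tracked_name) == 0)
            self->tracked = &self->monitors[self->num_monitors - 1];
    }
}
```

If the tracked monitor is unplugged, `tracked` stays NULL and the damage loops above skip it via their `if(!monitor) continue;` guard.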
 static void gsr_damage_on_randr_event(gsr_damage *self, XEvent *xev) {

@@ -232,19 +349,38 @@ static void gsr_damage_on_damage_event(gsr_damage *self, XEvent *xev) {
     /* Subtract all the damage, repairing the window */
     XDamageSubtract(self->display, de->damage, None, region);
 
-    if(self->track_type == GSR_DAMAGE_TRACK_WINDOW || (self->track_type == GSR_DAMAGE_TRACK_MONITOR && self->monitor.connector_id == 0)) {
+    if(self->all_monitors_tracked_refcount > 0)
         self->damaged = true;
-    } else {
+
+    if(!self->damaged) {
+        for(int i = 0; i < self->num_windows_tracked; ++i) {
+            const gsr_damage_window *damage_window = &self->windows_tracked[i];
+            if(damage_window->window_id == (int64_t)de->drawable) {
+                self->damaged = true;
+                break;
+            }
+        }
+    }
+
+    if(!self->damaged) {
         int num_rectangles = 0;
         XRectangle *rectangles = XFixesFetchRegion(self->display, region, &num_rectangles);
         if(rectangles) {
-            const gsr_rectangle monitor_region = { self->monitor.pos, self->monitor.size };
             for(int i = 0; i < num_rectangles; ++i) {
                 const gsr_rectangle damage_region = { (vec2i){rectangles[i].x, rectangles[i].y}, (vec2i){rectangles[i].width, rectangles[i].height} };
-                self->damaged = rectangles_intersect(monitor_region, damage_region);
-                if(self->damaged)
-                    break;
+                for(int j = 0; j < self->num_monitors_tracked; ++j) {
+                    const gsr_monitor *monitor = self->monitors_tracked[j].monitor;
+                    if(!monitor)
+                        continue;
+
+                    const gsr_rectangle monitor_region = { monitor->pos, monitor->size };
+                    self->damaged = rectangles_intersect(monitor_region, damage_region);
+                    if(self->damaged)
+                        goto intersection_found;
+                }
             }
 
+            intersection_found:
             XFree(rectangles);
         }
     }
@@ -254,23 +390,35 @@ static void gsr_damage_on_damage_event(gsr_damage *self, XEvent *xev) {
 }
 
 static void gsr_damage_on_tick_cursor(gsr_damage *self) {
-    vec2i prev_cursor_pos = self->cursor.position;
-    gsr_cursor_tick(&self->cursor, self->window);
-    if(self->cursor.position.x != prev_cursor_pos.x || self->cursor.position.y != prev_cursor_pos.y) {
-        const gsr_rectangle cursor_region = { self->cursor.position, self->cursor.size };
-        switch(self->track_type) {
-            case GSR_DAMAGE_TRACK_NONE: {
+    if(self->cursor->position.x == self->cursor_pos.x && self->cursor->position.y == self->cursor_pos.y)
+        return;
+
+    self->cursor_pos = self->cursor->position;
+    const gsr_rectangle cursor_region = { self->cursor->position, self->cursor->size };
+
+    if(self->all_monitors_tracked_refcount > 0)
         self->damaged = true;
+
+    if(!self->damaged) {
+        for(int i = 0; i < self->num_windows_tracked; ++i) {
+            const gsr_damage_window *damage_window = &self->windows_tracked[i];
+            const gsr_rectangle window_region = { damage_window->window_pos, damage_window->window_size };
+            if(rectangles_intersect(window_region, cursor_region)) {
+                self->damaged = true;
                 break;
             }
-            case GSR_DAMAGE_TRACK_WINDOW: {
-                const gsr_rectangle window_region = { (vec2i){0, 0}, self->window_size };
-                self->damaged = self->window_size.x == 0 || rectangles_intersect(window_region, cursor_region);
-                break;
-            }
-            case GSR_DAMAGE_TRACK_MONITOR: {
-                const gsr_rectangle monitor_region = { self->monitor.pos, self->monitor.size };
-                self->damaged = self->monitor.monitor_identifier == 0 || rectangles_intersect(monitor_region, cursor_region);
-            }
         }
     }
 
+    if(!self->damaged) {
+        for(int i = 0; i < self->num_monitors_tracked; ++i) {
+            const gsr_monitor *monitor = self->monitors_tracked[i].monitor;
+            if(!monitor)
+                continue;
+
+            const gsr_rectangle monitor_region = { monitor->pos, monitor->size };
+            if(rectangles_intersect(monitor_region, cursor_region)) {
+                self->damaged = true;
+                break;
+            }
+        }
@@ -278,21 +426,24 @@ static void gsr_damage_on_tick_cursor(gsr_damage *self) {
|
||||
}
|
||||
|
||||
static void gsr_damage_on_window_configure_notify(gsr_damage *self, XEvent *xev) {
|
||||
if(xev->xconfigure.window != self->window)
|
||||
return;
|
||||
for(int i = 0; i < self->num_windows_tracked; ++i) {
|
||||
gsr_damage_window *damage_window = &self->windows_tracked[i];
|
||||
if(damage_window->window_id == (int64_t)xev->xconfigure.window) {
|
||||
damage_window->window_pos.x = xev->xconfigure.x;
|
||||
damage_window->window_pos.y = xev->xconfigure.y;
|
||||
|
||||
//self->window_pos.x = xev->xconfigure.x;
|
||||
//self->window_pos.y = xev->xconfigure.y;
|
||||
|
||||
self->window_size.x = xev->xconfigure.width;
|
||||
self->window_size.y = xev->xconfigure.height;
|
||||
damage_window->window_size.x = xev->xconfigure.width;
|
||||
damage_window->window_size.y = xev->xconfigure.height;
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
void gsr_damage_on_event(gsr_damage *self, XEvent *xev) {
|
||||
if(self->damage_event == 0 || self->track_type == GSR_DAMAGE_TRACK_NONE)
|
||||
if(self->damage_event == 0)
|
||||
return;
|
||||
|
||||
if(self->track_type == GSR_DAMAGE_TRACK_WINDOW && xev->type == ConfigureNotify)
|
||||
if(xev->type == ConfigureNotify)
|
||||
gsr_damage_on_window_configure_notify(self, xev);
|
||||
|
||||
if(self->randr_event) {
|
||||
@@ -305,21 +456,18 @@ void gsr_damage_on_event(gsr_damage *self, XEvent *xev) {
|
||||
|
||||
if(self->damage_event && xev->type == self->damage_event + XDamageNotify)
|
||||
gsr_damage_on_damage_event(self, xev);
|
||||
|
||||
if(self->track_cursor)
|
||||
gsr_cursor_on_event(&self->cursor, xev);
|
||||
}
|
||||
|
||||
void gsr_damage_tick(gsr_damage *self) {
|
||||
if(self->damage_event == 0 || self->track_type == GSR_DAMAGE_TRACK_NONE)
|
||||
if(self->damage_event == 0)
|
||||
return;
|
||||
|
||||
if(self->track_cursor && self->cursor.visible && !self->damaged)
|
||||
if(self->track_cursor && self->cursor->visible && !self->damaged)
|
||||
gsr_damage_on_tick_cursor(self);
|
||||
}
|
||||
|
||||
bool gsr_damage_is_damaged(gsr_damage *self) {
|
||||
return self->damage_event == 0 || !self->damage || self->damaged || self->track_type == GSR_DAMAGE_TRACK_NONE;
|
||||
return self->damage_event == 0 || self->damaged;
|
||||
}
|
||||
|
||||
void gsr_damage_clear(gsr_damage *self) {
|
||||
|
||||
src/main.cpp (983 changed lines): file diff suppressed because it is too large
@@ -259,12 +259,19 @@ fail:

static bool pa_sound_device_should_reconnect(pa_handle *p, double now, char *device_name, size_t device_name_size) {
    std::lock_guard<std::mutex> lock(p->reconnect_mutex);

    if(!p->reconnect && (!p->stream || !PA_STREAM_IS_GOOD(pa_stream_get_state(p->stream)))) {
        p->reconnect = true;
        p->reconnect_last_tried_seconds = now;
    }

    if(p->reconnect && now - p->reconnect_last_tried_seconds >= RECONNECT_TRY_TIMEOUT_SECONDS) {
        p->reconnect_last_tried_seconds = now;
        // TODO: Size check
        snprintf(device_name, device_name_size, "%s", p->device_name);
        return true;
    }

    return false;
}

@@ -292,6 +299,8 @@ static bool pa_sound_device_handle_reconnect(pa_handle *p, char *device_name, si
        return false;
    }

    pa_mainloop_iterate(p->mainloop, 0, NULL);

    std::lock_guard<std::mutex> lock(p->reconnect_mutex);
    p->reconnect = false;
    return true;
@@ -309,10 +318,11 @@ static int pa_sound_device_read(pa_handle *p, double timeout_seconds) {
    pa_usec_t latency = 0;
    int negative = 0;

    if(!pa_sound_device_handle_reconnect(p, device_name, sizeof(device_name), start_time))
    pa_mainloop_iterate(p->mainloop, 0, NULL);

    if(!pa_sound_device_handle_reconnect(p, device_name, sizeof(device_name), start_time) || !p->stream)
        goto fail;

    pa_mainloop_iterate(p->mainloop, 0, NULL);
    if(pa_stream_get_state(p->stream) != PA_STREAM_READY)
        goto fail;

src/utils.c (37 changed lines)
@@ -1,5 +1,6 @@
#include "../include/utils.h"
#include "../include/window/window.h"
#include "../include/capture/capture.h"

#include <time.h>
#include <string.h>
@@ -595,12 +596,44 @@ vec2i scale_keep_aspect_ratio(vec2i from, vec2i to) {
    return from;
}

vec2i gsr_capture_get_target_position(vec2i output_size, gsr_capture_metadata *capture_metadata) {
    vec2i target_pos = {0, 0};

    switch(capture_metadata->halign) {
        case GSR_CAPTURE_ALIGN_START:
            break;
        case GSR_CAPTURE_ALIGN_CENTER:
            target_pos.x = capture_metadata->video_size.x/2 - output_size.x/2;
            break;
        case GSR_CAPTURE_ALIGN_END:
            target_pos.x = capture_metadata->video_size.x - output_size.x;
            break;
    }

    switch(capture_metadata->valign) {
        case GSR_CAPTURE_ALIGN_START:
            break;
        case GSR_CAPTURE_ALIGN_CENTER:
            target_pos.y = capture_metadata->video_size.y/2 - output_size.y/2;
            break;
        case GSR_CAPTURE_ALIGN_END:
            target_pos.y = capture_metadata->video_size.y - output_size.y;
            break;
    }

    target_pos.x += capture_metadata->position.x;
    target_pos.y += capture_metadata->position.y;
    return target_pos;
}


unsigned int gl_create_texture(gsr_egl *egl, int width, int height, int internal_format, unsigned int format, int filter) {
    unsigned int texture_id = 0;
    egl->glGenTextures(1, &texture_id);
    egl->glBindTexture(GL_TEXTURE_2D, texture_id);
    //egl->glTexImage2D(GL_TEXTURE_2D, 0, internal_format, width, height, 0, format, GL_UNSIGNED_BYTE, NULL);
    egl->glTexStorage2D(GL_TEXTURE_2D, 1, internal_format, width, height);
    // TODO:
    egl->glTexImage2D(GL_TEXTURE_2D, 0, internal_format, width, height, 0, format, GL_UNSIGNED_BYTE, NULL);
    //egl->glTexStorage2D(GL_TEXTURE_2D, 1, internal_format, width, height);

    egl->glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, filter);
    egl->glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, filter);