Compare commits
6 commits — feature/bl...dynamic_re

| Author | SHA1 | Date |
|---|---|---|
| | a3e2fa7e07 | |
| | 23901bc8e4 | |
| | ba996c58f5 | |
| | 9e8eb77328 | |
| | 81d2cb70b8 | |
| | 6dc8db2d8c | |
.github/workflows/pylint.yml (2 changes, vendored)

```diff
@@ -7,7 +7,7 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: ["3.10", "3.11", "3.12"]
+        python-version: ["3.8", "3.9", "3.10"]
     steps:
       - uses: actions/checkout@v3
       - name: Set up Python ${{ matrix.python-version }}
```
.gitignore (6 changes, vendored)

```diff
@@ -1,8 +1,8 @@
/job_history.json
*.icloud
*.fcpxml
/uploads
*.pyc
/server_state.json
/.scheduler_prefs
*.db
/dist/
/build/
/.github/
```
.pylintrc (4 deletions; filename inferred from the pylint config content — it was lost in the capture)

```diff
@@ -1,4 +0,0 @@
-[MASTER]
-max-line-length = 120
-[MESSAGES CONTROL]
-disable = missing-docstring, invalid-name, import-error, logging-fstring-interpolation
```
LICENSE.txt (21 deletions)

```diff
@@ -1,21 +0,0 @@
-MIT License
-
-Copyright (c) 2024 Brett Williams
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
```
README.md (15 changes)

```diff
@@ -1,10 +1,19 @@
-# 🎬 Zordon - Render Management Tools
+# 🎬 Zordon - Render Management Tools 🎬
 
-Welcome to Zordon! It's a local network render farm manager, aiming to streamline and simplify the rendering process across multiple home computers.
+Welcome to Zordon! This is a hobby project written with fellow filmmakers in mind. It's a local network render farm manager, aiming to streamline and simplify the rendering process across multiple home computers.
 
 ## 📦 Installation
 
-Install the necessary dependencies: `pip3 install -r requirements.txt`
+Make sure to install the necessary dependencies: `pip3 install -r requirements.txt`
 
+## 🚀 How to Use
+
+Zordon has two main files: `start_server.py` and `start_client.py`.
+
+- **start_server.py**: Run this on any computer you want to render jobs. It manages the incoming job queue and kicks off the appropriate render jobs when ready.
+- **start_client.py**: Run this to administer your render servers. It lets you manage and submit jobs.
+
+When the server is running, the job queue can be accessed via a web browser on the server's hostname (default port is 8080). You can also access it via the GUI client or a simple view-only dashboard.
+
 ## 🎨 Supported Renderers
 
```
Server config (YAML; filename not preserved in the capture)

```diff
@@ -3,7 +3,7 @@ update_engines_on_launch: true
 max_content_path: 100000000
 server_log_level: info
 log_buffer_length: 250
-worker_process_timeout: 120
+subjob_connection_timeout: 120
 flask_log_level: error
 flask_debug_enable: false
 queue_eval_seconds: 1
```
main.py (1 change)

```diff
@@ -1,5 +1,4 @@
#!/usr/bin/env python3

from src import init

if __name__ == '__main__':
```
requirements.txt (37 deletions, 15 additions; filename inferred from the dependency list — it was lost in the capture)

```diff
@@ -1,37 +1,15 @@
-PyQt6>=6.6.1
-psutil>=5.9.8
-requests>=2.31.0
-Pillow>=10.2.0
-PyYAML>=6.0.1
-flask>=3.0.2
-tqdm>=4.66.2
-werkzeug>=3.0.1
-Pypubsub>=4.0.3
-zeroconf>=0.131.0
-SQLAlchemy>=2.0.25
-plyer>=2.1.0
-pytz>=2023.3.post1
-future>=0.18.3
-rich>=13.7.0
-pytest>=8.0.0
-numpy>=1.26.3
-setuptools>=69.0.3
-pandas>=2.2.0
-matplotlib>=3.8.2
-MarkupSafe>=2.1.4
-dmglib>=0.9.5; sys_platform == 'darwin'
-python-dateutil>=2.8.2
-certifi>=2023.11.17
-shiboken6>=6.6.1
-Pygments>=2.17.2
-cycler>=0.12.1
-contourpy>=1.2.0
-packaging>=23.2
-fonttools>=4.47.2
-Jinja2>=3.1.3
-pyparsing>=3.1.1
-kiwisolver>=1.4.5
-attrs>=23.2.0
-lxml>=5.1.0
-click>=8.1.7
-requests_toolbelt>=1.0.0
+requests==2.31.0
+psutil==5.9.6
+PyYAML==6.0.1
+Flask==3.0.0
+rich==13.6.0
+Werkzeug~=3.0.1
+json2html~=1.3.0
+SQLAlchemy~=2.0.15
+Pillow==10.1.0
+zeroconf==0.119.0
+Pypubsub~=4.0.3
+tqdm==4.66.1
+plyer==2.1.0
+PyQt6~=6.6.0
+PySide6~=6.6.0
```
[Binary image diffs: 20 image files changed; for each, the dimensions and file size are identical before and after (450 B to 6.1 KiB).]
start_server.py (filename inferred from the README above — it was lost in the capture)

```diff
@@ -1,5 +1,5 @@
 #!/usr/bin/env python3
-from init import run
+from src.api.api_server import start_server
 
 if __name__ == '__main__':
-    run(server_only=True)
+    start_server()
```
setup.py (22 deletions)

```diff
@@ -1,22 +0,0 @@
-"""
-This is a setup.py script generated by py2applet
-
-Usage:
-    python setup.py py2app
-"""
-import glob
-
-from setuptools import setup
-
-APP = ['main.py']
-DATA_FILES = [('config', glob.glob('config/*.*')),
-              ('resources', glob.glob('resources/*.*'))]
-OPTIONS = {}
-
-setup(
-    app=APP,
-    data_files=DATA_FILES,
-    options={'py2app': OPTIONS},
-    setup_requires=['py2app'],
-    name='Zordon'
-)
```
src/api/add_job_helpers.py (filename inferred from the imports below — it was lost in the capture; old and new lines appear in capture order, change markers were not preserved)

```diff
@@ -10,28 +10,14 @@ import requests
from tqdm import tqdm
from werkzeug.utils import secure_filename

from src.distributed_job_manager import DistributedJobManager
from src.engines.engine_manager import EngineManager
from src.render_queue import RenderQueue

logger = logging.getLogger()


def handle_uploaded_project_files(request, jobs_list, upload_directory):
    """
    Handles the uploaded project files.

    This method takes a request with a file, a list of jobs, and an upload directory. It checks if the file was uploaded
    directly, if it needs to be downloaded from a URL, or if it's already present on the local file system. It then
    moves the file to the appropriate directory and returns the local path to the file and its name.

    Args:
        request (Request): The request object containing the file.
        jobs_list (list): A list of jobs. The first job in the list is used to get the file's URL and local path.
        upload_directory (str): The directory where the file should be uploaded.

    Raises:
        ValueError: If no valid project paths are found.

    Returns:
        tuple: A tuple containing the local path to the loaded project file and its name.
    """
    # Initialize default values
    loaded_project_local_path = None
@@ -49,11 +35,12 @@ def handle_uploaded_project_files(request, jobs_list, upload_directory):
            raise ValueError(f"Error downloading file from URL: {project_url}")
    elif local_path and os.path.exists(local_path):
        referred_name = os.path.basename(local_path)
    else:
        raise ValueError("Cannot find any valid project paths")

    # Prepare the local filepath
    cleaned_path_name = jobs_list[0].get('name', os.path.splitext(referred_name)[0]).replace(' ', '-')
    cleaned_path_name = os.path.splitext(referred_name)[0].replace(' ', '_')
    job_dir = os.path.join(upload_directory, '-'.join(
        [datetime.now().strftime("%Y.%m.%d_%H.%M.%S"), renderer, cleaned_path_name]))
    os.makedirs(job_dir, exist_ok=True)
@@ -81,6 +68,7 @@ def download_project_from_url(project_url):
    # This nested function is to handle downloading from a URL
    logger.info(f"Downloading project from url: {project_url}")
    referred_name = os.path.basename(project_url)
    downloaded_file_url = None

    try:
        response = requests.get(project_url, stream=True)
@@ -107,21 +95,7 @@ def download_project_from_url(project_url):


def process_zipped_project(zip_path):
    """
    Processes a zipped project.

    This method takes a path to a zip file, extracts its contents, and returns the path to the extracted project file.
    If the zip file contains more than one project file or none, an error is raised.

    Args:
        zip_path (str): The path to the zip file.

    Raises:
        ValueError: If there's more than 1 project file or none in the zip file.

    Returns:
        str: The path to the main project file.
    """
    # Given a zip path, extract its content, and return the main project file path
    work_path = os.path.dirname(zip_path)

    try:
@@ -148,3 +122,58 @@ def process_zipped_project(zip_path):
        logger.error(f"Error processing zip file: {e}")
        raise ValueError(f"Error processing zip file: {e}")
    return extracted_project_path


def create_render_jobs(jobs_list, loaded_project_local_path, job_dir):
    results = []

    for job_data in jobs_list:
        try:
            # get new output path in output_dir
            output_path = job_data.get('output_path')
            if not output_path:
                loaded_project_filename = os.path.basename(loaded_project_local_path)
                output_filename = os.path.splitext(loaded_project_filename)[0]
            else:
                output_filename = os.path.basename(output_path)

            # Prepare output path
            output_dir = os.path.join(os.path.dirname(os.path.dirname(loaded_project_local_path)), 'output')
            output_path = os.path.join(output_dir, output_filename)
            os.makedirs(output_dir, exist_ok=True)
            logger.debug(f"New job output path: {output_path}")

            # create & configure jobs
            worker = EngineManager.create_worker(renderer=job_data['renderer'],
                                                 input_path=loaded_project_local_path,
                                                 output_path=output_path,
                                                 engine_version=job_data.get('engine_version'),
                                                 args=job_data.get('args', {}))
            worker.status = job_data.get("initial_status", worker.status)
            worker.parent = job_data.get("parent", worker.parent)
            worker.name = job_data.get("name", worker.name)
            worker.priority = int(job_data.get('priority', worker.priority))
            worker.start_frame = int(job_data.get("start_frame", worker.start_frame))
            worker.end_frame = int(job_data.get("end_frame", worker.end_frame))

            # determine if we can / should split the job
            if job_data.get("enable_split_jobs", False) and (worker.total_frames > 1) and not worker.parent:
                DistributedJobManager.split_into_subjobs(worker, job_data, loaded_project_local_path)
            else:
                logger.debug("Not splitting into subjobs")

            RenderQueue.add_to_render_queue(worker, force_start=job_data.get('force_start', False))
            if not worker.parent:
                from src.api.api_server import make_job_ready
                make_job_ready(worker.id)
            results.append(worker.json())
        except FileNotFoundError as e:
            err_msg = f"Cannot create job: {e}"
            logger.error(err_msg)
            results.append({'error': err_msg})
        except Exception as e:
            err_msg = f"Exception creating render job: {e}"
            logger.exception(err_msg)
            results.append({'error': err_msg})

    return results
```
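For orientation, here is a hypothetical payload for the `create_render_jobs` helper added above. The keys mirror what the function reads (`renderer`, `priority`, `start_frame`, `enable_split_jobs`, and so on), but the renderer name, paths, and values are illustrative assumptions, not taken from the repository.

```python
from src.api.add_job_helpers import create_render_jobs

# Sketch only: one jobs_list entry using the keys create_render_jobs reads.
# The renderer name, paths, and frame range below are hypothetical.
jobs_list = [{
    'renderer': 'blender',        # must name an engine EngineManager knows about
    'name': 'my-shot-042',
    'priority': 5,
    'start_frame': 1,
    'end_frame': 240,
    'enable_split_jobs': True,    # opt in to splitting across render servers
    'args': {'raw': '--verbose'},
}]

results = create_render_jobs(jobs_list,
                             loaded_project_local_path='/uploads/job/project/shot.blend',
                             job_dir='/uploads/job')
# Each entry in results is either worker.json() or {'error': message}.
```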
src/api/api_server.py (filename inferred from `from src.api.api_server import start_server` above — it was lost in the capture; old and new lines appear in capture order, change markers were not preserved)

```diff
@@ -1,65 +1,43 @@
#!/usr/bin/env python3
import concurrent.futures
import json
import logging
import multiprocessing
import os
import pathlib
import shutil
import socket
import ssl
import tempfile
import threading
import time
from datetime import datetime
from zipfile import ZipFile

import json2html
import psutil
import yaml
from flask import Flask, request, send_file, after_this_request, Response, redirect, url_for, abort
from sqlalchemy.orm.exc import DetachedInstanceError
from flask import Flask, request, render_template, send_file, after_this_request, Response, redirect, url_for, abort

from src.api.add_job_helpers import handle_uploaded_project_files, process_zipped_project
from src.api.preview_manager import PreviewManager
from src.api.add_job_helpers import handle_uploaded_project_files, process_zipped_project, create_render_jobs
from src.api.serverproxy_manager import ServerProxyManager
from src.distributed_job_manager import DistributedJobManager
from src.engines.core.base_worker import string_to_status, RenderStatus
from src.engines.engine_manager import EngineManager
from src.render_queue import RenderQueue, JobNotFoundError
from src.utilities.benchmark import cpu_benchmark, disk_io_benchmark
from src.utilities.config import Config
from src.utilities.misc_helper import system_safe_path, current_system_os, current_system_cpu, \
    current_system_os_version, num_to_alphanumeric
    current_system_os_version, config_dir
from src.utilities.server_helper import generate_thumbnail_for_job
from src.utilities.zeroconf_server import ZeroconfServer

logger = logging.getLogger()
server = Flask(__name__)
server = Flask(__name__, template_folder='web/templates', static_folder='web/static')
ssl._create_default_https_context = ssl._create_unverified_context  # disable SSL for downloads

categories = [RenderStatus.RUNNING, RenderStatus.ERROR, RenderStatus.NOT_STARTED, RenderStatus.SCHEDULED,
              RenderStatus.COMPLETED, RenderStatus.CANCELLED]


# -- Error Handlers --

@server.errorhandler(JobNotFoundError)
def handle_job_not_found(job_error):
    return str(job_error), 400


@server.errorhandler(DetachedInstanceError)
def handle_detached_instance(error):
    # logger.debug(f"detached instance: {error}")
    return "Unavailable", 503


@server.errorhandler(Exception)
def handle_general_error(general_error):
    err_msg = f"Server error: {general_error}"
    logger.error(err_msg)
    return err_msg, 500


# -- Jobs --


def sorted_jobs(all_jobs, sort_by_date=True):
    if not sort_by_date:
        sorted_job_list = []
```
```diff
@@ -74,67 +52,95 @@ def sorted_jobs(all_jobs, sort_by_date=True):
    return sorted_job_list


@server.route('/')
@server.route('/index')
def index():
    with open(system_safe_path(os.path.join(config_dir(), 'presets.yaml'))) as f:
        render_presets = yaml.load(f, Loader=yaml.FullLoader)

    return render_template('index.html', all_jobs=sorted_jobs(RenderQueue.all_jobs()),
                           hostname=server.config['HOSTNAME'], renderer_info=renderer_info(),
                           render_clients=[server.config['HOSTNAME']], preset_list=render_presets)


@server.get('/api/jobs')
def jobs_json():
    try:
        all_jobs = [x.json() for x in RenderQueue.all_jobs()]
        job_cache_int = int(json.dumps(all_jobs).__hash__())
        job_cache_token = num_to_alphanumeric(job_cache_int)
        return {'jobs': all_jobs, 'token': job_cache_token}
    except DetachedInstanceError as e:
        raise e
    except Exception as e:
        logger.error(f"Error fetching jobs_json: {e}")
        raise e


@server.get('/api/jobs_long_poll')
def long_polling_jobs():
    try:
        hash_token = request.args.get('token', None)
        start_time = time.time()
        while True:
            all_jobs = jobs_json()
            if all_jobs['token'] != hash_token:
                return all_jobs
            # Break after 30 seconds to avoid gateway timeout
            if time.time() - start_time > 30:
                return {}, 204
            time.sleep(1)
    except DetachedInstanceError as e:
        raise e
        all_jobs = [x.json() for x in RenderQueue.all_jobs()]
        job_cache_token = str(json.dumps(all_jobs).__hash__())

        if hash_token and hash_token == job_cache_token:
            return [], 204  # no need to update
        else:
            return {'jobs': all_jobs, 'token': job_cache_token}
    except Exception as e:
        logger.error(f"Error fetching long_polling_jobs: {e}")
        raise e
        logger.exception(f"Exception fetching all_jobs_cached: {e}")
        return [], 500


@server.route('/ui/job/<job_id>/full_details')
def job_detail(job_id):
    found_job = RenderQueue.job_with_id(job_id)
    table_html = json2html.json2html.convert(json=found_job.json(),
                                             table_attributes='class="table is-narrow is-striped is-fullwidth"')
    media_url = None
    if found_job.file_list() and found_job.status == RenderStatus.COMPLETED:
        media_basename = os.path.basename(found_job.file_list()[0])
        media_url = f"/api/job/{job_id}/file/{media_basename}"
    return render_template('details.html', detail_table=table_html, media_url=media_url,
                           hostname=server.config['HOSTNAME'], job_status=found_job.status.value.title(),
                           job=found_job, renderer_info=renderer_info())


@server.route('/api/job/<job_id>/thumbnail')
def job_thumbnail(job_id):
    big_thumb = request.args.get('size', False) == "big"
    video_ok = request.args.get('video_ok', False)
    found_job = RenderQueue.job_with_id(job_id, none_ok=True)
    if found_job:

    try:
        big_thumb = request.args.get('size', False) == "big"
        video_ok = request.args.get('video_ok', False)
        found_job = RenderQueue.job_with_id(job_id, none_ok=False)
        os.makedirs(server.config['THUMBS_FOLDER'], exist_ok=True)
        thumb_video_path = os.path.join(server.config['THUMBS_FOLDER'], found_job.id + '.mp4')
        thumb_image_path = os.path.join(server.config['THUMBS_FOLDER'], found_job.id + '.jpg')
        big_video_path = os.path.join(server.config['THUMBS_FOLDER'], found_job.id + '_big.mp4')
        big_image_path = os.path.join(server.config['THUMBS_FOLDER'], found_job.id + '_big.jpg')

        # trigger a thumbnail update - just in case
        PreviewManager.update_previews_for_job(found_job, wait_until_completion=True, timeout=60)
        previews = PreviewManager.get_previews_for_job(found_job)
        all_previews_list = previews.get('output', previews.get('input', []))
        # generate regular thumb if it doesn't exist
        if not os.path.exists(thumb_video_path) and not os.path.exists(thumb_video_path + '_IN-PROGRESS') and \
                found_job.status not in [RenderStatus.CANCELLED, RenderStatus.ERROR]:
            generate_thumbnail_for_job(found_job, thumb_video_path, thumb_image_path, max_width=240)

        video_previews = [x for x in all_previews_list if x['kind'] == 'video']
        image_previews = [x for x in all_previews_list if x['kind'] == 'image']
        filtered_list = video_previews if video_previews and video_ok else image_previews
        # generate big thumb if it doesn't exist
        if not os.path.exists(big_video_path) and not os.path.exists(big_image_path + '_IN-PROGRESS') and \
                found_job.status not in [RenderStatus.CANCELLED, RenderStatus.ERROR]:
            generate_thumbnail_for_job(found_job, big_video_path, big_image_path, max_width=800)

        # todo - sort by size or other metrics here
        if filtered_list:
            preview_to_send = filtered_list[0]
            mime_types = {'image': 'image/jpeg', 'video': 'video/mp4'}
            file_mime_type = mime_types.get(preview_to_send['kind'], 'unknown')
            return send_file(preview_to_send['filename'], mimetype=file_mime_type)
    except Exception as e:
        logger.error(f'Error getting thumbnail: {e}')
        return f'Error getting thumbnail: {e}', 500
    return "No thumbnail available", 404
        # generated videos
        if video_ok:
            if big_thumb and os.path.exists(big_video_path) and not os.path.exists(
                    big_video_path + '_IN-PROGRESS'):
                return send_file(big_video_path, mimetype="video/mp4")
            elif os.path.exists(thumb_video_path) and not os.path.exists(thumb_video_path + '_IN-PROGRESS'):
                return send_file(thumb_video_path, mimetype="video/mp4")

        # Generated thumbs
        if big_thumb and os.path.exists(big_image_path):
            return send_file(big_image_path, mimetype='image/jpeg')
        elif os.path.exists(thumb_image_path):
            return send_file(thumb_image_path, mimetype='image/jpeg')

        # Misc status icons
        if found_job.status == RenderStatus.RUNNING:
            return send_file('../web/static/images/gears.png', mimetype="image/png")
        elif found_job.status == RenderStatus.CANCELLED:
            return send_file('../web/static/images/cancelled.png', mimetype="image/png")
        elif found_job.status == RenderStatus.SCHEDULED:
            return send_file('../web/static/images/scheduled.png', mimetype="image/png")
        elif found_job.status == RenderStatus.NOT_STARTED:
            return send_file('../web/static/images/not_started.png', mimetype="image/png")
        # errors
        return send_file('../web/static/images/error.png', mimetype="image/png")


# Get job file routing
```
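The two long-poll variants above hinge on a hash token: the server returns 204 when the client's token still matches the current job-list hash, and a fresh `{'jobs': ..., 'token': ...}` payload otherwise. A minimal client loop against that contract might look like the following sketch (the endpoint shape is taken from the routes above; the generator wrapper is an assumption):

```python
import requests

def watch_jobs(hostname, port=8080):
    """Yield fresh job lists from /api/jobs_long_poll as they change (sketch)."""
    token = None
    while True:
        params = {'token': token} if token else {}
        resp = requests.get(f'http://{hostname}:{port}/api/jobs_long_poll',
                            params=params, timeout=40)
        if resp.status_code == 204:  # token still valid: nothing changed in ~30s
            continue
        data = resp.json()
        token = data['token']        # cache the token to suppress duplicate payloads
        yield data['jobs']

# Usage: for jobs in watch_jobs('render-box.local'): print(len(jobs))
```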
```diff
@@ -159,17 +165,22 @@ def filtered_jobs_json(status_val):
    return f'Cannot find jobs with status {status_val}', 400


@server.post('/api/job/<job_id>/send_subjob_update_notification')
def subjob_update_notification(job_id):
@server.post('/api/job/<job_id>/notify_parent_of_status_change')
def subjob_status_change(job_id):
    try:
        subjob_details = request.json
        logger.info(f"Subjob to job id: {job_id} is now {subjob_details['status']}")
        DistributedJobManager.handle_subjob_update_notification(RenderQueue.job_with_id(job_id), subjob_data=subjob_details)
        DistributedJobManager.handle_subjob_status_change(RenderQueue.job_with_id(job_id), subjob_data=subjob_details)
        return Response(status=200)
    except JobNotFoundError:
        return "Job not found", 404


@server.errorhandler(JobNotFoundError)
def handle_job_not_found(job_error):
    return f'Cannot find job with ID {job_error.job_id}', 400


@server.get('/api/job/<job_id>')
def get_job_status(job_id):
    return RenderQueue.job_with_id(job_id).json()
```
```diff
@@ -188,22 +199,25 @@
@server.get('/api/job/<job_id>/file_list')
def get_file_list(job_id):
    return [os.path.basename(x) for x in RenderQueue.job_with_id(job_id).file_list()]
    return RenderQueue.job_with_id(job_id).file_list()


@server.route('/api/job/<job_id>/download')
def download_file(job_id):
    requested_filename = request.args.get('filename')
    if not requested_filename:
        return 'Filename required', 400

    found_job = RenderQueue.job_with_id(job_id)
    for job_filename in found_job.file_list():
        if os.path.basename(job_filename).lower() == requested_filename.lower():
            return send_file(job_filename, as_attachment=True, )

    return f"File '{requested_filename}' not found", 404
@server.get('/api/job/<job_id>/make_ready')
def make_job_ready(job_id):
    try:
        found_job = RenderQueue.job_with_id(job_id)
        if found_job.status in [RenderStatus.CONFIGURING, RenderStatus.NOT_STARTED]:
            if found_job.children:
                for child_key in found_job.children.keys():
                    child_id = child_key.split('@')[0]
                    hostname = child_key.split('@')[-1]
                    ServerProxyManager.get_proxy_for_hostname(hostname).request_data(f'job/{child_id}/make_ready')
            found_job.status = RenderStatus.NOT_STARTED
            RenderQueue.save_state()
            return found_job.json(), 200
    except Exception as e:
        return "Error making job ready: {e}", 500
    return "Not valid command", 405


@server.route('/api/job/<job_id>/download_all')
```
```diff
@@ -213,10 +227,7 @@ def download_all(job_id):
    @after_this_request
    def clear_zip(response):
        if zip_filename and os.path.exists(zip_filename):
            try:
                os.remove(zip_filename)
            except Exception as e:
                logger.warning(f"Error removing zip file '{zip_filename}': {e}")
            os.remove(zip_filename)
        return response

    found_job = RenderQueue.job_with_id(job_id)
```
```diff
@@ -237,8 +248,8 @@
def presets():
    presets_path = system_safe_path('config/presets.yaml')
    with open(presets_path) as f:
        loaded_presets = yaml.load(f, Loader=yaml.FullLoader)
    return loaded_presets
        presets = yaml.load(f, Loader=yaml.FullLoader)
    return presets


@server.get('/api/full_status')
```
```diff
@@ -280,7 +291,18 @@ def add_job_handler():
        elif request.form.get('json', None):
            jobs_list = json.loads(request.form['json'])
        else:
            return "Invalid data", 400
            # Cleanup flat form data into nested structure
            form_dict = {k: v for k, v in dict(request.form).items() if v}
            args = {}
            arg_keys = [k for k in form_dict.keys() if '-arg_' in k]
            for server_hostname in arg_keys:
                if form_dict['renderer'] in server_hostname or 'AnyRenderer' in server_hostname:
                    cleaned_key = server_hostname.split('-arg_')[-1]
                    args[cleaned_key] = form_dict[server_hostname]
                form_dict.pop(server_hostname)
            args['raw'] = form_dict.get('raw_args', None)
            form_dict['args'] = args
            jobs_list = [form_dict]
    except Exception as e:
        err_msg = f"Error processing job data: {e}"
        logger.error(err_msg)
```
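To make the flat-form cleanup above concrete, here is the same transformation run on a hypothetical submission; the field names and values are invented for illustration:

```python
# Hypothetical flat form data, as Flask would deliver it:
form_dict = {'renderer': 'blender',
             'blender-arg_samples': '128',
             'AnyRenderer-arg_threads': '8',
             'raw_args': '--verbose'}

args = {}
for key in [k for k in form_dict if '-arg_' in k]:
    # keep args for the selected renderer, plus generic 'AnyRenderer' args
    if form_dict['renderer'] in key or 'AnyRenderer' in key:
        args[key.split('-arg_')[-1]] = form_dict.pop(key)
args['raw'] = form_dict.get('raw_args', None)
form_dict['args'] = args

# form_dict is now:
# {'renderer': 'blender', 'raw_args': '--verbose',
#  'args': {'samples': '128', 'threads': '8', 'raw': '--verbose'}}
```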
```diff
@@ -292,13 +314,16 @@ def add_job_handler():
        if loaded_project_local_path.lower().endswith('.zip'):
            loaded_project_local_path = process_zipped_project(loaded_project_local_path)

        results = []
        for new_job_data in jobs_list:
            new_job = DistributedJobManager.create_render_job(new_job_data, loaded_project_local_path)
            results.append(new_job.json())
        return results, 200
        results = create_render_jobs(jobs_list, loaded_project_local_path, referred_name)
        for response in results:
            if response.get('error', None):
                return results, 400
        if request.args.get('redirect', False):
            return redirect(url_for('index'))
        else:
            return results, 200
    except Exception as e:
        logger.exception(f"Error adding job: {e}")
        logger.exception(f"Unknown error adding job: {e}")
        return 'unknown error', 500
```
```diff
@@ -328,10 +353,14 @@ def delete_job(job_id):
        if server.config['UPLOAD_FOLDER'] in output_dir and os.path.exists(output_dir):
            shutil.rmtree(output_dir)

        try:
            PreviewManager.delete_previews_for_job(found_job)
        except Exception as e:
            logger.error(f"Error deleting previews for {found_job}: {e}")
        # Remove any thumbnails
        for filename in os.listdir(server.config['THUMBS_FOLDER']):
            if job_id in filename:
                os.remove(os.path.join(server.config['THUMBS_FOLDER'], filename))

        thumb_path = os.path.join(server.config['THUMBS_FOLDER'], found_job.id + '.mp4')
        if os.path.exists(thumb_path):
            os.remove(thumb_path)

        # See if we own the project_dir (i.e. was it uploaded)
        project_dir = os.path.dirname(os.path.dirname(found_job.input_path))
```
```diff
@@ -361,6 +390,13 @@ def clear_history():

@server.route('/api/status')
def status():
    renderer_data = {}
    for render_class in EngineManager.supported_engines():
        if EngineManager.all_versions_for_engine(render_class.name):  # only return renderers installed on host
            renderer_data[render_class.engine.name()] = \
                {'versions': EngineManager.all_versions_for_engine(render_class.engine.name()),
                 'is_available': RenderQueue.is_available_for_job(render_class.engine.name())
                 }

    # Get system info
    return {"timestamp": datetime.now().isoformat(),
```
```diff
@@ -374,6 +410,7 @@ def status():
            "memory_available": psutil.virtual_memory().available,
            "memory_percent": psutil.virtual_memory().percent,
            "job_counts": RenderQueue.job_counts(),
            "renderers": renderer_data,
            "hostname": server.config['HOSTNAME'],
            "port": server.config['PORT']
            }
```
```diff
@@ -381,53 +418,18 @@

@server.get('/api/renderer_info')
def renderer_info():
    response_type = request.args.get('response_type', 'standard')

    def process_engine(engine):
        try:
            # Get all installed versions of the engine
            installed_versions = EngineManager.all_versions_for_engine(engine.name())
            if installed_versions:
                # Use system-installed versions to avoid permission issues
                system_installed_versions = [x for x in installed_versions if x['type'] == 'system']
                install_path = system_installed_versions[0]['path'] if system_installed_versions else \
                    installed_versions[0]['path']

                en = engine(install_path)

                if response_type == 'full':  # Full dataset - Can be slow
                    return {
                        en.name(): {
                            'is_available': RenderQueue.is_available_for_job(en.name()),
                            'versions': installed_versions,
                            'supported_extensions': engine.supported_extensions(),
                            'supported_export_formats': en.get_output_formats(),
                            'system_info': en.system_info()
                        }
                    }
                elif response_type == 'standard':  # Simpler dataset to reduce response times
                    return {
                        en.name(): {
                            'is_available': RenderQueue.is_available_for_job(en.name()),
                            'versions': installed_versions,
                        }
                    }
                else:
                    raise AttributeError(f"Invalid response_type: {response_type}")
        except Exception as e:
            logger.error(f'Error fetching details for {engine.name()} renderer: {e}')
            return {}

    renderer_data = {}
    with concurrent.futures.ThreadPoolExecutor() as executor:
        futures = {executor.submit(process_engine, engine): engine.name() for engine in EngineManager.supported_engines()}

        for future in concurrent.futures.as_completed(futures):
            result = future.result()
            if result:
                renderer_data.update(result)

    for engine in EngineManager.supported_engines():
        # Get all installed versions of engine
        installed_versions = EngineManager.all_versions_for_engine(engine.name())
        if installed_versions:
            # fixme: using system versions only because downloaded versions may have permissions issues
            system_installed_versions = [x for x in installed_versions if x['type'] == 'system']
            install_path = system_installed_versions[0]['path'] if system_installed_versions else installed_versions[0]['path']
            renderer_data[engine.name()] = {'is_available': RenderQueue.is_available_for_job(engine.name()),
                                            'versions': installed_versions,
                                            'supported_extensions': engine.supported_extensions(),
                                            'supported_export_formats': engine(install_path).get_output_formats()}
    return renderer_data
```
```diff
@@ -497,35 +499,65 @@ def get_renderer_help(renderer):
    return f"Cannot find renderer '{renderer}'", 400


@server.get('/api/cpu_benchmark')
def get_cpu_benchmark_score():
    return str(cpu_benchmark(10))
@server.route('/upload')
def upload_file_page():
    return render_template('upload.html', supported_renderers=EngineManager.supported_engines())


@server.get('/api/disk_benchmark')
def get_disk_benchmark():
    results = disk_io_benchmark()
    return {'write_speed': results[0], 'read_speed': results[-1]}


def start_server(hostname=None):
def start_server():
    def eval_loop(delay_sec=1):
        while True:
            RenderQueue.evaluate_queue()
            time.sleep(delay_sec)

    # get hostname
    if not hostname:
        local_hostname = socket.gethostname()
        hostname = local_hostname + (".local" if not local_hostname.endswith(".local") else "")
    local_hostname = socket.gethostname()
    local_hostname = local_hostname + (".local" if not local_hostname.endswith(".local") else "")

    # load flask settings
    server.config['HOSTNAME'] = hostname
    server.config['HOSTNAME'] = local_hostname
    server.config['PORT'] = int(Config.port_number)
    server.config['UPLOAD_FOLDER'] = system_safe_path(os.path.expanduser(Config.upload_folder))
    server.config['THUMBS_FOLDER'] = system_safe_path(os.path.join(os.path.expanduser(Config.upload_folder), 'thumbs'))
    server.config['MAX_CONTENT_PATH'] = Config.max_content_path
    server.config['enable_split_jobs'] = Config.enable_split_jobs

    # Setup directory for saving engines to
    EngineManager.engines_path = system_safe_path(os.path.join(os.path.join(os.path.expanduser(Config.upload_folder),
                                                                            'engines')))
    os.makedirs(EngineManager.engines_path, exist_ok=True)

    # Debug info
    logger.debug(f"Upload directory: {server.config['UPLOAD_FOLDER']}")
    logger.debug(f"Thumbs directory: {server.config['THUMBS_FOLDER']}")
    logger.debug(f"Engines directory: {EngineManager.engines_path}")

    # disable most Flask logging
    flask_log = logging.getLogger('werkzeug')
    flask_log.setLevel(Config.flask_log_level.upper())

    logger.debug('Starting API server')
    server.run(host='0.0.0.0', port=server.config['PORT'], debug=Config.flask_debug_enable, use_reloader=False,
               threaded=True)
    # check for updates for render engines if config'd or on first launch
    if Config.update_engines_on_launch or not EngineManager.all_engines():
        EngineManager.update_all_engines()

    # Set up the RenderQueue object
    RenderQueue.load_state(database_directory=server.config['UPLOAD_FOLDER'])
    ServerProxyManager.subscribe_to_listener()
    DistributedJobManager.subscribe_to_listener()

    thread = threading.Thread(target=eval_loop, kwargs={'delay_sec': Config.queue_eval_seconds}, daemon=True)
    thread.start()

    logger.info(f"Starting Zordon Render Server - Hostname: '{server.config['HOSTNAME']}:'")
    ZeroconfServer.configure("_zordon._tcp.local.", server.config['HOSTNAME'], server.config['PORT'])
    ZeroconfServer.properties = {'system_cpu': current_system_cpu(), 'system_cpu_cores': multiprocessing.cpu_count(),
                                 'system_os': current_system_os(),
                                 'system_os_version': current_system_os_version()}
    ZeroconfServer.start()

    try:
        server.run(host='0.0.0.0', port=server.config['PORT'], debug=Config.flask_debug_enable,
                   use_reloader=False, threaded=True)
    finally:
        RenderQueue.save_state()
        ZeroconfServer.stop()
```
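The reworked start_server above follows a common shape: push periodic queue evaluation into a daemon thread, then block in Flask's server.run, saving state in a finally block on the way out. Distilled to its essentials as a sketch (not the project's exact code):

```python
import threading
import time

def serve(evaluate, run_blocking, save_state, delay_sec=1.0):
    """Run `evaluate` on a cadence in a daemon thread, block in `run_blocking`,
    and persist state on the way out (sketch of the pattern above)."""
    def eval_loop():
        while True:
            evaluate()              # stands in for RenderQueue.evaluate_queue()
            time.sleep(delay_sec)

    # daemon=True: the loop dies with the main process, no explicit shutdown needed
    threading.Thread(target=eval_loop, daemon=True).start()
    try:
        run_blocking()              # stands in for server.run(...)
    finally:
        save_state()                # stands in for RenderQueue.save_state()
```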
src/api/preview_manager.py (113 deletions — file removed; filename inferred from `from src.api.preview_manager import PreviewManager` above, it was lost in the capture)

```diff
@@ -1,113 +0,0 @@
-import logging
-import os
-import subprocess
-import threading
-from pathlib import Path
-
-from src.utilities.ffmpeg_helper import generate_thumbnail, save_first_frame
-
-logger = logging.getLogger()
-supported_video_formats = ['.mp4', '.mov', '.avi', '.mpg', '.mpeg', '.mxf', '.m4v', 'mkv']
-supported_image_formats = ['.jpg', '.png', '.exr', '.tif']
-
-
-class PreviewManager:
-
-    storage_path = None
-    _running_jobs = {}
-
-    @classmethod
-    def __generate_job_preview_worker(cls, job, replace_existing=False, max_width=320):
-
-        # Determine best source file to use for thumbs
-        job_file_list = job.file_list()
-        source_files = job_file_list if job_file_list else [job.input_path]
-        preview_label = "output" if job_file_list else "input"
-
-        # filter by type
-        found_image_files = [f for f in source_files if os.path.splitext(f)[-1].lower() in supported_image_formats]
-        found_video_files = [f for f in source_files if os.path.splitext(f)[-1].lower() in supported_video_formats]
-
-        # check if we even have any valid files to work from
-        if source_files and not found_video_files and not found_image_files:
-            logger.warning(f"No valid image or video files found in files from job: {job}")
-            return
-
-        os.makedirs(cls.storage_path, exist_ok=True)
-        base_path = os.path.join(cls.storage_path, f"{job.id}-{preview_label}-{max_width}")
-        preview_video_path = base_path + '.mp4'
-        preview_image_path = base_path + '.jpg'
-
-        if replace_existing:
-            for x in [preview_image_path, preview_video_path]:
-                try:
-                    os.remove(x)
-                except OSError:
-                    pass
-
-        # Generate image previews
-        if (found_video_files or found_image_files) and not os.path.exists(preview_image_path):
-            try:
-                path_of_source = found_image_files[-1] if found_image_files else found_video_files[-1]
-                logger.debug(f"Generating image preview for {path_of_source}")
-                save_first_frame(source_path=path_of_source, dest_path=preview_image_path, max_width=max_width)
-                logger.debug(f"Successfully created image preview for {path_of_source}")
-            except Exception as e:
-                logger.error(f"Error generating image preview for {job}: {e}")
-
-        # Generate video previews
-        if found_video_files and not os.path.exists(preview_video_path):
-            try:
-                path_of_source = found_video_files[0]
-                logger.debug(f"Generating video preview for {path_of_source}")
-                generate_thumbnail(source_path=path_of_source, dest_path=preview_video_path, max_width=max_width)
-                logger.debug(f"Successfully created video preview for {path_of_source}")
-            except subprocess.CalledProcessError as e:
-                logger.error(f"Error generating video preview for {job}: {e}")
-
-    @classmethod
-    def update_previews_for_job(cls, job, replace_existing=False, wait_until_completion=False, timeout=None):
-        job_thread = cls._running_jobs.get(job.id)
-        if job_thread and job_thread.is_alive():
-            logger.debug(f'Preview generation job already running for {job}')
-        else:
-            job_thread = threading.Thread(target=cls.__generate_job_preview_worker, args=(job, replace_existing,))
-            job_thread.start()
-            cls._running_jobs[job.id] = job_thread
-
-        if wait_until_completion:
-            job_thread.join(timeout=timeout)
-
-    @classmethod
-    def get_previews_for_job(cls, job):
-
-        results = {}
-        try:
-            directory_path = Path(cls.storage_path)
-            preview_files_for_job = [f for f in directory_path.iterdir() if f.is_file() and f.name.startswith(job.id)]
-
-            for preview_filename in preview_files_for_job:
-                try:
-                    pixel_width = str(preview_filename).split('-')[-1]
-                    preview_label = str(os.path.basename(preview_filename)).split('-')[1]
-                    extension = os.path.splitext(preview_filename)[-1].lower()
-                    kind = 'video' if extension in supported_video_formats else \
-                        'image' if extension in supported_image_formats else 'unknown'
-                    results[preview_label] = results.get(preview_label, [])
-                    results[preview_label].append({'filename': str(preview_filename), 'width': pixel_width, 'kind': kind})
-                except IndexError:  # ignore invalid filenames
-                    pass
-        except FileNotFoundError:
-            pass
-        return results
-
-    @classmethod
-    def delete_previews_for_job(cls, job):
-        all_previews = cls.get_previews_for_job(job)
-        flattened_list = [item for sublist in all_previews.values() for item in sublist]
-        for preview in flattened_list:
-            try:
-                logger.debug(f"Removing preview: {preview['filename']}")
-                os.remove(preview['filename'])
-            except OSError as e:
-                logger.error(f"Error removing preview '{preview.get('filename')}': {e}")
```
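For context on the file removed above: judging from its classmethods, PreviewManager was driven roughly as follows. The storage path here is a hypothetical location, and `job` stands for any queued render job object with `.id`, `.file_list()`, and `.input_path`:

```python
from src.api.preview_manager import PreviewManager  # module removed in this compare

def refresh_and_list_previews(job):
    """Sketch: drive PreviewManager for one queued job, per the methods above."""
    PreviewManager.storage_path = '/tmp/zordon_previews'   # hypothetical location
    PreviewManager.update_previews_for_job(job, replace_existing=False,
                                           wait_until_completion=True, timeout=60)
    previews = PreviewManager.get_previews_for_job(job)    # {'output': [...], 'input': [...]}
    for item in previews.get('output', previews.get('input', [])):
        print(item['filename'], item['kind'], item['width'])
```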
src/api/server_proxy.py (filename inferred from `from src.api.server_proxy import RenderServerProxy` below — it was lost in the capture; old and new lines appear in capture order, change markers were not preserved)

```diff
@@ -1,42 +1,29 @@
import json
import logging
import os
import socket
import threading
import time

import requests
from requests_toolbelt.multipart import MultipartEncoder, MultipartEncoderMonitor
from urllib.parse import urljoin

from src.utilities.misc_helper import is_localhost
from src.utilities.status_utils import RenderStatus
from src.utilities.zeroconf_server import ZeroconfServer

status_colors = {RenderStatus.ERROR: "red", RenderStatus.CANCELLED: 'orange1', RenderStatus.COMPLETED: 'green',
                 RenderStatus.NOT_STARTED: "yellow", RenderStatus.SCHEDULED: 'purple',
                 RenderStatus.RUNNING: 'cyan', RenderStatus.WAITING_FOR_SUBJOBS: 'blue'}

categories = [RenderStatus.RUNNING, RenderStatus.WAITING_FOR_SUBJOBS, RenderStatus.ERROR, RenderStatus.NOT_STARTED,
              RenderStatus.SCHEDULED, RenderStatus.COMPLETED, RenderStatus.CANCELLED, RenderStatus.UNDEFINED,
              RenderStatus.CONFIGURING]
              RenderStatus.SCHEDULED, RenderStatus.COMPLETED, RenderStatus.CANCELLED, RenderStatus.UNDEFINED]

logger = logging.getLogger()
OFFLINE_MAX = 4
LOOPBACK = '127.0.0.1'
OFFLINE_MAX = 2


class RenderServerProxy:
    """
    The ServerProxy class is responsible for interacting with a remote server.
    It provides methods to request data from the server and store the status of the server.

    Attributes:
        system_cpu (str): The CPU type of the system.
        system_cpu_count (int): The number of CPUs in the system.
        system_os (str): The operating system of the system.
        system_os_version (str): The version of the operating system.
    """

    def __init__(self, hostname, server_port="8080"):
        self.hostname = hostname
        self.port = server_port
```
```diff
@@ -47,7 +34,6 @@ class RenderServerProxy:
        self.__background_thread = None
        self.__offline_flags = 0
        self.update_cadence = 5
        self.is_localhost = bool(is_localhost(hostname))

        # Cache some basic server info
        self.system_cpu = None
@@ -55,9 +41,6 @@ class RenderServerProxy:
        self.system_os = None
        self.system_os_version = None

    def __repr__(self):
        return f"<RenderServerProxy - {self.hostname}>"

    def connect(self):
        return self.status()

@@ -65,7 +48,7 @@ class RenderServerProxy:
        if self.__update_in_background:
            return self.__offline_flags < OFFLINE_MAX
        else:
            return self.get_status() is not None
            return self.connect() is not None

    def status(self):
        if not self.is_online():
@@ -76,9 +59,8 @@ class RenderServerProxy:
    def request_data(self, payload, timeout=5):
        try:
            req = self.request(payload, timeout)
            if req.ok:
            if req.ok and req.status_code == 200:
                self.__offline_flags = 0
                if req.status_code == 200:
                    return req.json()
        except json.JSONDecodeError as e:
            logger.debug(f"JSON decode error: {e}")
@@ -90,18 +72,10 @@ class RenderServerProxy:
            self.__offline_flags = self.__offline_flags + 1
        except Exception as e:
            logger.exception(f"Uncaught exception: {e}")

        # If server unexpectedly drops off the network, stop background updates
        if self.__offline_flags > OFFLINE_MAX:
            try:
                self.stop_background_update()
            except KeyError:
                pass
        return None

    def request(self, payload, timeout=5):
        hostname = LOOPBACK if self.is_localhost else self.hostname
        return requests.get(f'http://{hostname}:{self.port}/api/{payload}', timeout=timeout)
        return requests.get(f'http://{self.hostname}:{self.port}/api/{payload}', timeout=timeout)

    def start_background_update(self):
        if self.__update_in_background:
```
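The availability logic above boils down to counting consecutive failed requests, resetting on any success, and reporting the server offline once the count reaches OFFLINE_MAX. That bookkeeping, isolated as a sketch:

```python
OFFLINE_MAX = 2  # value on the newer side of the hunk above (previously 4)

class OfflineTracker:
    """Counts consecutive failures; any success resets the counter (sketch)."""
    def __init__(self):
        self.offline_flags = 0

    def record(self, ok: bool) -> None:
        # Mirrors request_data: success zeroes the count, failure increments it
        self.offline_flags = 0 if ok else self.offline_flags + 1

    @property
    def is_online(self) -> bool:
        return self.offline_flags < OFFLINE_MAX
```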
```diff
@@ -109,11 +83,9 @@ class RenderServerProxy:
        self.__update_in_background = True

        def thread_worker():
            logger.debug(f'Starting background updates for {self.hostname}')
            while self.__update_in_background:
                self.__update_job_cache()
                time.sleep(self.update_cadence)
            logger.debug(f'Stopping background updates for {self.hostname}')

        self.__background_thread = threading.Thread(target=thread_worker)
        self.__background_thread.daemon = True
@@ -130,13 +102,8 @@ class RenderServerProxy:
        self.__update_job_cache(timeout, ignore_token)
        return self.__jobs_cache.copy() if self.__jobs_cache else None

    def __update_job_cache(self, timeout=40, ignore_token=False):
        if self.__offline_flags:  # if we're offline, don't bother with the long poll
            ignore_token = True

        url = f'jobs_long_poll?token={self.__jobs_cache_token}' if (self.__jobs_cache_token and
                                                                    not ignore_token) else 'jobs'
    def __update_job_cache(self, timeout=5, ignore_token=False):
        url = f'jobs?token={self.__jobs_cache_token}' if self.__jobs_cache_token and not ignore_token else 'jobs'
        status_result = self.request_data(url, timeout=timeout)
        if status_result is not None:
            sorted_jobs = []
@@ -148,7 +115,8 @@ class RenderServerProxy:
            self.__jobs_cache_token = status_result['token']

    def get_data(self, timeout=5):
        return self.request_data('full_status', timeout=timeout)
        all_data = self.request_data('full_status', timeout=timeout)
        return all_data

    def cancel_job(self, job_id, confirm=False):
        return self.request_data(f'job/{job_id}/cancel?confirm={confirm}')
@@ -158,7 +126,7 @@ class RenderServerProxy:

    def get_status(self):
        status = self.request_data('status')
        if status and not self.system_cpu:
        if not self.system_cpu:
            self.system_cpu = status['system_cpu']
            self.system_cpu_count = status['cpu_count']
            self.system_os = status['system_os']
```
```diff
@@ -171,117 +139,53 @@ class RenderServerProxy:
    def get_all_engines(self):
        return self.request_data('all_engines')

    def send_subjob_update_notification(self, parent_id, subjob):
        """
        Notifies the parent job of an update in a subjob.

        Args:
            parent_id (str): The ID of the parent job.
            subjob (Job): The subjob that has updated.

        Returns:
            Response: The response from the server.
        """
        hostname = LOOPBACK if self.is_localhost else self.hostname
        return requests.post(f'http://{hostname}:{self.port}/api/job/{parent_id}/send_subjob_update_notification',
    def notify_parent_of_status_change(self, parent_id, subjob):
        return requests.post(f'http://{self.hostname}:{self.port}/api/job/{parent_id}/notify_parent_of_status_change',
                             json=subjob.json())

    def post_job_to_server(self, file_path, job_list, callback=None):
        """
        Posts a job to the server.

        Args:
            file_path (str): The path to the file to upload.
            job_list (list): A list of jobs to post.
            callback (function, optional): A callback function to call during the upload. Defaults to None.
        # bypass uploading file if posting to localhost
        if is_localhost(self.hostname):
            jobs_with_path = [{**item, "local_path": file_path} for item in job_list]
            return requests.post(f'http://{self.hostname}:{self.port}/api/add_job', data=json.dumps(jobs_with_path),
                                 headers={'Content-Type': 'application/json'})

        Returns:
            Response: The response from the server.
        """
        try:
            # Check if file exists
            if not os.path.exists(file_path):
                raise FileNotFoundError(f"File not found: {file_path}")
        # Prepare the form data
        encoder = MultipartEncoder({
            'file': (os.path.basename(file_path), open(file_path, 'rb'), 'application/octet-stream'),
            'json': (None, json.dumps(job_list), 'application/json'),
        })

            # Bypass uploading file if posting to localhost
            if self.is_localhost:
                jobs_with_path = [{'local_path': file_path, **item} for item in job_list]
                job_data = json.dumps(jobs_with_path)
                url = urljoin(f'http://{LOOPBACK}:{self.port}', '/api/add_job')
                headers = {'Content-Type': 'application/json'}
                return requests.post(url, data=job_data, headers=headers)
        # Create a monitor that will track the upload progress
        if callback:
            monitor = MultipartEncoderMonitor(encoder, callback(encoder))
        else:
            monitor = MultipartEncoderMonitor(encoder)

            # Prepare the form data for remote host
            with open(file_path, 'rb') as file:
                encoder = MultipartEncoder({
                    'file': (os.path.basename(file_path), file, 'application/octet-stream'),
                    'json': (None, json.dumps(job_list), 'application/json'),
                })
        # Send the request
        headers = {'Content-Type': monitor.content_type}
        return requests.post(f'http://{self.hostname}:{self.port}/api/add_job', data=monitor, headers=headers)

                # Create a monitor that will track the upload progress
                monitor = MultipartEncoderMonitor(encoder, callback) if callback else MultipartEncoderMonitor(encoder)
                headers = {'Content-Type': monitor.content_type}
                url = urljoin(f'http://{self.hostname}:{self.port}', '/api/add_job')

                # Send the request with proper resource management
                with requests.post(url, data=monitor, headers=headers) as response:
                    return response

        except requests.ConnectionError as e:
            logger.error(f"Connection error: {e}")
        except Exception as e:
            logger.error(f"An error occurred: {e}")

    def get_job_files_list(self, job_id):
        return self.request_data(f"job/{job_id}/file_list")

    def download_all_job_files(self, job_id, save_path):
        hostname = LOOPBACK if self.is_localhost else self.hostname
        url = f"http://{hostname}:{self.port}/api/job/{job_id}/download_all"
        return self.__download_file_from_url(url, output_filepath=save_path)

    def download_job_file(self, job_id, job_filename, save_path):
        hostname = LOOPBACK if self.is_localhost else self.hostname
        url = f"http://{hostname}:{self.port}/api/job/{job_id}/download?filename={job_filename}"
        return self.__download_file_from_url(url, output_filepath=save_path)
    def get_job_files(self, job_id, save_path):
        url = f"http://{self.hostname}:{self.port}/api/job/{job_id}/download_all"
        return self.download_file(url, filename=save_path)

    @staticmethod
    def __download_file_from_url(url, output_filepath):
    def download_file(url, filename):
        with requests.get(url, stream=True) as r:
            r.raise_for_status()
            with open(output_filepath, 'wb') as f:
            with open(filename, 'wb') as f:
                for chunk in r.iter_content(chunk_size=8192):
                    f.write(chunk)
        return output_filepath
        return filename

    # --- Renderer --- #

    def get_renderer_info(self, response_type='standard', timeout=5):
        """
        Fetches renderer information from the server.

        Args:
            response_type (str, optional): Returns standard or full version of renderer info
            timeout (int, optional): The number of seconds to wait for a response from the server. Defaults to 5.

        Returns:
            dict: A dictionary containing the renderer information.
        """
        all_data = self.request_data(f"renderer_info?response_type={response_type}", timeout=timeout)
    def get_renderer_info(self, timeout=5):
        all_data = self.request_data(f'renderer_info', timeout=timeout)
        return all_data

    def delete_engine(self, engine, version, system_cpu=None):
        """
        Sends a request to the server to delete a specific engine.

        Args:
            engine (str): The name of the engine to delete.
            version (str): The version of the engine to delete.
            system_cpu (str, optional): The system CPU type. Defaults to None.

        Returns:
            Response: The response from the server.
        """
        form_data = {'engine': engine, 'version': version, 'system_cpu': system_cpu}
        hostname = LOOPBACK if self.is_localhost else self.hostname
        return requests.post(f'http://{hostname}:{self.port}/api/delete_engine', json=form_data)
        return requests.post(f'http://{self.hostname}:{self.port}/api/delete_engine', json=form_data)
```
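Both versions of post_job_to_server above stream the upload through requests_toolbelt's MultipartEncoderMonitor, which invokes the progress callback with the monitor itself (as in the newer side of the hunk). A plausible call site, with hostname and job fields invented for illustration:

```python
from src.api.server_proxy import RenderServerProxy

def on_progress(monitor):
    # requests_toolbelt passes the monitor; bytes_read grows as the upload proceeds
    print(f"uploaded {monitor.bytes_read} / {monitor.len} bytes")

proxy = RenderServerProxy('render-box.local')          # hostname is hypothetical
job_list = [{'renderer': 'blender', 'priority': 5}]    # illustrative job fields
response = proxy.post_job_to_server('/path/to/project.zip', job_list,
                                    callback=on_progress)
```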
src/api/serverproxy_manager.py (filename inferred from `from src.api.serverproxy_manager import ServerProxyManager` above — it was lost in the capture; old and new lines appear in capture order, change markers were not preserved)

```diff
@@ -17,19 +17,19 @@ class ServerProxyManager:
        pub.subscribe(cls.__zeroconf_state_change, 'zeroconf_state_change')

    @classmethod
    def __zeroconf_state_change(cls, hostname, state_change):
    def __zeroconf_state_change(cls, hostname, state_change, info):
        if state_change == ServiceStateChange.Added or state_change == ServiceStateChange.Updated:
            cls.get_proxy_for_hostname(hostname)
        else:
            cls.get_proxy_for_hostname(hostname).stop_background_update()
            cls.server_proxys.pop(hostname)

    @classmethod
    def get_proxy_for_hostname(cls, hostname):
        found_proxy = cls.server_proxys.get(hostname)
        if hostname and not found_proxy:
        if not found_proxy:
            new_proxy = RenderServerProxy(hostname)
            new_proxy.start_background_update()
            cls.server_proxys[hostname] = new_proxy
            found_proxy = new_proxy
        return found_proxy
```
src/distributed_job_manager.py (filename inferred from `from src.distributed_job_manager import DistributedJobManager` above — it was lost in the capture; old and new lines appear in capture order, change markers were not preserved)

```diff
@@ -1,20 +1,14 @@
import logging
import os
import socket
import threading
import time
import zipfile
from concurrent.futures import ThreadPoolExecutor

import requests
from plyer import notification
from pubsub import pub

from src.api.preview_manager import PreviewManager
from src.api.server_proxy import RenderServerProxy
from src.engines.engine_manager import EngineManager
from src.render_queue import RenderQueue
from src.utilities.config import Config
from src.utilities.misc_helper import get_file_size_human
from src.utilities.status_utils import RenderStatus, string_to_status
from src.utilities.zeroconf_server import ZeroconfServer
```
@@ -34,43 +28,6 @@ class DistributedJobManager:
|
||||
This should be called once, typically during the initialization phase.
|
||||
"""
|
||||
pub.subscribe(cls.__local_job_status_changed, 'status_change')
|
||||
pub.subscribe(cls.__local_job_frame_complete, 'frame_complete')
|
||||
|
||||
@classmethod
|
||||
def __local_job_frame_complete(cls, job_id, frame_number, update_interval=5):
|
||||
|
||||
"""
|
||||
Responds to the 'frame_complete' pubsub message for local jobs.
|
||||
|
||||
Parameters:
|
||||
job_id (str): The ID of the job that has changed status.
|
||||
old_status (str): The previous status of the job.
|
||||
new_status (str): The new (current) status of the job.
|
||||
|
||||
Note: Do not call directly. Instead, call via the 'frame_complete' pubsub message.
|
||||
"""
|
||||
|
||||
render_job = RenderQueue.job_with_id(job_id, none_ok=True)
|
||||
if not render_job: # ignore jobs not in the queue
|
||||
return
|
||||
|
||||
logger.debug(f"Job {job_id} has completed frame #{frame_number}")
|
||||
replace_existing_previews = (frame_number % update_interval) == 0
|
||||
cls.__job_update_shared(render_job, replace_existing_previews)
|
||||
|
||||
@classmethod
|
||||
def __job_update_shared(cls, render_job, replace_existing_previews=False):
|
||||
# update previews
|
||||
PreviewManager.update_previews_for_job(job=render_job, replace_existing=replace_existing_previews)
|
||||
|
||||
# notify parent to allow individual frames to be copied instead of waiting until the end
|
||||
if render_job.parent:
|
||||
parent_id, parent_hostname = render_job.parent.split('@')[0], render_job.parent.split('@')[-1]
|
||||
try:
|
||||
logger.debug(f'Job {render_job.id} updating parent {parent_id}@{parent_hostname}')
|
||||
RenderServerProxy(parent_hostname).send_subjob_update_notification(parent_id, render_job)
|
||||
except Exception as e:
|
||||
logger.error(f"Error notifying parent {parent_hostname} about update in subjob {render_job.id}: {e}")
|
||||
|
||||
@classmethod
|
||||
def __local_job_status_changed(cls, job_id, old_status, new_status):
|
||||
@@ -91,34 +48,34 @@ class DistributedJobManager:
            return

        logger.debug(f"Job {job_id} status change: {old_status} -> {new_status}")
        if render_job.parent:  # If local job is a subjob from a remote server
            parent_id, hostname = render_job.parent.split('@')[0], render_job.parent.split('@')[-1]
            RenderServerProxy(hostname).notify_parent_of_status_change(parent_id=parent_id, subjob=render_job)

        cls.__job_update_shared(render_job, replace_existing_previews=(render_job.status == RenderStatus.COMPLETED))

        # Handle children
        if render_job.children:
            if new_status in [RenderStatus.CANCELLED, RenderStatus.ERROR]:  # Cancel children if necessary
                for child in render_job.children:
                    child_id, child_hostname = child.split('@')
                    RenderServerProxy(child_hostname).cancel_job(child_id, confirm=True)
        # handle cancelling all the children
        elif render_job.children and new_status in [RenderStatus.CANCELLED, RenderStatus.ERROR]:
            for child in render_job.children:
                child_id, hostname = child.split('@')
                RenderServerProxy(hostname).cancel_job(child_id, confirm=True)

        # UI Notifications
        try:
            if new_status == RenderStatus.COMPLETED:
                logger.debug("Show render complete notification")
                logger.debug("show render complete notification")
                notification.notify(
                    title='Render Job Complete',
                    message=f'{render_job.name} completed successfully',
                    timeout=10  # Display time in seconds
                )
            elif new_status == RenderStatus.ERROR:
                logger.debug("Show render error notification")
                logger.debug("show render complete notification")
                notification.notify(
                    title='Render Job Failed',
                    message=f'{render_job.name} failed rendering',
                    timeout=10  # Display time in seconds
                )
            elif new_status == RenderStatus.RUNNING:
                logger.debug("Show render started notification")
                logger.debug("show render complete notification")
                notification.notify(
                    title='Render Job Started',
                    message=f'{render_job.name} started rendering',
@@ -127,115 +84,42 @@ class DistributedJobManager:
        except Exception as e:
            logger.debug(f"Unable to show UI notification: {e}")

    # --------------------------------------------
    # Create Job
    # --------------------------------------------

    @classmethod
    def create_render_job(cls, job_data, loaded_project_local_path):
    def handle_subjob_status_change(cls, local_job, subjob_data):
        """
        Creates render jobs.
        Responds to a status change from a remote subjob and triggers the creation or modification of subjobs as needed.

        This method takes job data and a local path to a loaded project. It creates and returns a new render job.

        Args:
            job_data (dict): Job data.
            loaded_project_local_path (str): The local path to the loaded project.
        Parameters:
            local_job (BaseRenderWorker): The local parent job worker.
            subjob_data (dict): Subjob data sent from the remote server.

        Returns:
            worker: The created job worker.
        """

        # get new output path in output_dir
        output_path = job_data.get('output_path')
        if not output_path:
            loaded_project_filename = os.path.basename(loaded_project_local_path)
            output_filename = os.path.splitext(loaded_project_filename)[0]
        else:
            output_filename = os.path.basename(output_path)

        # Prepare output path
        output_dir = os.path.join(os.path.dirname(os.path.dirname(loaded_project_local_path)), 'output')
        output_path = os.path.join(output_dir, output_filename)
        os.makedirs(output_dir, exist_ok=True)
        logger.debug(f"New job output path: {output_path}")

        # create & configure jobs
        worker = EngineManager.create_worker(renderer=job_data['renderer'],
                                             input_path=loaded_project_local_path,
                                             output_path=output_path,
                                             engine_version=job_data.get('engine_version'),
                                             args=job_data.get('args', {}),
                                             parent=job_data.get('parent'),
                                             name=job_data.get('name'))
        worker.status = job_data.get("initial_status", worker.status)  # todo: is this necessary?
        worker.priority = int(job_data.get('priority', worker.priority))
        worker.start_frame = int(job_data.get("start_frame", worker.start_frame))
        worker.end_frame = int(job_data.get("end_frame", worker.end_frame))
        worker.watchdog_timeout = Config.worker_process_timeout
        worker.hostname = socket.gethostname()

        # determine if we can / should split the job
        if job_data.get("enable_split_jobs", False) and (worker.total_frames > 1) and not worker.parent:
            cls.split_into_subjobs_async(worker, job_data, loaded_project_local_path)
        else:
            logger.debug("Not splitting into subjobs")

        RenderQueue.add_to_render_queue(worker, force_start=job_data.get('force_start', False))
        PreviewManager.update_previews_for_job(worker)

        return worker
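
    # Hedged sketch of a minimal job_data payload accepted by create_render_job();
    # only keys read above are used, and every value here is illustrative:
    #
    #     job_data = {'renderer': 'blender', 'name': 'shot_010', 'priority': 2,
    #                 'start_frame': 1, 'end_frame': 240, 'enable_split_jobs': True}
    #     worker = DistributedJobManager.create_render_job(job_data, '/tmp/uploads/shot_010.blend')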

    # --------------------------------------------
    # Handling Subjobs
    # --------------------------------------------

    @classmethod
    def handle_subjob_update_notification(cls, local_job, subjob_data):
        """
        Responds to an update notification from a remote subjob; the host then requests any new files from the subjob.

        Args:
            local_job (BaseRenderWorker): The local parent job worker.
            subjob_data (dict): Subjob data sent from the remote server.

        Returns:
            None
        """

        subjob_status = string_to_status(subjob_data['status'])
        subjob_id = subjob_data['id']
        subjob_hostname = subjob_data['hostname']
        subjob_key = f'{subjob_id}@{subjob_hostname}'
        old_status = local_job.children.get(subjob_key, {}).get('status')
        local_job.children[subjob_key] = subjob_data
        subjob_hostname = next((hostname.split('@')[1] for hostname in local_job.children if
                                hostname.split('@')[0] == subjob_id), None)
        local_job.children[f'{subjob_id}@{subjob_hostname}'] = subjob_data

        logname = f"<Parent: {local_job.id} | Child: {subjob_key}>"
        if old_status != subjob_status.value:
            logger.debug(f"Subjob status changed: {logname} -> {subjob_status.value}")
        logname = f"{local_job.id}:{subjob_id}@{subjob_hostname}"
        logger.debug(f"Subjob status changed: {logname} -> {subjob_status.value}")

        cls.download_missing_frames_from_subjob(local_job, subjob_id, subjob_hostname)
        # Download complete or partial render jobs
        if subjob_status in [RenderStatus.COMPLETED, RenderStatus.CANCELLED, RenderStatus.ERROR] and \
                subjob_data['file_count']:
            download_result = cls.download_from_subjob(local_job, subjob_id, subjob_hostname)
            if not download_result:
                # todo: handle error
                logger.error(f"Unable to download subjob files from {logname} with status {subjob_status.value}")

        if subjob_status == RenderStatus.CANCELLED or subjob_status == RenderStatus.ERROR:
            # todo: determine missing frames and schedule new job
            pass
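
    # Note: parent/child references throughout this class use the "<job_id>@<hostname>"
    # convention. A hedged parsing sketch (values illustrative):
    #
    #     child_key = 'abc123@render-node-02.local'
    #     child_id, child_hostname = child_key.split('@')[0], child_key.split('@')[-1]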

    @staticmethod
    def download_missing_frames_from_subjob(local_job, subjob_id, subjob_hostname):

        try:
            local_files = [os.path.basename(x) for x in local_job.file_list()]
            subjob_proxy = RenderServerProxy(subjob_hostname)
            subjob_files = subjob_proxy.get_job_files_list(job_id=subjob_id) or []

            for subjob_filename in subjob_files:
                if subjob_filename not in local_files:
                    try:
                        logger.debug(f"Downloading new file '{subjob_filename}' from {subjob_hostname}")
                        local_save_path = os.path.join(os.path.dirname(local_job.output_path), subjob_filename)
                        subjob_proxy.download_job_file(job_id=subjob_id, job_filename=subjob_filename,
                                                       save_path=local_save_path)
                        logger.debug(f'Downloaded successfully - {local_save_path}')
                    except Exception as e:
                        logger.error(f"Error downloading file '{subjob_filename}' from {subjob_hostname}: {e}")
        except Exception as e:
            logger.exception(f'Uncaught exception while trying to download from subjob: {e}')

    @staticmethod
    def download_all_from_subjob(local_job, subjob_id, subjob_hostname):
    def download_from_subjob(local_job, subjob_id, subjob_hostname):
        """
        Downloads and extracts files from a completed subjob on a remote server.

@@ -256,10 +140,10 @@ class DistributedJobManager:
        try:
            local_job.children[child_key]['download_status'] = 'working'
            logger.info(f"Downloading completed subjob files from {subjob_hostname} to localhost")
            RenderServerProxy(subjob_hostname).download_all_job_files(subjob_id, zip_file_path)
            RenderServerProxy(subjob_hostname).get_job_files(subjob_id, zip_file_path)
            logger.info(f"File transfer complete for {logname} - Transferred {get_file_size_human(zip_file_path)}")
        except Exception as e:
            logger.error(f"Error downloading files from remote server: {e}")
            logger.exception(f"Exception downloading files from remote server: {e}")
            local_job.children[child_key]['download_status'] = 'failed'
            return False

@@ -280,7 +164,6 @@ class DistributedJobManager:

    @classmethod
    def wait_for_subjobs(cls, local_job):
        # todo: rewrite this method
        logger.debug(f"Waiting for subjobs for job {local_job}")
        local_job.status = RenderStatus.WAITING_FOR_SUBJOBS
        statuses_to_download = [RenderStatus.CANCELLED, RenderStatus.ERROR, RenderStatus.COMPLETED]
@@ -320,10 +203,10 @@ class DistributedJobManager:

            # Check if the job is finished but has not had its files copied over yet
            if download_status is None and subjob_data['file_count'] and status in statuses_to_download:
                try:
                    cls.download_missing_frames_from_subjob(local_job, subjob_id, subjob_hostname)
                except Exception as e:
                    logger.error(f"Error downloading missing frames from subjob: {e}")
                download_result = cls.download_from_subjob(local_job, subjob_id, subjob_hostname)
                if not download_result:
                    logger.error("Failed to download from subjob")
                    # todo: error handling here

            # Any finished jobs not successfully downloaded at this point are skipped
            if local_job.children[child_key].get('download_status', None) is None and \
@@ -336,112 +219,87 @@ class DistributedJobManager:
                         f"{', '.join(list(subjobs_not_downloaded().keys()))}")
            time.sleep(5)

    # --------------------------------------------
    # Creating Subjobs
    # --------------------------------------------

    @classmethod
    def split_into_subjobs_async(cls, parent_worker, job_data, project_path, system_os=None):
        # todo: I don't love this
        parent_worker.status = RenderStatus.CONFIGURING
        cls.background_worker = threading.Thread(target=cls.split_into_subjobs, args=(parent_worker, job_data,
                                                                                      project_path, system_os))
        cls.background_worker.start()

    @classmethod
    def split_into_subjobs(cls, parent_worker, job_data, project_path, system_os=None, specific_servers=None):
        """
        Splits a job into subjobs and distributes them among available servers.

        This method checks the availability of servers, distributes the work among them, and creates subjobs on each
        server. If a server is the local host, it adjusts the frame range of the parent job instead of creating a
        subjob.

        Args:
            parent_worker (Worker): The worker that is handling the job.
            job_data (dict): The data for the job to be split.
            project_path (str): The path to the project associated with the job.
            system_os (str, optional): The operating system of the servers. Default is any OS.
            specific_servers (list, optional): List of specific servers to split work between. Defaults to all found.
        """
    def split_into_subjobs(cls, worker, job_data, project_path, system_os=None):

        # Check availability
        parent_worker.status = RenderStatus.CONFIGURING
        available_servers = specific_servers if specific_servers else cls.find_available_servers(parent_worker.renderer, system_os)
        available_servers = cls.find_available_servers(worker.renderer, system_os)
        logger.debug(f"Splitting into subjobs - Available servers: {available_servers}")
        all_subjob_server_data = cls.distribute_server_work(parent_worker.start_frame, parent_worker.end_frame, available_servers)
        subjob_servers = cls.distribute_server_work(worker.start_frame, worker.end_frame, available_servers)
        local_hostname = socket.gethostname()

        # Prep and submit these sub-jobs
        logger.info(f"Job {parent_worker.id} split plan: {all_subjob_server_data}")
        logger.info(f"Job {worker.id} split plan: {subjob_servers}")
        try:
            for subjob_data in all_subjob_server_data:
                subjob_hostname = subjob_data['hostname']
                if subjob_hostname != parent_worker.hostname:
                    post_results = cls.__create_subjob(job_data, project_path, subjob_data, subjob_hostname,
                                                       parent_worker)
                    if not post_results.ok:
                        raise ValueError(f"Failed to create subjob on {subjob_hostname}")

                    # save child info
                    submission_results = post_results.json()[0]
                    child_key = f"{submission_results['id']}@{subjob_hostname}"
                    parent_worker.children[child_key] = submission_results
            for server_data in subjob_servers:
                server_hostname = server_data['hostname']
                if server_hostname != local_hostname:
                    post_results = cls.__create_subjob(job_data, local_hostname, project_path, server_data,
                                                       server_hostname, worker)
                    if post_results.ok:
                        server_data['submission_results'] = post_results.json()[0]
                    else:
                        logger.error(f"Failed to create subjob on {server_hostname}")
                        break
                else:
                    # truncate parent render_job
                    parent_worker.start_frame = max(subjob_data['frame_range'][0], parent_worker.start_frame)
                    parent_worker.end_frame = min(subjob_data['frame_range'][-1], parent_worker.end_frame)
                    logger.info(f"Local job now rendering from {parent_worker.start_frame} to {parent_worker.end_frame}")
                    worker.start_frame = max(server_data['frame_range'][0], worker.start_frame)
                    worker.end_frame = min(server_data['frame_range'][-1], worker.end_frame)
                    logger.info(f"Local job now rendering from {worker.start_frame} to {worker.end_frame}")
                    server_data['submission_results'] = worker.json()

            # check that job posts were all successful.
            if not all(d.get('submission_results') is not None for d in subjob_servers):
                raise ValueError("Failed to create all subjobs")  # look into recalculating job #s and use existing jobs

            # start subjobs
            logger.debug(f"Created {len(all_subjob_server_data) - 1} subjobs successfully")
            parent_worker.name = f"{parent_worker.name}[{parent_worker.start_frame}-{parent_worker.end_frame}]"
            parent_worker.status = RenderStatus.NOT_STARTED  # todo: this won't work with scheduled starts
            logger.debug(f"Starting {len(subjob_servers) - 1} attempted subjobs")
            for server_data in subjob_servers:
                if server_data['hostname'] != local_hostname:
                    child_key = f"{server_data['submission_results']['id']}@{server_data['hostname']}"
                    worker.children[child_key] = server_data['submission_results']
            worker.name = f"{worker.name}[{worker.start_frame}-{worker.end_frame}]"

        except Exception as e:
            # cancel all the subjobs
            logger.error(f"Failed to split job into subjobs: {e}")
            logger.debug(f"Cancelling {len(all_subjob_server_data) - 1} attempted subjobs")
            RenderServerProxy(parent_worker.hostname).cancel_job(parent_worker.id, confirm=True)
            logger.debug(f"Cancelling {len(subjob_servers) - 1} attempted subjobs")
            # [RenderServerProxy(hostname).cancel_job(results['id'], confirm=True) for hostname, results in
            #  submission_results.items()]  # todo: fix this

    @staticmethod
    def __create_subjob(job_data, project_path, server_data, server_hostname, parent_worker):
    def __create_subjob(job_data, local_hostname, project_path, server_data, server_hostname, worker):
        subjob = job_data.copy()
        subjob['name'] = f"{parent_worker.name}[{server_data['frame_range'][0]}-{server_data['frame_range'][-1]}]"
        subjob['parent'] = f"{parent_worker.id}@{parent_worker.hostname}"
        subjob['name'] = f"{worker.name}[{server_data['frame_range'][0]}-{server_data['frame_range'][-1]}]"
        subjob['parent'] = f"{worker.id}@{local_hostname}"
        subjob['start_frame'] = server_data['frame_range'][0]
        subjob['end_frame'] = server_data['frame_range'][-1]
        subjob['engine_version'] = parent_worker.renderer_version
        logger.debug(f"Posting subjob with frames {subjob['start_frame']}-"
                     f"{subjob['end_frame']} to {server_hostname}")
        post_results = RenderServerProxy(server_hostname).post_job_to_server(
            file_path=project_path, job_list=[subjob])
        return post_results

    # --------------------------------------------
    # Server Handling
    # --------------------------------------------

    @staticmethod
    def distribute_server_work(start_frame, end_frame, available_servers, method='cpu_benchmark'):
    def distribute_server_work(start_frame, end_frame, available_servers, method='cpu_count'):
        """
        Splits the frame range among available servers proportionally based on their performance (CPU count).

        Args:
            start_frame (int): The start frame number of the animation to be rendered.
            end_frame (int): The end frame number of the animation to be rendered.
            available_servers (list): A list of available server dictionaries. Each server dictionary should include
                'hostname' and 'cpu_count' keys (see find_available_servers).
            method (str, optional): Specifies the distribution method. Possible values are 'cpu_benchmark',
                'cpu_count' and 'evenly'. Defaults to 'cpu_benchmark'.
        :param start_frame: int, The start frame number of the animation to be rendered.
        :param end_frame: int, The end frame number of the animation to be rendered.
        :param available_servers: list, A list of available server dictionaries. Each server dictionary should include
            'hostname' and 'cpu_count' keys (see find_available_servers)
        :param method: str, Optional. Specifies the distribution method. Possible values are 'cpu_count' and 'equally'

        Returns:
            list: A list of server dictionaries where each dictionary includes the frame range and total number of
                frames to be rendered by the server.

        :return: A list of server dictionaries where each dictionary includes the frame range and total number of frames
            to be rendered by the server.
        """

        # Calculate respective frames for each server
        def divide_frames_by_cpu_count(frame_start, frame_end, servers):
            total_frames = frame_end - frame_start + 1
            total_cpus = sum(server['cpu_count'] for server in servers)
            total_performance = sum(server['cpu_count'] for server in servers)

            frame_ranges = {}
            current_frame = frame_start
@@ -452,47 +310,7 @@ class DistributedJobManager:
                    # Give all remaining frames to the last server
                    num_frames = total_frames - allocated_frames
                else:
                    num_frames = round((server['cpu_count'] / total_cpus) * total_frames)
                allocated_frames += num_frames

                frame_end_for_server = current_frame + num_frames - 1

                if current_frame <= frame_end_for_server:
                    frame_ranges[server['hostname']] = (current_frame, frame_end_for_server)
                    current_frame = frame_end_for_server + 1

            return frame_ranges
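
        # Worked example (hedged, values illustrative): for frames 1-90 split across
        # servers with 8 and 4 CPUs, total_frames = 90 and total_cpus = 12, so the
        # first server gets round(8/12 * 90) = 60 frames (1-60) and the last server
        # takes the remaining 30 frames (61-90).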

        def divide_frames_by_benchmark(frame_start, frame_end, servers):

            def fetch_benchmark(server):
                try:
                    benchmark = requests.get(f'http://{server["hostname"]}:{ZeroconfServer.server_port}'
                                             f'/api/cpu_benchmark').text
                    server['cpu_benchmark'] = benchmark
                    logger.debug(f'Benchmark for {server["hostname"]}: {benchmark}')
                except requests.exceptions.RequestException as e:
                    logger.error(f'Error fetching benchmark for {server["hostname"]}: {e}')

            # Number of threads to use (can adjust based on your needs or number of servers)
            threads = len(servers)

            with ThreadPoolExecutor(max_workers=threads) as executor:
                executor.map(fetch_benchmark, servers)

            total_frames = frame_end - frame_start + 1
            total_performance = sum(int(server['cpu_benchmark']) for server in servers)

            frame_ranges = {}
            current_frame = frame_start
            allocated_frames = 0

            for i, server in enumerate(servers):
                if i == len(servers) - 1:  # if it's the last server
                    # Give all remaining frames to the last server
                    num_frames = total_frames - allocated_frames
                else:
                    num_frames = round((int(server['cpu_benchmark']) / total_performance) * total_frames)
                    num_frames = round((server['cpu_count'] / total_performance) * total_frames)
                allocated_frames += num_frames

                frame_end_for_server = current_frame + num_frames - 1
@@ -521,18 +339,12 @@ class DistributedJobManager:

            return frame_ranges

        if len(available_servers) == 1:
            breakdown = {available_servers[0]['hostname']: (start_frame, end_frame)}
        if method == 'equally':
            breakdown = divide_frames_equally(start_frame, end_frame, available_servers)
        # elif method == 'benchmark_score':  # todo: implement benchmark score
        #     pass
        else:
            logger.debug(f'Splitting between {len(available_servers)} servers by {method} method')
            if method == 'evenly':
                breakdown = divide_frames_equally(start_frame, end_frame, available_servers)
            elif method == 'cpu_benchmark':
                breakdown = divide_frames_by_benchmark(start_frame, end_frame, available_servers)
            elif method == 'cpu_count':
                breakdown = divide_frames_by_cpu_count(start_frame, end_frame, available_servers)
            else:
                raise ValueError(f"Invalid distribution method: {method}")
            breakdown = divide_frames_by_cpu_count(start_frame, end_frame, available_servers)

        server_breakdown = [server for server in available_servers if breakdown.get(server['hostname']) is not None]
        for server in server_breakdown:
@@ -558,17 +370,3 @@ class DistributedJobManager:
            available_servers.append(response)

        return available_servers


if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')

    ZeroconfServer.configure("_zordon._tcp.local.", 'testing', 8080)
    ZeroconfServer.start(listen_only=True)
    print("Starting Zeroconf...")
    time.sleep(2)
    available_servers = DistributedJobManager.find_available_servers('blender')
    print(f"AVAILABLE SERVERS ({len(available_servers)}): {available_servers}")
    # results = DistributedJobManager.distribute_server_work(1, 100, available_servers)
    # print(f"RESULTS: {results}")
    ZeroconfServer.stop()

@@ -1,6 +1,5 @@
import logging
import re
import threading

import requests

@@ -44,13 +43,11 @@ class BlenderDownloader(EngineDownloader):
        response = requests.get(base_url, timeout=5)
        response.raise_for_status()

        versions_pattern = \
            r'<a href="(?P<file>[^"]+)">blender-(?P<version>[\d\.]+)-(?P<system_os>\w+)-(?P<cpu>\w+).*</a>'
        versions_pattern = r'<a href="(?P<file>[^"]+)">blender-(?P<version>[\d\.]+)-(?P<system_os>\w+)-(?P<cpu>\w+).*</a>'
        versions_data = [match.groupdict() for match in re.finditer(versions_pattern, response.text)]

        # Filter to just the supported formats
        versions_data = [item for item in versions_data if any(item["file"].endswith(ext) for ext in
                                                               supported_formats)]
        versions_data = [item for item in versions_data if any(item["file"].endswith(ext) for ext in supported_formats)]

        # Filter down OS and CPU
        system_os = system_os or current_system_os()
@@ -81,31 +78,6 @@ class BlenderDownloader(EngineDownloader):

        return lts_versions

    @classmethod
    def all_versions(cls, system_os=None, cpu=None):
        majors = cls.__get_major_versions()
        all_versions = []
        threads = []
        results = [[] for _ in majors]

        def thread_function(major_version, index, system_os, cpu):
            results[index] = cls.__get_minor_versions(major_version, system_os, cpu)

        for i, m in enumerate(majors):
            thread = threading.Thread(target=thread_function, args=(m, i, system_os, cpu))
            threads.append(thread)
            thread.start()

        # Wait for all threads to complete
        for thread in threads:
            thread.join()

        # Extend all_versions with the results from each thread
        for result in results:
            all_versions.extend(result)

        return all_versions
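
    # Hedged usage sketch: each major version is scraped on its own thread, so the
    # full catalog for one platform (arguments illustrative) is a single call:
    #
    #     versions = BlenderDownloader.all_versions(system_os='macos', cpu='arm64')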

    @classmethod
    def find_most_recent_version(cls, system_os=None, cpu=None, lts_only=False):
        try:
@@ -133,10 +105,11 @@ class BlenderDownloader(EngineDownloader):
        try:
            logger.info(f"Requesting download of blender-{version}-{system_os}-{cpu}")
            major_version = '.'.join(version.split('.')[:2])
            minor_versions = [x for x in cls.__get_minor_versions(major_version, system_os, cpu) if
                              x['version'] == version]
            minor_versions = [x for x in cls.__get_minor_versions(major_version, system_os, cpu) if x['version'] == version]
            # we get the URL instead of calculating it ourselves. May change this

            cls.download_and_extract_app(remote_url=minor_versions[0]['url'], download_location=download_location,
                                         timeout=timeout)
                                         timeout=timeout)
        except IndexError:
            logger.error("Cannot find requested engine")

@@ -144,4 +117,5 @@ class BlenderDownloader(EngineDownloader):
if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')

    print(BlenderDownloader.find_most_recent_version())
    print(BlenderDownloader.__get_major_versions())


@@ -1,6 +1,5 @@
import json
import re
from concurrent.futures import ThreadPoolExecutor

from src.engines.core.base_engine import *
from src.utilities.misc_helper import system_safe_path
@@ -23,10 +22,6 @@ class Blender(BaseRenderEngine):
        from src.engines.blender.blender_worker import BlenderRenderWorker
        return BlenderRenderWorker

    def ui_options(self):
        from src.engines.blender.blender_ui import BlenderUI
        return BlenderUI.get_options(self)

    @staticmethod
    def supported_extensions():
        return ['blend']
@@ -57,27 +52,25 @@ class Blender(BaseRenderEngine):
        else:
            raise FileNotFoundError(f'Project file not found: {project_path}')

    def run_python_script(self, script_path, project_path=None, timeout=None):

        if project_path and not os.path.exists(project_path):
    def run_python_script(self, project_path, script_path, timeout=None):
        if os.path.exists(project_path) and os.path.exists(script_path):
            try:
                return subprocess.run([self.renderer_path(), '-b', project_path, '--python', script_path],
                                      capture_output=True, timeout=timeout)
            except Exception as e:
                logger.warning(f"Error running python script in blender: {e}")
                pass
        elif not os.path.exists(project_path):
            raise FileNotFoundError(f'Project file not found: {project_path}')
        elif not os.path.exists(script_path):
            raise FileNotFoundError(f'Python script not found: {script_path}')

        try:
            command = [self.renderer_path(), '-b', '--python', script_path]
            if project_path:
                command.insert(2, project_path)
            return subprocess.run(command, capture_output=True, timeout=timeout)
        except Exception as e:
            logger.exception(f"Error running python script in blender: {e}")
            raise Exception("Uncaught exception")

    def get_project_info(self, project_path, timeout=10):
        scene_info = {}
        try:
            script_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'scripts', 'get_file_info.py')
            results = self.run_python_script(project_path=project_path, script_path=system_safe_path(script_path),
                                             timeout=timeout)
            results = self.run_python_script(project_path, system_safe_path(script_path), timeout=timeout)
            result_text = results.stdout.decode()
            for line in result_text.splitlines():
                if line.startswith('SCENE_DATA:'):
@@ -95,8 +88,7 @@ class Blender(BaseRenderEngine):
        try:
            logger.info(f"Starting to pack Blender file: {project_path}")
            script_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'scripts', 'pack_project.py')
            results = self.run_python_script(project_path=project_path, script_path=system_safe_path(script_path),
                                             timeout=timeout)
            results = self.run_python_script(project_path, system_safe_path(script_path), timeout=timeout)

            result_text = results.stdout.decode()
            dir_name = os.path.dirname(project_path)
@@ -148,26 +140,12 @@ class Blender(BaseRenderEngine):

        return options

    def system_info(self):
        with ThreadPoolExecutor() as executor:
            future_render_devices = executor.submit(self.get_render_devices)
            future_engines = executor.submit(self.supported_render_engines)
            render_devices = future_render_devices.result()
            engines = future_engines.result()

        return {'render_devices': render_devices, 'engines': engines}

    def get_render_devices(self):
        script_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'scripts', 'get_system_info.py')
        results = self.run_python_script(script_path=script_path)
        output = results.stdout.decode()
        match = re.search(r"GPU DATA:(\[[\s\S]*\])", output)
        if match:
            gpu_data_json = match.group(1)
            gpus_info = json.loads(gpu_data_json)
            return gpus_info
        else:
            logger.error("GPU data not found in the output.")
    def get_detected_gpus(self):
        # no longer works on 4.0
        engine_output = subprocess.run([self.renderer_path(), '-E', 'help'], timeout=SUBPROCESS_TIMEOUT,
                                       capture_output=True).stdout.decode('utf-8')
        gpu_names = re.findall(r"DETECTED GPU: (.+)", engine_output)
        return gpu_names
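
    # Illustrative (assumed) stdout line that the regex above extracts a name from;
    # the GPU model is made up for the example:
    #
    #     DETECTED GPU: Apple M1 Max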

    def supported_render_engines(self):
        engine_output = subprocess.run([self.renderer_path(), '-E', 'help'], timeout=SUBPROCESS_TIMEOUT,
@@ -175,11 +153,18 @@ class Blender(BaseRenderEngine):
        render_engines = [x.strip() for x in engine_output.split('Blender Engine Listing:')[-1].strip().splitlines()]
        return render_engines

    # UI and setup
    def get_options(self):
        options = [
            {'name': 'engine', 'options': self.supported_render_engines()},
        ]
        return options

    def perform_presubmission_tasks(self, project_path):
        packed_path = self.pack_project_file(project_path, timeout=30)
        return packed_path


if __name__ == "__main__":
    x = Blender().get_render_devices()
    x = Blender.get_detected_gpus()
    print(x)

@@ -1,9 +0,0 @@

class BlenderUI:
    @staticmethod
    def get_options(instance):
        options = [
            {'name': 'engine', 'options': instance.supported_render_engines()},
            {'name': 'render_device', 'options': ['Any', 'GPU', 'CPU']},
        ]
        return options
@@ -12,7 +12,13 @@ class BlenderRenderWorker(BaseRenderWorker):
    engine = Blender

    def __init__(self, input_path, output_path, engine_path, args=None, parent=None, name=None):
        super(BlenderRenderWorker, self).__init__(input_path=input_path, output_path=output_path, engine_path=engine_path, args=args, parent=parent, name=name)
        super(BlenderRenderWorker, self).__init__(input_path=input_path, output_path=output_path,
                                                  engine_path=engine_path, args=args, parent=parent, name=name)

        # Args
        self.blender_engine = self.args.get('engine', 'BLENDER_EEVEE').upper()
        self.export_format = self.args.get('export_format', None) or 'JPEG'
        self.camera = self.args.get('camera', None)

        # Stats
        self.__frame_percent_complete = 0.0
@@ -31,44 +37,16 @@ class BlenderRenderWorker(BaseRenderWorker):
        cmd.append('-b')
        cmd.append(self.input_path)

        # Start Python expressions - # todo: investigate splitting into separate 'setup' script
        # Python expressions
        cmd.append('--python-expr')
        python_exp = 'import bpy; bpy.context.scene.render.use_overwrite = False;'

        # Setup Custom Camera
        custom_camera = self.args.get('camera', None)
        if custom_camera:
            python_exp = python_exp + f"bpy.context.scene.camera = bpy.data.objects['{custom_camera}'];"

        # Setup Render Engines
        self.args['engine'] = self.args.get('engine', 'CYCLES').upper()  # set default render engine
        # Configure Cycles
        if self.args['engine'] == 'CYCLES':
            # Set Render Device (gpu/cpu/any)
            render_device = self.args.get('render_device', 'any').lower()
            if render_device not in ['any', 'gpu', 'cpu']:
                raise AttributeError(f"Invalid Cycles render device: {render_device}")

            use_gpu = render_device in {'any', 'gpu'}
            use_cpu = render_device in {'any', 'cpu'}

            python_exp = python_exp + ("exec(\"for device in bpy.context.preferences.addons["
                                       f"'cycles'].preferences.devices: device.use = {use_cpu} if device.type == 'CPU'"
                                       f" else {use_gpu}\")")

        # -- insert any other python exp checks / generators here --

        # End Python expressions here
        if self.camera:
            python_exp = python_exp + f"bpy.context.scene.camera = bpy.data.objects['{self.camera}'];"
        # insert any other python exp checks here
        cmd.append(python_exp)

        # Export format
        export_format = self.args.get('export_format', None) or 'JPEG'

        main_part, ext = os.path.splitext(self.output_path)
        # Remove the extension only if it is not composed entirely of digits
        path_without_ext = main_part if not ext[1:].isdigit() else self.output_path
        path_without_ext += "_"
        cmd.extend(['-E', blender_engine, '-o', path_without_ext, '-F', export_format])
        path_without_ext = os.path.splitext(self.output_path)[0] + "_"
        cmd.extend(['-E', self.blender_engine, '-o', path_without_ext, '-F', self.export_format])

        # set frame range
        cmd.extend(['-s', self.start_frame, '-e', self.end_frame, '-a'])
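
        # Hedged sketch of the final command this method assembles for a CYCLES job
        # (paths and frame numbers illustrative):
        #
        #     blender -b shot_010.blend --python-expr "import bpy; ..." \
        #         -E CYCLES -o /renders/shot_010_ -F JPEG -s 1 -e 240 -a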
@@ -106,30 +84,22 @@ class BlenderRenderWorker(BaseRenderWorker):
        elif line.lower().startswith('error'):
            self.log_error(line)
        elif 'Saved' in line or 'Saving' in line or 'quit' in line:
            render_stats_match = re.match(r'Time: (.*) \(Saving', line)
            output_filename_match = re.match(r"Saved: .*_(\d+)\.\w+", line)  # try to get frame # from filename
            if output_filename_match:
                output_file_number = output_filename_match.groups()[0]
                try:
                    self.current_frame = int(output_file_number)
                    self._send_frame_complete_notification()
                except ValueError:
                    pass
            elif render_stats_match:
                time_completed = render_stats_match.groups()[0]
            match = re.match(r'Time: (.*) \(Saving', line)
            if match:
                time_completed = match.groups()[0]
                frame_count = self.current_frame - self.end_frame + self.total_frames
                logger.info(f'Frame #{self.current_frame} - '
                            f'{frame_count} of {self.total_frames} completed in {time_completed} | '
                            f'Total Elapsed Time: {datetime.now() - self.start_time}')
            else:
                logger.debug(line)
        else:
            pass
            # if len(line.strip()):
            #     logger.debug(line.strip())

    def percent_complete(self):
        if self.status == RenderStatus.COMPLETED:
            return 1
        elif self.total_frames <= 1:
        if self.total_frames <= 1:
            return self.__frame_percent_complete
        else:
            whole_frame_percent = (self.current_frame - self.start_frame) / self.total_frames

@@ -1,17 +0,0 @@
import bpy
import json

# Ensure Cycles is available
bpy.context.preferences.addons['cycles'].preferences.get_devices()

# Collect the devices information
devices_info = []
for device in bpy.context.preferences.addons['cycles'].preferences.devices:
    devices_info.append({
        "name": device.name,
        "type": device.type,
        "use": device.use
    })

# Print the devices information in JSON format
print("GPU DATA:" + json.dumps(devices_info))
@@ -98,7 +98,7 @@ class EngineDownloader:
                zip_ref.extractall(download_location)
                logger.info(
                    f'Successfully extracted {os.path.basename(temp_downloaded_file_path)} to {download_location}')
            except zipfile.BadZipFile:
            except zipfile.BadZipFile as e:
                logger.error(f'Error: {temp_downloaded_file_path} is not a valid ZIP file.')
            except FileNotFoundError:
                logger.error(f'File not found: {temp_downloaded_file_path}')
@@ -110,8 +110,7 @@ class EngineDownloader:
            for mount_point in dmg.attach():
                try:
                    copy_directory_contents(mount_point, os.path.join(download_location, output_dir_name))
                    logger.info(f'Successfully copied {os.path.basename(temp_downloaded_file_path)} '
                                f'to {download_location}')
                    logger.info(f'Successfully copied {os.path.basename(temp_downloaded_file_path)} to {download_location}')
                except FileNotFoundError:
                    logger.error(f'Error: The source .app bundle does not exist.')
                except PermissionError:

@@ -13,13 +13,9 @@ class BaseRenderEngine(object):

    def __init__(self, custom_path=None):
        self.custom_renderer_path = custom_path
        if not self.renderer_path() or not os.path.exists(self.renderer_path()):
        if not self.renderer_path():
            raise FileNotFoundError(f"Cannot find path to renderer for {self.name()} instance")

        if not os.access(self.renderer_path(), os.X_OK):
            logger.warning(f"Path is not executable. Setting permissions to 755 for {self.renderer_path()}")
            os.chmod(self.renderer_path(), 0o755)

    def renderer_path(self):
        return self.custom_renderer_path or self.default_renderer_path()

@@ -51,9 +47,6 @@ class BaseRenderEngine(object):
    def worker_class():  # override when subclassing to link worker class
        raise NotImplementedError("Worker class not implemented")

    def ui_options(self):  # override to return options for ui
        return {}

    def get_help(self):  # override if renderer uses different help flag
        path = self.renderer_path()
        if not path:
@@ -69,11 +62,13 @@ class BaseRenderEngine(object):
    def get_output_formats(cls):
        raise NotImplementedError(f"get_output_formats not implemented for {cls.__name__}")

    def get_arguments(self):
    @classmethod
    def get_arguments(cls):
        pass

    def system_info(self):
        pass
    def get_options(self):  # override to return options for ui
        return {}

    def perform_presubmission_tasks(self, project_path):
        return project_path


@@ -5,14 +5,12 @@ import logging
import os
import subprocess
import threading
import time
from datetime import datetime

import psutil
from pubsub import pub
from sqlalchemy import Column, Integer, String, DateTime, JSON
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.mutable import MutableDict

from src.utilities.misc_helper import get_time_elapsed
from src.utilities.status_utils import RenderStatus, string_to_status
@@ -25,7 +23,6 @@ class BaseRenderWorker(Base):
    __tablename__ = 'render_workers'

    id = Column(String, primary_key=True)
    hostname = Column(String, nullable=True)
    input_path = Column(String)
    output_path = Column(String)
    date_created = Column(DateTime)
@@ -39,8 +36,7 @@ class BaseRenderWorker(Base):
    start_frame = Column(Integer)
    end_frame = Column(Integer, nullable=True)
    parent = Column(String, nullable=True)
    children = Column(MutableDict.as_mutable(JSON))
    args = Column(MutableDict.as_mutable(JSON))
    children = Column(JSON)
    name = Column(String)
    file_hash = Column(String)
    _status = Column(String)
@@ -64,7 +60,6 @@ class BaseRenderWorker(Base):

        # Essential Info
        self.id = generate_id()
        self.hostname = None
        self.input_path = input_path
        self.output_path = output_path
        self.args = args or {}
@@ -77,12 +72,11 @@ class BaseRenderWorker(Base):
        self.parent = parent
        self.children = {}
        self.name = name or os.path.basename(input_path)
        self.maximum_attempts = 3

        # Frame Ranges
        self.project_length = 0  # is this necessary?
        self.current_frame = 0
        self.start_frame = 0
        self.project_length = -1
        self.current_frame = 0  # should this be a 1 ?
        self.start_frame = 0  # should this be a 1 ?
        self.end_frame = None

        # Logging
@@ -90,20 +84,15 @@ class BaseRenderWorker(Base):
        self.end_time = None

        # History
        self.status = RenderStatus.NOT_STARTED
        self.status = RenderStatus.CONFIGURING
        self.warnings = []
        self.errors = []

        # Threads and processes
        self.__thread = threading.Thread(target=self.__run, args=())
        self.__thread = threading.Thread(target=self.run, args=())
        self.__thread.daemon = True
        self.__process = None
        self.last_output = None
        self.__last_output_time = None
        self.watchdog_timeout = 120

    def __repr__(self):
        return f"<Job id:{self.id} p{self.priority} {self.renderer}-{self.renderer_version} '{self.name}' status:{self.status.value}>"

    @property
    def total_frames(self):
@@ -127,16 +116,19 @@ class BaseRenderWorker(Base):
            self._status = RenderStatus.CANCELLED.value
        return string_to_status(self._status)

    def _send_frame_complete_notification(self):
        pub.sendMessage('frame_complete', job_id=self.id, frame_number=self.current_frame)
    def validate(self):
        if not os.path.exists(self.input_path):
            raise FileNotFoundError(f"Cannot find input path: {self.input_path}")
        self.generate_subprocess()

    def generate_subprocess(self):
        # Convert raw args from string if available and catch conflicts
        generated_args = [str(x) for x in self.generate_worker_subprocess()]
        generated_args_flags = [x for x in generated_args if x.startswith('-')]
        if len(generated_args_flags) != len(set(generated_args_flags)):
            msg = f"Cannot generate subprocess - Multiple arg conflicts detected: {generated_args}"
            msg = "Cannot generate subprocess - Multiple arg conflicts detected"
            logger.error(msg)
            logger.debug(f"Generated args for subprocess: {generated_args}")
            raise ValueError(msg)
        return generated_args
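
    # Hedged example of the conflict check above: generated args such as
    # ['-o', '/a', '-o', '/b'] contain a duplicate flag, so generate_subprocess()
    # raises ValueError instead of launching an ambiguous render.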

@@ -164,33 +156,32 @@ class BaseRenderWorker(Base):

        if not os.path.exists(self.input_path):
            self.status = RenderStatus.ERROR
            msg = f'Cannot find input path: {self.input_path}'
            msg = 'Cannot find input path: {}'.format(self.input_path)
            logger.error(msg)
            self.errors.append(msg)
            return

        if not os.path.exists(self.renderer_path):
            self.status = RenderStatus.ERROR
            msg = f'Cannot find render engine path for {self.engine.name()}'
            msg = 'Cannot find render engine path for {}'.format(self.engine.name())
            logger.error(msg)
            self.errors.append(msg)
            return

        self.status = RenderStatus.RUNNING
        self.start_time = datetime.now()
        self.__thread.start()

    def __run(self):
        logger.info(f'Starting {self.engine.name()} {self.renderer_version} Render for {self.input_path} | '
                    f'Frame Count: {self.total_frames}')
        self.__thread.start()

    def run(self):
        # Setup logging
        log_dir = os.path.dirname(self.log_path())
        os.makedirs(log_dir, exist_ok=True)

        subprocess_cmds = self.generate_subprocess()
        initial_file_count = len(self.file_list())
        failed_attempts = 0
        attempt_number = 0

        with open(self.log_path(), "a") as f:

@@ -201,55 +192,47 @@ class BaseRenderWorker(Base):

            while True:
                # Log attempt #
                if failed_attempts:
                    if failed_attempts >= self.maximum_attempts:
                        err_msg = f"Maximum attempts exceeded ({self.maximum_attempts})"
                        logger.error(err_msg)
                        self.status = RenderStatus.ERROR
                        self.errors.append(err_msg)
                        return
                    else:
                        f.write(f'\n{"=" * 20} Attempt #{failed_attempts + 1} {"=" * 20}\n\n')
                        logger.warning(f"Restarting render - Attempt #{failed_attempts + 1}")
                        self.status = RenderStatus.RUNNING
                if attempt_number:
                    f.write(f'\n{"=" * 80} Attempt #{attempt_number} {"=" * 30}\n\n')
                    logger.warning(f"Restarting render - Attempt #{attempt_number}")
                attempt_number += 1

                return_code = self.__setup_and_run_process(f, subprocess_cmds)
                # Start process and get updates
                self.__process = subprocess.Popen(subprocess_cmds, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                                                  universal_newlines=False)

                for c in io.TextIOWrapper(self.__process.stdout, encoding="utf-8"):  # or another encoding
                    f.write(c)
                    self.last_output = c.strip()
                    self._parse_stdout(c.strip())

                f.write('\n')

                # Check return codes and process
                return_code = self.__process.wait()
                self.end_time = datetime.now()

                message = f"{'=' * 50}\n\n{self.engine.name()} render ended with code {return_code} " \
                          f"after {self.time_elapsed()}\n\n"
                f.write(message)

                # Teardown
                if self.status in [RenderStatus.CANCELLED, RenderStatus.ERROR]:
                if self.status in [RenderStatus.CANCELLED, RenderStatus.ERROR]:  # user cancelled
                    message = f"{self.engine.name()} render ended with status '{self.status}' " \
                              f"after {self.time_elapsed()}"
                    f.write(message)
                    return

                # if file output hasn't increased, return as error, otherwise restart process.
                file_count_has_increased = len(self.file_list()) > initial_file_count
                if (self.status == RenderStatus.RUNNING) and file_count_has_increased and not return_code:
                    message = (f"{'=' * 50}\n\n{self.engine.name()} render completed successfully in "
                               f"{self.time_elapsed()}\n")
                if not return_code:
                    message = f"{'=' * 50}\n\n{self.engine.name()} render completed successfully in {self.time_elapsed()}"
                    f.write(message)
                    break

                if return_code:
                    err_msg = f"{self.engine.name()} render failed with code {return_code}"
                    logger.error(err_msg)
                    self.errors.append(err_msg)
                # Handle non-zero return codes
                message = f"{'=' * 50}\n\n{self.engine.name()} render failed with code {return_code} " \
                          f"after {self.time_elapsed()}"
                f.write(message)
                self.errors.append(message)

                # handle instances where the renderer exits ok but doesn't generate files
                if not return_code and not file_count_has_increased:
                    err_msg = (f"{self.engine.name()} render exited ok, but file count has not increased. "
                               f"Count is still {len(self.file_list())}")
                    f.write(f'Error: {err_msg}\n\n')
                    self.errors.append(err_msg)

                # only count the attempt as failed if renderer creates no output - ignore error codes for now
                if not file_count_has_increased:
                    failed_attempts += 1
                # if file output hasn't increased, return as error, otherwise restart process.
                if len(self.file_list()) <= initial_file_count:
                    self.status = RenderStatus.ERROR
                    return

            if self.children:
                from src.distributed_job_manager import DistributedJobManager
@@ -261,65 +244,6 @@ class BaseRenderWorker(Base):
        self.status = RenderStatus.COMPLETED
        logger.info(f"Render {self.id}-{self.name} completed successfully after {self.time_elapsed()}")

    def __setup_and_run_process(self, f, subprocess_cmds):

        def watchdog():
            logger.debug(f'Starting process watchdog for {self} with {self.watchdog_timeout}s timeout')
            while self.__process.poll() is None:
                time_since_last_update = time.time() - self.__last_output_time
                if time_since_last_update > self.watchdog_timeout:
                    logger.error(f"Process for {self} terminated due to exceeding timeout ({self.watchdog_timeout}s)")
                    self.__process.kill()
                    break
                # logger.debug(f'Watchdog for {self} - Time since last update: {time_since_last_update}')
                time.sleep(1)

            logger.debug(f'Stopping process watchdog for {self}')

        return_code = -1
        watchdog_thread = threading.Thread(target=watchdog)
        watchdog_thread.daemon = True

        try:
            # Start process and get updates
            self.__process = subprocess.Popen(subprocess_cmds, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                                              universal_newlines=False)

            # Start watchdog
            self.__last_output_time = time.time()
            watchdog_thread.start()

            for c in io.TextIOWrapper(self.__process.stdout, encoding="utf-8"):  # or another encoding
                self.last_output = c.strip()
                self.__last_output_time = time.time()
                try:
                    f.write(c)
                    f.flush()
                    os.fsync(f.fileno())
                except Exception as e:
                    logger.error(f"Error saving log to disk: {e}")

                try:
                    self._parse_stdout(c.strip())
                except Exception as e:
                    logger.error(f'Error parsing stdout: {e}')

            f.write('\n')

            # Check return codes and process
            return_code = self.__process.wait()
        except Exception as e:
            message = f'Uncaught error running render process: {e}'
            f.write(message)
            logger.exception(message)
            self.__process.kill()

        # let watchdog end before continuing - prevents multiple watchdogs running when process restarts
        if watchdog_thread.is_alive():
            watchdog_thread.join()

        return return_code

    def post_processing(self):
        pass

@@ -352,8 +276,6 @@ class BaseRenderWorker(Base):
        self.status = RenderStatus.CANCELLED

    def percent_complete(self):
        if self.status == RenderStatus.COMPLETED:
            return 1.0
        return 0

    def _parse_stdout(self, line):
@@ -375,7 +297,6 @@ class BaseRenderWorker(Base):
        job_dict = {
            'id': self.id,
            'name': self.name,
            'hostname': self.hostname,
            'input_path': self.input_path,
            'output_path': self.output_path,
            'priority': self.priority,
@@ -395,8 +316,7 @@ class BaseRenderWorker(Base):
            'end_frame': self.end_frame,
            'total_frames': self.total_frames,
            'last_output': getattr(self, 'last_output', None),
            'log_path': self.log_path(),
            'args': self.args
            'log_path': self.log_path()
        }

        # convert to json and back to auto-convert dates to iso format

@@ -2,7 +2,6 @@ import logging
import os
import shutil
import threading
import concurrent.futures

from src.engines.blender.blender_engine import Blender
from src.engines.ffmpeg.ffmpeg_engine import FFMPEG
@@ -27,77 +26,57 @@ class EngineManager:
        return obj

    @classmethod
    def get_engines(cls, filter_name=None):
    def all_engines(cls):

        if not cls.engines_path:
            raise FileNotFoundError("Engine path is not set")
            raise FileNotFoundError("Engines path must be set before requesting downloads")

        # Parse downloaded engine directory
        results = []
        try:
            all_items = os.listdir(cls.engines_path)
            all_directories = [item for item in all_items if os.path.isdir(os.path.join(cls.engines_path, item))]
            keys = ["engine", "version", "system_os", "cpu"]  # Define keys for result dictionary

            for directory in all_directories:
                # Split directory name into segments
                # Split the input string by dashes to get segments
                segments = directory.split('-')
                # Create a dictionary mapping keys to corresponding segments

                # Create a dictionary with named keys
                keys = ["engine", "version", "system_os", "cpu"]
                result_dict = {keys[i]: segments[i] for i in range(min(len(keys), len(segments)))}
                result_dict['type'] = 'managed'

                # Initialize binary_name with engine name
                # Figure out the binary name for the path
                binary_name = result_dict['engine'].lower()
                # Determine the correct binary name based on the engine and system_os
                for eng in cls.supported_engines():
                    if eng.name().lower() == result_dict['engine']:
                        binary_name = eng.binary_names.get(result_dict['system_os'], binary_name)

                # Find path to binary
                path = None
                for root, _, files in os.walk(system_safe_path(os.path.join(cls.engines_path, directory))):
                    if binary_name in files:
                        path = os.path.join(root, binary_name)
                        break

                # Find the path to the binary file
                path = next(
                    (os.path.join(root, binary_name) for root, _, files in
                     os.walk(system_safe_path(os.path.join(cls.engines_path, directory))) if binary_name in files),
                    None
                )

                result_dict['path'] = path
                # Add the result dictionary to results if it matches the filter_name or if no filter is applied
                if not filter_name or filter_name == result_dict['engine']:
                    results.append(result_dict)
                results.append(result_dict)
        except FileNotFoundError as e:
            logger.warning(f"Cannot find local engines download directory: {e}")

        # add system installs to this list - use bg thread because it can be slow
        def fetch_engine_details(eng):
            return {
                'engine': eng.name(),
                'version': eng().version(),
                'system_os': current_system_os(),
                'cpu': current_system_cpu(),
                'path': eng.default_renderer_path(),
                'type': 'system'
            }

        with concurrent.futures.ThreadPoolExecutor() as executor:
            futures = {
                executor.submit(fetch_engine_details, eng): eng.name()
                for eng in cls.supported_engines()
                if eng.default_renderer_path() and (not filter_name or filter_name == eng.name())
            }

            for future in concurrent.futures.as_completed(futures):
                result = future.result()
                if result:
                    results.append(result)
        # add system installs to this list
        for eng in cls.supported_engines():
            if eng.default_renderer_path():
                results.append({'engine': eng.name(), 'version': eng().version(),
                                'system_os': current_system_os(),
                                'cpu': current_system_cpu(),
                                'path': eng.default_renderer_path(), 'type': 'system'})

        return results
|
||||
|
||||
@classmethod
|
||||
def all_versions_for_engine(cls, engine_name):
|
||||
versions = cls.get_engines(filter_name=engine_name)
|
||||
sorted_versions = sorted(versions, key=lambda x: x['version'], reverse=True)
|
||||
return sorted_versions
|
||||
def all_versions_for_engine(cls, engine):
|
||||
return [x for x in cls.all_engines() if x['engine'] == engine]
|
||||
|
||||
@classmethod
|
||||
def newest_engine_version(cls, engine, system_os=None, cpu=None):
|
||||
@@ -105,20 +84,20 @@ class EngineManager:
|
||||
cpu = cpu or current_system_cpu()
|
||||
|
||||
try:
|
||||
filtered = [x for x in cls.all_versions_for_engine(engine) if x['system_os'] == system_os and
|
||||
x['cpu'] == cpu]
|
||||
return filtered[0]
|
||||
filtered = [x for x in cls.all_engines() if x['engine'] == engine and x['system_os'] == system_os and x['cpu'] == cpu]
|
||||
versions = sorted(filtered, key=lambda x: x['version'], reverse=True)
|
||||
return versions[0]
|
||||
except IndexError:
|
||||
logger.error(f"Cannot find newest engine version for {engine}-{system_os}-{cpu}")
|
||||
return None
|
||||
return None
|
||||
|
||||
@classmethod
|
||||
def is_version_downloaded(cls, engine, version, system_os=None, cpu=None):
|
||||
system_os = system_os or current_system_os()
|
||||
cpu = cpu or current_system_cpu()
|
||||
|
||||
filtered = [x for x in cls.get_engines(filter_name=engine) if x['system_os'] == system_os and
|
||||
x['cpu'] == cpu and x['version'] == version]
|
||||
filtered = [x for x in cls.all_engines() if
|
||||
x['engine'] == engine and x['system_os'] == system_os and x['cpu'] == cpu and x['version'] == version]
|
||||
return filtered[0] if filtered else False
|
||||
|
||||
@classmethod
|
||||
@@ -127,7 +106,6 @@ class EngineManager:
|
||||
downloader = cls.engine_with_name(engine).downloader()
|
||||
return downloader.version_is_available_to_download(version=version, system_os=system_os, cpu=cpu)
|
||||
except Exception as e:
|
||||
logger.debug(f"Exception in version_is_available_to_download: {e}")
|
||||
return None
|
||||
|
||||
@classmethod
|
||||
@@ -136,11 +114,10 @@ class EngineManager:
|
||||
downloader = cls.engine_with_name(engine).downloader()
|
||||
return downloader.find_most_recent_version(system_os=system_os, cpu=cpu)
|
||||
except Exception as e:
|
||||
logger.debug(f"Exception in find_most_recent_version: {e}")
|
||||
return None
|
||||
|
||||
@classmethod
|
||||
def get_existing_download_task(cls, engine, version, system_os=None, cpu=None):
|
||||
def is_already_downloading(cls, engine, version, system_os=None, cpu=None):
|
||||
for task in cls.download_tasks:
|
||||
task_parts = task.name.split('-')
|
||||
task_engine, task_version, task_system_os, task_cpu = task_parts[:4]
|
||||
@@ -148,17 +125,26 @@ class EngineManager:
|
||||
if engine == task_engine and version == task_version:
|
||||
if system_os in (task_system_os, None) and cpu in (task_cpu, None):
|
||||
return task
|
||||
return None
|
||||
return False
|
||||
|
||||
@classmethod
|
||||
def download_engine(cls, engine, version, system_os=None, cpu=None, background=False):
|
||||
def download_engine_task(engine, version, system_os=None, cpu=None):
|
||||
existing_download = cls.is_version_downloaded(engine, version, system_os, cpu)
|
||||
if existing_download:
|
||||
logger.info(f"Requested download of {engine} {version}, but local copy already exists")
|
||||
return existing_download
|
||||
|
||||
# Get the appropriate downloader class based on the engine type
|
||||
cls.engine_with_name(engine).downloader().download_engine(version, download_location=cls.engines_path,
|
||||
system_os=system_os, cpu=cpu, timeout=300)
|
||||
|
||||
engine_to_download = cls.engine_with_name(engine)
|
||||
existing_task = cls.get_existing_download_task(engine, version, system_os, cpu)
|
||||
existing_task = cls.is_already_downloading(engine, version, system_os, cpu)
|
||||
if existing_task:
|
||||
logger.debug(f"Already downloading {engine} {version}")
|
||||
if not background:
|
||||
existing_task.join() # If download task exists, wait until it's done downloading
|
||||
existing_task.join() # If download task exists, wait until its done downloading
|
||||
return
|
||||
elif not engine_to_download.downloader():
|
||||
logger.warning("No valid downloader for this engine. Please update this software manually.")
|
||||
@@ -166,18 +152,20 @@ class EngineManager:
|
||||
elif not cls.engines_path:
|
||||
raise FileNotFoundError("Engines path must be set before requesting downloads")
|
||||
|
||||
thread = EngineDownloadWorker(engine, version, system_os, cpu)
|
||||
thread = threading.Thread(target=download_engine_task, args=(engine, version, system_os, cpu),
|
||||
name=f'{engine}-{version}-{system_os}-{cpu}')
|
||||
cls.download_tasks.append(thread)
|
||||
thread.start()
|
||||
|
||||
if background:
|
||||
return thread
|
||||
else:
|
||||
thread.join()
|
||||
found_engine = cls.is_version_downloaded(engine, version, system_os, cpu) # Check that engine downloaded
|
||||
if not found_engine:
|
||||
logger.error(f"Error downloading {engine}")
|
||||
return found_engine
|
||||
|
||||
thread.join()
|
||||
found_engine = cls.is_version_downloaded(engine, version, system_os, cpu) # Check that engine downloaded
|
||||
if not found_engine:
|
||||
logger.error(f"Error downloading {engine}")
|
||||
return found_engine
|
||||
|
||||
@classmethod
|
||||
def delete_engine_download(cls, engine, version, system_os=None, cpu=None):
|
||||
@@ -201,22 +189,16 @@ class EngineManager:
|
||||
|
||||
@classmethod
|
||||
def update_all_engines(cls):
|
||||
def engine_update_task(engine_class):
|
||||
logger.debug(f"Checking for updates to {engine_class.name()}")
|
||||
latest_version = engine_class.downloader().find_most_recent_version()
|
||||
|
||||
if not latest_version:
|
||||
logger.warning(f"Could not find most recent version of {engine.name()} to download")
|
||||
return
|
||||
|
||||
version_num = latest_version.get('version')
|
||||
if cls.is_version_downloaded(engine_class.name(), version_num):
|
||||
logger.debug(f"Latest version of {engine_class.name()} ({version_num}) already downloaded")
|
||||
return
|
||||
|
||||
# download the engine
|
||||
logger.info(f"Downloading latest version of {engine_class.name()} ({version_num})...")
|
||||
cls.download_engine(engine=engine_class.name(), version=version_num, background=True)
|
||||
def engine_update_task(engine):
|
||||
logger.debug(f"Checking for updates to {engine.name()}")
|
||||
latest_version = engine.downloader().find_most_recent_version()
|
||||
if latest_version:
|
||||
logger.debug(f"Latest version of {engine.name()} available: {latest_version.get('version')}")
|
||||
if not cls.is_version_downloaded(engine.name(), latest_version.get('version')):
|
||||
logger.info(f"Downloading latest version of {engine.name()}...")
|
||||
cls.download_engine(engine=engine.name(), version=latest_version['version'], background=True)
|
||||
else:
|
||||
logger.warning(f"Unable to get check for updates for {engine.name()}")
|
||||
|
||||
logger.info(f"Checking for updates for render engines...")
|
||||
threads = []
|
||||
@@ -226,19 +208,20 @@ class EngineManager:
|
||||
threads.append(thread)
|
||||
thread.start()
|
||||
|
||||
|
||||
@classmethod
|
||||
def create_worker(cls, renderer, input_path, output_path, engine_version=None, args=None, parent=None, name=None):
|
||||
|
||||
worker_class = cls.engine_with_name(renderer).worker_class()
|
||||
|
||||
# check to make sure we have versions installed
|
||||
all_versions = cls.all_versions_for_engine(renderer)
|
||||
all_versions = EngineManager.all_versions_for_engine(renderer)
|
||||
if not all_versions:
|
||||
raise FileNotFoundError(f"Cannot find any installed {renderer} engines")
|
||||
|
||||
# Find the path to the requested engine version or use default
|
||||
engine_path = None
|
||||
if engine_version and engine_version != 'latest':
|
||||
engine_path = None if engine_version else all_versions[0]['path']
|
||||
if engine_version:
|
||||
for ver in all_versions:
|
||||
if ver['version'] == engine_version:
|
||||
engine_path = ver['path']
|
||||
@@ -246,14 +229,11 @@ class EngineManager:
|
||||
|
||||
# Download the required engine if not found locally
|
||||
if not engine_path:
|
||||
download_result = cls.download_engine(renderer, engine_version)
|
||||
download_result = EngineManager.download_engine(renderer, engine_version)
|
||||
if not download_result:
|
||||
raise FileNotFoundError(f"Cannot download requested version: {renderer} {engine_version}")
|
||||
engine_path = download_result['path']
|
||||
logger.info("Engine downloaded. Creating worker.")
|
||||
else:
|
||||
logger.debug(f"Using latest engine version ({all_versions[0]['version']})")
|
||||
engine_path = all_versions[0]['path']
|
||||
|
||||
if not engine_path:
|
||||
raise FileNotFoundError(f"Cannot find requested engine version {engine_version}")
|
||||
@@ -263,7 +243,7 @@ class EngineManager:
|
||||
|
||||
@classmethod
|
||||
def engine_for_project_path(cls, path):
|
||||
_, extension = os.path.splitext(path)
|
||||
name, extension = os.path.splitext(path)
|
||||
extension = extension.lower().strip('.')
|
||||
for engine in cls.supported_engines():
|
||||
if extension in engine.supported_extensions():
|
||||
@@ -272,29 +252,6 @@ class EngineManager:
|
||||
return undefined_renderer_support[0]
|
||||
|
||||
|
||||
class EngineDownloadWorker(threading.Thread):
|
||||
def __init__(self, engine, version, system_os=None, cpu=None):
|
||||
super().__init__()
|
||||
self.engine = engine
|
||||
self.version = version
|
||||
self.system_os = system_os
|
||||
self.cpu = cpu
|
||||
|
||||
def run(self):
|
||||
existing_download = EngineManager.is_version_downloaded(self.engine, self.version, self.system_os, self.cpu)
|
||||
if existing_download:
|
||||
logger.info(f"Requested download of {self.engine} {self.version}, but local copy already exists")
|
||||
return existing_download
|
||||
|
||||
# Get the appropriate downloader class based on the engine type
|
||||
EngineManager.engine_with_name(self.engine).downloader().download_engine(
|
||||
self.version, download_location=EngineManager.engines_path, system_os=self.system_os, cpu=self.cpu,
|
||||
timeout=300)
|
||||
|
||||
# remove itself from the downloader list
|
||||
EngineManager.download_tasks.remove(self)
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
|
||||
|
||||
@@ -302,4 +259,4 @@ if __name__ == '__main__':
|
||||
# EngineManager.delete_engine_download('blender', '3.2.1', 'macos', 'a')
|
||||
EngineManager.engines_path = "/Users/brettwilliams/zordon-uploads/engines"
|
||||
# print(EngineManager.is_version_downloaded("ffmpeg", "6.0"))
|
||||
print(EngineManager.get_engines())
|
||||
print(EngineManager.all_engines())
|
||||
|
||||
@@ -90,7 +90,7 @@ class FFMPEGDownloader(EngineDownloader):
return releases

@classmethod
def all_versions(cls, system_os=None, cpu=None):
def __all_versions(cls, system_os=None, cpu=None):
system_os = system_os or current_system_os()
cpu = cpu or current_system_cpu()
versions_per_os = {'linux': cls.__get_linux_versions, 'macos': cls.__get_macos_versions,
@@ -131,14 +131,14 @@ class FFMPEGDownloader(EngineDownloader):
try:
system_os = system_os or current_system_os()
cpu = cpu or current_system_cpu()
return cls.all_versions(system_os, cpu)[0]
except (IndexError, requests.exceptions.RequestException) as e:
logger.error(f"Cannot get most recent version of ffmpeg: {e}")
return cls.__all_versions(system_os, cpu)[0]
except (IndexError, requests.exceptions.RequestException):
logger.error(f"Cannot get most recent version of ffmpeg")
return {}

@classmethod
def version_is_available_to_download(cls, version, system_os=None, cpu=None):
for ver in cls.all_versions(system_os, cpu):
for ver in cls.__all_versions(system_os, cpu):
if ver['version'] == version:
return ver
return None
@@ -149,7 +149,7 @@ class FFMPEGDownloader(EngineDownloader):
cpu = cpu or current_system_cpu()

# Verify requested version is available
found_version = [item for item in cls.all_versions(system_os, cpu) if item['version'] == version]
found_version = [item for item in cls.__all_versions(system_os, cpu) if item['version'] == version]
if not found_version:
logger.error(f"Cannot find FFMPEG version {version} for {system_os} and {cpu}")
return
@@ -182,5 +182,4 @@ if __name__ == "__main__":
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
# print(FFMPEGDownloader.download_engine('6.0', '/Users/brett/zordon-uploads/engines/'))
# print(FFMPEGDownloader.find_most_recent_version(system_os='linux'))
print(FFMPEGDownloader.download_engine(version='6.0', download_location='/Users/brett/zordon-uploads/engines/',
system_os='linux', cpu='x64'))
print(FFMPEGDownloader.download_engine(version='6.0', download_location='/Users/brett/zordon-uploads/engines/', system_os='linux', cpu='x64'))
@@ -5,6 +5,7 @@ from src.engines.core.base_engine import *


class FFMPEG(BaseRenderEngine):

binary_names = {'linux': 'ffmpeg', 'windows': 'ffmpeg.exe', 'macos': 'ffmpeg'}

@staticmethod
@@ -17,15 +18,11 @@ class FFMPEG(BaseRenderEngine):
from src.engines.ffmpeg.ffmpeg_worker import FFMPEGRenderWorker
return FFMPEGRenderWorker

def ui_options(self):
from src.engines.ffmpeg.ffmpeg_ui import FFMPEGUI
return FFMPEGUI.get_options(self)

@classmethod
def supported_extensions(cls):
help_text = (subprocess.check_output([cls().renderer_path(), '-h', 'full'], stderr=subprocess.STDOUT)
.decode('utf-8'))
found = re.findall(r'extensions that .* is allowed to access \(default "(.*)"', help_text)
found = re.findall('extensions that .* is allowed to access \(default "(.*)"', help_text)
found_extensions = set()
for match in found:
found_extensions.update(match.split(','))
@@ -36,7 +33,7 @@ class FFMPEG(BaseRenderEngine):
try:
ver_out = subprocess.check_output([self.renderer_path(), '-version'],
timeout=SUBPROCESS_TIMEOUT).decode('utf-8')
match = re.match(r".*version\s*([\w.*]+)\W*", ver_out)
match = re.match(".*version\s*(\S+)\s*Copyright", ver_out)
if match:
version = match.groups()[0]
except Exception as e:
@@ -50,8 +47,8 @@ class FFMPEG(BaseRenderEngine):
'ffprobe', '-v', 'quiet', '-print_format', 'json',
'-show_streams', '-select_streams', 'v', project_path
]
output = subprocess.check_output(cmd, stderr=subprocess.STDOUT, text=True)
video_info = json.loads(output)
result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
video_info = json.loads(result.stdout)

# Extract the necessary information
video_stream = video_info['streams'][0]
@@ -82,7 +79,7 @@ class FFMPEG(BaseRenderEngine):
def get_encoders(self):
raw_stdout = subprocess.check_output([self.renderer_path(), '-encoders'], stderr=subprocess.DEVNULL,
timeout=SUBPROCESS_TIMEOUT).decode('utf-8')
pattern = r'(?P<type>[VASFXBD.]{6})\s+(?P<name>\S{2,})\s+(?P<description>.*)'
pattern = '(?P<type>[VASFXBD.]{6})\s+(?P<name>\S{2,})\s+(?P<description>.*)'
encoders = [m.groupdict() for m in re.finditer(pattern, raw_stdout)]
return encoders

@@ -93,8 +90,8 @@ class FFMPEG(BaseRenderEngine):
def get_all_formats(self):
try:
formats_raw = subprocess.check_output([self.renderer_path(), '-formats'], stderr=subprocess.DEVNULL,
timeout=SUBPROCESS_TIMEOUT).decode('utf-8')
pattern = r'(?P<type>[DE]{1,2})\s+(?P<id>\S{2,})\s+(?P<name>.*)'
timeout=SUBPROCESS_TIMEOUT).decode('utf-8')
pattern = '(?P<type>[DE]{1,2})\s+(?P<id>\S{2,})\s+(?P<name>.*)'
all_formats = [m.groupdict() for m in re.finditer(pattern, formats_raw)]
return all_formats
except Exception as e:
@@ -118,13 +115,12 @@ class FFMPEG(BaseRenderEngine):

def get_frame_count(self, path_to_file):
raw_stdout = subprocess.check_output([self.renderer_path(), '-i', path_to_file, '-map', '0:v:0', '-c', 'copy',
'-f', 'null', '-'], stderr=subprocess.STDOUT,
timeout=SUBPROCESS_TIMEOUT).decode('utf-8')
'-f', 'null', '-'], stderr=subprocess.STDOUT,
timeout=SUBPROCESS_TIMEOUT).decode('utf-8')
match = re.findall(r'frame=\s*(\d+)', raw_stdout)
if match:
frame_number = int(match[-1])
return frame_number
return -1

def get_arguments(self):
help_text = (subprocess.check_output([self.renderer_path(), '-h', 'long'], stderr=subprocess.STDOUT)
@@ -155,4 +151,4 @@ class FFMPEG(BaseRenderEngine):


if __name__ == "__main__":
print(FFMPEG().get_all_formats())
print(FFMPEG().get_all_formats())
@@ -1,5 +0,0 @@
class FFMPEGUI:
@staticmethod
def get_options(instance):
options = []
return options
@@ -1,5 +1,6 @@
#!/usr/bin/env python3
import re
import subprocess

from src.engines.core.base_worker import BaseRenderWorker
from src.engines.ffmpeg.ffmpeg_engine import FFMPEG
@@ -16,7 +17,7 @@ class FFMPEGRenderWorker(BaseRenderWorker):

def generate_worker_subprocess(self):

cmd = [self.renderer_path, '-y', '-stats', '-i', self.input_path]
cmd = [self.engine.default_renderer_path(), '-y', '-stats', '-i', self.input_path]

# Resize frame
if self.args.get('x_resolution', None) and self.args.get('y_resolution', None):
@@ -28,7 +29,7 @@ class FFMPEGRenderWorker(BaseRenderWorker):
cmd.extend(raw_args.split(' '))

# Close with output path
cmd.extend(['-max_muxing_queue_size', '1024', self.output_path])
cmd.append(self.output_path)
return cmd

def percent_complete(self):
166
src/init.py
@@ -1,27 +1,22 @@
''' app/init.py '''
import logging
import multiprocessing
import os
import socket
import sys
import threading
import time
from collections import deque

from PyQt6.QtCore import QObject, pyqtSignal
from PyQt6.QtWidgets import QApplication

from .render_queue import RenderQueue
from .ui.main_window import MainWindow

from src.api.api_server import start_server
from src.api.preview_manager import PreviewManager
from src.api.serverproxy_manager import ServerProxyManager
from src.distributed_job_manager import DistributedJobManager
from src.engines.engine_manager import EngineManager
from src.render_queue import RenderQueue
from src.utilities.config import Config
from src.utilities.misc_helper import system_safe_path, current_system_cpu, current_system_os, current_system_os_version
from src.utilities.zeroconf_server import ZeroconfServer

logger = logging.getLogger()
from src.utilities.misc_helper import system_safe_path


def run(server_only=False) -> int:
def run() -> int:
"""
Initializes the application and runs it.

@@ -29,130 +24,47 @@ def run(server_only=False) -> int:
int: The exit status code.
"""

# setup logging
# Load Config YAML
config_dir = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), 'config')
Config.load_config(system_safe_path(os.path.join(config_dir, 'config.yaml')))

logging.basicConfig(format='%(asctime)s: %(levelname)s: %(module)s: %(message)s', datefmt='%d-%b-%y %H:%M:%S',
level=Config.server_log_level.upper())
logging.getLogger("requests").setLevel(logging.WARNING) # suppress noisy requests/urllib3 logging
logging.getLogger("urllib3").setLevel(logging.WARNING)

app: QApplication = QApplication(sys.argv)

# Start server in background
background_server = threading.Thread(target=start_server)
background_server.daemon = True
background_server.start()

# Setup logging for console ui
buffer_handler = __setup_buffer_handler() if not server_only else None

logger.info(f"Starting Zordon Render Server")
return_code = 0
try:
# Load Config YAML
Config.setup_config_dir()
Config.load_config(system_safe_path(os.path.join(Config.config_dir(), 'config.yaml')))

# configure default paths
EngineManager.engines_path = system_safe_path(
os.path.join(os.path.join(os.path.expanduser(Config.upload_folder),
'engines')))
os.makedirs(EngineManager.engines_path, exist_ok=True)
PreviewManager.storage_path = system_safe_path(
os.path.join(os.path.expanduser(Config.upload_folder), 'previews'))

# Debug info
logger.debug(f"Upload directory: {os.path.expanduser(Config.upload_folder)}")
logger.debug(f"Thumbs directory: {PreviewManager.storage_path}")
logger.debug(f"Engines directory: {EngineManager.engines_path}")

# Set up the RenderQueue object
RenderQueue.load_state(database_directory=system_safe_path(os.path.expanduser(Config.upload_folder)))
ServerProxyManager.subscribe_to_listener()
DistributedJobManager.subscribe_to_listener()

# check for updates for render engines if configured or on first launch
if Config.update_engines_on_launch or not EngineManager.get_engines():
EngineManager.update_all_engines()

# get hostname
local_hostname = socket.gethostname()
local_hostname = local_hostname + (".local" if not local_hostname.endswith(".local") else "")

# configure and start API server
api_server = threading.Thread(target=start_server, args=(local_hostname,))
api_server.daemon = True
api_server.start()

# start zeroconf server
ZeroconfServer.configure("_zordon._tcp.local.", local_hostname, Config.port_number)
ZeroconfServer.properties = {'system_cpu': current_system_cpu(),
'system_cpu_cores': multiprocessing.cpu_count(),
'system_os': current_system_os(),
'system_os_version': current_system_os_version()}
ZeroconfServer.start()
logger.info(f"Zordon Render Server started - Hostname: {local_hostname}")

RenderQueue.evaluation_inverval = Config.queue_eval_seconds
RenderQueue.start()

# start in gui or server only (cli) mode
logger.debug(f"Launching in {'server only' if server_only else 'GUI'} mode")
if server_only: # CLI only
api_server.join()
else: # GUI
return_code = __show_gui(buffer_handler)

except KeyboardInterrupt:
pass
except Exception as e:
logging.error(f"Unhandled exception: {e}")
return_code = 1
finally:
# shut down gracefully
logger.info(f"Zordon Render Server is preparing to shut down")
try:
RenderQueue.prepare_for_shutdown()
except Exception as e:
logger.exception(f"Exception during prepare for shutdown: {e}")
ZeroconfServer.stop()
logger.info(f"Zordon Render Server has shut down")
return sys.exit(return_code)


def __setup_buffer_handler():
# lazy load GUI frameworks
from PyQt6.QtCore import QObject, pyqtSignal

class BufferingHandler(logging.Handler, QObject):
new_record = pyqtSignal(str)

def __init__(self, capacity=100):
logging.Handler.__init__(self)
QObject.__init__(self)
self.buffer = deque(maxlen=capacity) # Define a buffer with a fixed capacity

def emit(self, record):
try:
msg = self.format(record)
self.buffer.append(msg) # Add message to the buffer
self.new_record.emit(msg) # Emit signal
except RuntimeError:
pass

def get_buffer(self):
return list(self.buffer) # Return a copy of the buffer

buffer_handler = BufferingHandler()
buffer_handler.setFormatter(logging.getLogger().handlers[0].formatter)
logger = logging.getLogger()
logger.addHandler(buffer_handler)
return buffer_handler


def __show_gui(buffer_handler):
# lazy load GUI frameworks
from PyQt6.QtWidgets import QApplication

# load application
app: QApplication = QApplication(sys.argv)

# configure main window
from src.ui.main_window import MainWindow
window: MainWindow = MainWindow()
window.buffer_handler = buffer_handler
window.show()

return app.exec()
return_code = app.exec()
RenderQueue.prepare_for_shutdown()
return sys.exit(return_code)


class BufferingHandler(logging.Handler, QObject):
new_record = pyqtSignal(str)

def __init__(self, capacity=100):
logging.Handler.__init__(self)
QObject.__init__(self)
self.buffer = deque(maxlen=capacity) # Define a buffer with a fixed capacity

def emit(self, record):
msg = self.format(record)
self.buffer.append(msg) # Add message to the buffer
self.new_record.emit(msg) # Emit signal

def get_buffer(self):
return list(self.buffer) # Return a copy of the buffer
@@ -2,13 +2,12 @@ import logging
import os
from datetime import datetime

from pubsub import pub
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm.exc import DetachedInstanceError

from src.engines.core.base_worker import Base
from src.utilities.status_utils import RenderStatus
from src.engines.engine_manager import EngineManager
from src.engines.core.base_worker import Base

logger = logging.getLogger()

@@ -18,9 +17,6 @@ class JobNotFoundError(Exception):
super().__init__(args)
self.job_id = job_id

def __str__(self):
return f"Cannot find job with ID: {self.job_id}"


class RenderQueue:
engine = None
@@ -28,46 +24,18 @@ class RenderQueue:
job_queue = []
maximum_renderer_instances = {'blender': 1, 'aerender': 1, 'ffmpeg': 4}
last_saved_counts = {}
is_running = False
__eval_thread = None
evaluation_inverval = 1

# --------------------------------------------
# Start / Stop Background Updates
# --------------------------------------------

@classmethod
def start(cls):
logger.debug("Starting render queue updates")
cls.is_running = True
cls.evaluate_queue()

@classmethod
def __local_job_status_changed(cls, job_id, old_status, new_status):
render_job = RenderQueue.job_with_id(job_id, none_ok=True)
if render_job and cls.is_running: # ignore changes from render jobs not in the queue yet
logger.debug(f"RenderQueue detected job {job_id} has changed from {old_status} -> {new_status}")
RenderQueue.evaluate_queue()

@classmethod
def stop(cls):
logger.debug("Stopping render queue updates")
cls.is_running = False

# --------------------------------------------
# Queue Management
# --------------------------------------------
def __init__(self):
pass

@classmethod
def add_to_render_queue(cls, render_job, force_start=False):
logger.info(f"Adding job to render queue: {render_job}")
logger.debug('Adding priority {} job to render queue: {}'.format(render_job.priority, render_job))
cls.job_queue.append(render_job)
if cls.is_running and force_start and render_job.status in (RenderStatus.NOT_STARTED, RenderStatus.SCHEDULED):
if force_start:
cls.start_job(render_job)
cls.session.add(render_job)
cls.save_state()
if cls.is_running:
cls.evaluate_queue()

@classmethod
def all_jobs(cls):
@@ -113,7 +81,6 @@ class RenderQueue:
cls.session = sessionmaker(bind=cls.engine)()
from src.engines.core.base_worker import BaseRenderWorker
cls.job_queue = cls.session.query(BaseRenderWorker).all()
pub.subscribe(cls.__local_job_status_changed, 'status_change')

@classmethod
def save_state(cls):
@@ -122,7 +89,6 @@ class RenderQueue:
@classmethod
def prepare_for_shutdown(cls):
logger.debug("Closing session")
cls.stop()
running_jobs = cls.jobs_with_status(RenderStatus.RUNNING) # cancel all running jobs
[cls.cancel_job(job) for job in running_jobs]
cls.save_state()
@@ -131,6 +97,9 @@ class RenderQueue:
@classmethod
def is_available_for_job(cls, renderer, priority=2):

if not EngineManager.all_versions_for_engine(renderer):
return False

instances = cls.renderer_instances()
higher_priority_jobs = [x for x in cls.running_jobs() if x.priority < priority]
max_allowed_instances = cls.maximum_renderer_instances.get(renderer, 1)
@@ -139,38 +108,35 @@ class RenderQueue:

@classmethod
def evaluate_queue(cls):
try:
not_started = cls.jobs_with_status(RenderStatus.NOT_STARTED, priority_sorted=True)
for job in not_started:
if cls.is_available_for_job(job.renderer, job.priority):
cls.start_job(job)
not_started = cls.jobs_with_status(RenderStatus.NOT_STARTED, priority_sorted=True)
for job in not_started:
if cls.is_available_for_job(job.renderer, job.priority):
cls.start_job(job)

scheduled = cls.jobs_with_status(RenderStatus.SCHEDULED, priority_sorted=True)
for job in scheduled:
if job.scheduled_start <= datetime.now():
logger.debug(f"Starting scheduled job: {job}")
cls.start_job(job)
scheduled = cls.jobs_with_status(RenderStatus.SCHEDULED, priority_sorted=True)
for job in scheduled:
if job.scheduled_start <= datetime.now():
logger.debug(f"Starting scheduled job: {job}")
cls.start_job(job)

if cls.last_saved_counts != cls.job_counts():
cls.save_state()
except DetachedInstanceError:
pass
if cls.last_saved_counts != cls.job_counts():
cls.save_state()

@classmethod
def start_job(cls, job):
logger.info(f'Starting job: {job}')
logger.info(f'Starting render: {job.name} - Priority {job.priority}')
job.start()
cls.save_state()

@classmethod
def cancel_job(cls, job):
logger.info(f'Cancelling job: {job}')
logger.info(f'Cancelling job ID: {job.id}')
job.stop()
return job.status == RenderStatus.CANCELLED

@classmethod
def delete_job(cls, job):
logger.info(f"Deleting job: {job}")
logger.info(f"Deleting job ID: {job.id}")
job.stop()
cls.job_queue.remove(job)
cls.session.delete(job)
@@ -21,17 +21,10 @@ from src.utilities.zeroconf_server import ZeroconfServer
class NewRenderJobForm(QWidget):
def __init__(self, project_path=None):
super().__init__()
self.notes_group = None
self.frame_rate_input = None
self.resolution_x_input = None
self.renderer_group = None
self.output_settings_group = None
self.resolution_y_input = None

self.project_path = project_path

# UI
self.project_group = None
self.load_file_group = None
self.current_engine_options = None
self.file_format_combo = None
self.renderer_options_layout = None
@@ -55,7 +48,7 @@ class NewRenderJobForm(QWidget):
self.priority_input = None
self.end_frame_input = None
self.start_frame_input = None
self.render_name_input = None
self.output_path_input = None
self.scene_file_input = None
self.scene_file_browse_button = None
self.job_name_input = None
@@ -80,41 +73,41 @@ class NewRenderJobForm(QWidget):
# Main Layout
main_layout = QVBoxLayout(self)

# Loading File Group
self.load_file_group = QGroupBox("Loading")
load_file_layout = QVBoxLayout(self.load_file_group)
# progress bar
progress_layout = QHBoxLayout()
self.process_progress_bar = QProgressBar()
self.process_progress_bar.setMinimum(0)
self.process_progress_bar.setMaximum(0)
self.process_label = QLabel("Processing")
progress_layout.addWidget(self.process_label)
progress_layout.addWidget(self.process_progress_bar)
load_file_layout.addLayout(progress_layout)
main_layout.addWidget(self.load_file_group)

# Project Group
self.project_group = QGroupBox("Project")
server_layout = QVBoxLayout(self.project_group)
# File Path
# Scene File Group
scene_file_group = QGroupBox("Project")
scene_file_layout = QVBoxLayout(scene_file_group)
scene_file_picker_layout = QHBoxLayout()
self.scene_file_input = QLineEdit()
self.scene_file_input.setText(self.project_path)
self.scene_file_browse_button = QPushButton("Browse...")
self.scene_file_browse_button.clicked.connect(self.browse_scene_file)
scene_file_picker_layout.addWidget(QLabel("File:"))
scene_file_picker_layout.addWidget(self.scene_file_input)
scene_file_picker_layout.addWidget(self.scene_file_browse_button)
server_layout.addLayout(scene_file_picker_layout)
scene_file_layout.addLayout(scene_file_picker_layout)
# progress bar
progress_layout = QHBoxLayout()
self.process_progress_bar = QProgressBar()
self.process_progress_bar.setMinimum(0)
self.process_progress_bar.setMaximum(0)
self.process_progress_bar.setHidden(True)
self.process_label = QLabel("Processing")
self.process_label.setHidden(True)
progress_layout.addWidget(self.process_label)
progress_layout.addWidget(self.process_progress_bar)
scene_file_layout.addLayout(progress_layout)
main_layout.addWidget(scene_file_group)

# Server Group
# Server List
self.server_group = QGroupBox("Server")
server_layout = QVBoxLayout(self.server_group)
server_list_layout = QHBoxLayout()
server_list_layout.setSpacing(0)
self.server_input = QComboBox()
server_list_layout.addWidget(QLabel("Hostname:"), 1)
server_list_layout.addWidget(self.server_input, 3)
server_layout.addLayout(server_list_layout)
main_layout.addWidget(self.project_group)
main_layout.addWidget(self.server_group)
self.update_server_list()
# Priority
priority_layout = QHBoxLayout()
@@ -136,11 +129,11 @@ class NewRenderJobForm(QWidget):
self.output_settings_group = QGroupBox("Output Settings")
output_settings_layout = QVBoxLayout(self.output_settings_group)
# output path
render_name_layout = QHBoxLayout()
render_name_layout.addWidget(QLabel("Render name:"))
self.render_name_input = QLineEdit()
render_name_layout.addWidget(self.render_name_input)
output_settings_layout.addLayout(render_name_layout)
output_path_layout = QHBoxLayout()
output_path_layout.addWidget(QLabel("Render name:"))
self.output_path_input = QLineEdit()
output_path_layout.addWidget(self.output_path_input)
output_settings_layout.addLayout(output_path_layout)
# file format
file_format_layout = QHBoxLayout()
file_format_layout.addWidget(QLabel("Format:"))
@@ -192,7 +185,6 @@ class NewRenderJobForm(QWidget):
# Version
renderer_layout.addWidget(QLabel("Version:"))
self.renderer_version_combo = QComboBox()
self.renderer_version_combo.addItem('latest')
renderer_layout.addWidget(self.renderer_version_combo)
renderer_group_layout.addLayout(renderer_layout)
# dynamic options
@@ -243,7 +235,7 @@ class NewRenderJobForm(QWidget):

def update_renderer_info(self):
# get the renderer info and add them all to the ui
self.renderer_info = self.server_proxy.get_renderer_info(response_type='full')
self.renderer_info = self.server_proxy.get_renderer_info()
self.renderer_type.addItems(self.renderer_info.keys())
# select the best renderer for the file type
engine = EngineManager.engine_for_project_path(self.project_path)
@@ -255,7 +247,6 @@ class NewRenderJobForm(QWidget):
# load the version numbers
current_renderer = self.renderer_type.currentText().lower() or self.renderer_type.itemText(0)
self.renderer_version_combo.clear()
self.renderer_version_combo.addItem('latest')
self.file_format_combo.clear()
if current_renderer:
renderer_vers = [version_info['version'] for version_info in self.renderer_info[current_renderer]['versions']]
@@ -281,7 +272,7 @@ class NewRenderJobForm(QWidget):

output_name, _ = os.path.splitext(os.path.basename(self.scene_file_input.text()))
output_name = output_name.replace(' ', '_')
self.render_name_input.setText(output_name)
self.output_path_input.setText(output_name)
file_name = self.scene_file_input.text()

# setup bg worker
@@ -292,7 +283,7 @@ class NewRenderJobForm(QWidget):
def browse_output_path(self):
directory = QFileDialog.getExistingDirectory(self, "Select Output Directory")
if directory:
self.render_name_input.setText(directory)
self.output_path_input.setText(directory)

def args_help_button_clicked(self):
url = (f'http://{self.server_proxy.hostname}:{self.server_proxy.port}/api/renderer/'
@@ -316,8 +307,11 @@ class NewRenderJobForm(QWidget):
self.renderer_type.setCurrentIndex(0) #todo: find out why we don't have renderer info yet
# not ideal but if we don't have the renderer info we have to pick something

self.output_path_input.setText(os.path.basename(input_path))

# cleanup progress UI
self.load_file_group.setHidden(True)
self.process_progress_bar.setHidden(True)
self.process_label.setHidden(True)
self.toggle_renderer_enablement(True)

# Load scene data
@@ -348,10 +342,10 @@ class NewRenderJobForm(QWidget):
# Dynamic Engine Options
clear_layout(self.renderer_options_layout) # clear old options
# dynamically populate option list
self.current_engine_options = engine().ui_options()
self.current_engine_options = engine().get_options()
for option in self.current_engine_options:
h_layout = QHBoxLayout()
label = QLabel(option['name'].replace('_', ' ').capitalize() + ':')
label = QLabel(option['name'].capitalize() + ':')
h_layout.addWidget(label)
if option.get('options'):
combo_box = QComboBox()
@@ -362,12 +356,12 @@ class NewRenderJobForm(QWidget):
text_box = QLineEdit()
h_layout.addWidget(text_box)
self.renderer_options_layout.addLayout(h_layout)
except AttributeError:
except AttributeError as e:
pass

def toggle_renderer_enablement(self, enabled=False):
"""Toggle on/off all the render settings"""
self.project_group.setHidden(not enabled)
self.server_group.setHidden(not enabled)
self.output_settings_group.setHidden(not enabled)
self.renderer_group.setHidden(not enabled)
self.notes_group.setHidden(not enabled)
@@ -448,17 +442,15 @@ class SubmitWorker(QThread):
hostname = self.window.server_input.currentText()
job_json = {'owner': psutil.Process().username() + '@' + socket.gethostname(),
'renderer': self.window.renderer_type.currentText().lower(),
'engine_version': self.window.renderer_version_combo.currentText(),
'args': {'raw': self.window.raw_args.text(),
'export_format': self.window.file_format_combo.currentText()},
'output_path': self.window.render_name_input.text(),
'renderer_version': self.window.renderer_version_combo.currentText(),
'args': {'raw': self.window.raw_args.text()},
'output_path': self.window.output_path_input.text(),
'start_frame': self.window.start_frame_input.value(),
'end_frame': self.window.end_frame_input.value(),
'priority': self.window.priority_input.currentIndex() + 1,
'notes': self.window.notes_input.toPlainText(),
'enable_split_jobs': self.window.enable_splitjobs.isChecked(),
'split_jobs_same_os': self.window.splitjobs_same_os.isChecked(),
'name': self.window.render_name_input.text()}
'split_jobs_same_os': self.window.splitjobs_same_os.isChecked()}

# get the dynamic args
for i in range(self.window.renderer_options_layout.count()):
@@ -487,8 +479,7 @@ class SubmitWorker(QThread):
for cam in selected_cameras:
job_copy = copy.deepcopy(job_json)
job_copy['args']['camera'] = cam
job_copy['name'] = job_copy['name'].replace(' ', '-') + "_" + cam.replace(' ', '')
job_copy['output_path'] = job_copy['name']
job_copy['name'] = pathlib.Path(input_path).stem.replace(' ', '_') + "-" + cam.replace(' ', '')
job_list.append(job_copy)
else:
job_list = [job_json]
@@ -1,3 +1,4 @@
import sys
import logging

from PyQt6.QtGui import QFont
@@ -15,10 +16,7 @@ class QSignalHandler(logging.Handler, QObject):

def emit(self, record):
msg = self.format(record)
try:
self.new_record.emit(msg) # Emit signal
except RuntimeError:
pass
self.new_record.emit(msg) # Emit signal


class ConsoleWindow(QMainWindow):
@@ -4,7 +4,6 @@ import subprocess
import sys
import threading

from PyQt6.QtCore import QTimer
from PyQt6.QtWidgets import (
QMainWindow, QWidget, QVBoxLayout, QPushButton, QTableWidget, QTableWidgetItem, QHBoxLayout, QAbstractItemView,
QHeaderView, QProgressBar, QLabel, QMessageBox
@@ -12,7 +11,7 @@ from PyQt6.QtWidgets import (

from src.api.server_proxy import RenderServerProxy
from src.engines.engine_manager import EngineManager
from src.utilities.misc_helper import is_localhost, launch_url
from src.utilities.misc_helper import is_localhost


class EngineBrowserWindow(QMainWindow):
@@ -29,7 +28,6 @@ class EngineBrowserWindow(QMainWindow):
self.setGeometry(100, 100, 500, 300)
self.engine_data = []
self.initUI()
self.init_timer()

def initUI(self):
# Central widget
@@ -84,12 +82,6 @@ class EngineBrowserWindow(QMainWindow):

self.update_download_status()

def init_timer(self):
# Set up the timer
self.timer = QTimer(self)
self.timer.timeout.connect(self.update_download_status)
self.timer.start(1000)

def update_table(self):

def update_table_worker():
@@ -98,7 +90,7 @@ class EngineBrowserWindow(QMainWindow):
return

table_data = [] # convert the data into a flat list
for _, engine_data in raw_server_data.items():
for engine_name, engine_data in raw_server_data.items():
table_data.extend(engine_data['versions'])
self.engine_data = table_data

@@ -132,19 +124,21 @@ class EngineBrowserWindow(QMainWindow):
hide_progress = not bool(running_tasks)
self.progress_bar.setHidden(hide_progress)
self.progress_label.setHidden(hide_progress)
# Update the status labels
if len(EngineManager.download_tasks) == 0:
new_status = ""
elif len(EngineManager.download_tasks) == 1:
task = EngineManager.download_tasks[0]
new_status = f"Downloading {task.engine.capitalize()} {task.version}..."
else:
new_status = f"Downloading {len(EngineManager.download_tasks)} engines..."
self.progress_label.setText(new_status)

# todo: update progress bar with status
self.progress_label.setText(f"Downloading {len(running_tasks)} engines")

def launch_button_click(self):
engine_info = self.engine_data[self.table_widget.currentRow()]
launch_url(engine_info['path'])
path = engine_info['path']
if sys.platform.startswith('darwin'):
subprocess.run(['open', path])
elif sys.platform.startswith('win32'):
os.startfile(path)
elif sys.platform.startswith('linux'):
subprocess.run(['xdg-open', path])
else:
raise OSError("Unsupported operating system")

def install_button_click(self):
self.update_download_status()
@@ -1,14 +1,13 @@
''' app/ui/main_window.py '''
import datetime
import io
import logging
import os
import socket
import subprocess
import sys
import threading
import time

import PIL
from PIL import Image
from PyQt6.QtCore import Qt, QByteArray, QBuffer, QIODevice, QThread
from PyQt6.QtGui import QPixmap, QImage, QFont, QIcon
@@ -16,6 +15,7 @@ from PyQt6.QtWidgets import QMainWindow, QWidget, QHBoxLayout, QListWidget, QTab
QTableWidgetItem, QLabel, QVBoxLayout, QHeaderView, QMessageBox, QGroupBox, QPushButton, QListWidgetItem, \
QFileDialog

from src.api.server_proxy import RenderServerProxy
from src.render_queue import RenderQueue
from src.utilities.misc_helper import get_time_elapsed, resources_dir, is_localhost
from src.utilities.status_utils import RenderStatus
@@ -29,7 +29,6 @@ from .widgets.proportional_image_label import ProportionalImageLabel
from .widgets.statusbar import StatusBar
from .widgets.toolbar import ToolBar
from src.api.serverproxy_manager import ServerProxyManager
from src.utilities.misc_helper import launch_url

logger = logging.getLogger()

@@ -49,11 +48,6 @@ class MainWindow(QMainWindow):
super().__init__()

# Load the queue
self.job_list_view = None
self.server_info_ram = None
self.server_info_cpu = None
self.server_info_os = None
self.server_info_hostname = None
self.engine_browser_window = None
self.server_info_group = None
self.current_hostname = None
@@ -73,7 +67,7 @@ class MainWindow(QMainWindow):
# Create a QLabel widget to display the image
self.image_label = ProportionalImageLabel()
self.image_label.setMaximumSize(700, 500)
self.image_label.setFixedHeight(300)
self.image_label.setFixedHeight(500)
self.image_label.setAlignment(Qt.AlignmentFlag.AlignTop | Qt.AlignmentFlag.AlignHCenter)
self.load_image_path(os.path.join(resources_dir(), 'Rectangle.png'))

@@ -183,13 +177,8 @@ class MainWindow(QMainWindow):

def __background_update(self):
while True:
try:
self.update_servers()
self.fetch_jobs()
except RuntimeError:
pass
except Exception as e:
logger.error(f"Uncaught exception in background update: {e}")
self.update_servers()
self.fetch_jobs()
time.sleep(0.5)

def closeEvent(self, event):
@@ -289,25 +278,15 @@ class MainWindow(QMainWindow):

def fetch_preview(job_id):
try:
default_image_path = "error.png"
before_fetch_hostname = self.current_server_proxy.hostname

response = self.current_server_proxy.request(f'job/{job_id}/thumbnail?size=big')
if response.ok:
try:
with io.BytesIO(response.content) as image_data_stream:
image = Image.open(image_data_stream)
if self.current_server_proxy.hostname == before_fetch_hostname and job_id == \
self.selected_job_ids()[0]:
self.load_image_data(image)
return
except PIL.UnidentifiedImageError:
default_image_path = response.text
else:
default_image_path = default_image_path or response.text

self.load_image_path(os.path.join(resources_dir(), default_image_path))

import io
image_data = response.content
image = Image.open(io.BytesIO(image_data))
if self.current_server_proxy.hostname == before_fetch_hostname and job_id == \
self.selected_job_ids()[0]:
self.load_image_data(image)
except ConnectionError as e:
logger.error(f"Connection error fetching image: {e}")
except Exception as e:
@@ -350,15 +329,12 @@ class MainWindow(QMainWindow):
self.topbar.actions_call['Open Files'].setVisible(False)

def selected_job_ids(self):
try:
selected_rows = self.job_list_view.selectionModel().selectedRows()
job_ids = []
for selected_row in selected_rows:
id_item = self.job_list_view.item(selected_row.row(), 0)
job_ids.append(id_item.text())
return job_ids
except AttributeError:
return []
selected_rows = self.job_list_view.selectionModel().selectedRows()
job_ids = []
for selected_row in selected_rows:
id_item = self.job_list_view.item(selected_row.row(), 0)
job_ids.append(id_item.text())
return job_ids

def refresh_job_headers(self):
self.job_list_view.setHorizontalHeaderLabels(["ID", "Name", "Renderer", "Priority", "Status",
@@ -377,36 +353,30 @@ class MainWindow(QMainWindow):

def load_image_path(self, image_path):
# Load and set the image using QPixmap
try:
pixmap = QPixmap(image_path)
if not pixmap:
logger.error("Error loading image")
return
self.image_label.setPixmap(pixmap)
except Exception as e:
logger.error(f"Error loading image path: {e}")
pixmap = QPixmap(image_path)
if not pixmap:
logger.error("Error loading image")
return
self.image_label.setPixmap(pixmap)

def load_image_data(self, pillow_image):
try:
# Convert the Pillow Image to a QByteArray (byte buffer)
byte_array = QByteArray()
buffer = QBuffer(byte_array)
buffer.open(QIODevice.OpenModeFlag.WriteOnly)
pillow_image.save(buffer, "PNG")
buffer.close()
# Convert the Pillow Image to a QByteArray (byte buffer)
byte_array = QByteArray()
buffer = QBuffer(byte_array)
buffer.open(QIODevice.OpenModeFlag.WriteOnly)
pillow_image.save(buffer, "PNG")
buffer.close()

# Create a QImage from the QByteArray
image = QImage.fromData(byte_array)
# Create a QImage from the QByteArray
image = QImage.fromData(byte_array)

# Create a QPixmap from the QImage
pixmap = QPixmap.fromImage(image)
# Create a QPixmap from the QImage
pixmap = QPixmap.fromImage(image)

if not pixmap:
logger.error("Error loading image")
return
self.image_label.setPixmap(pixmap)
except Exception as e:
logger.error(f"Error loading image data: {e}")
if not pixmap:
logger.error("Error loading image")
return
self.image_label.setPixmap(pixmap)

def update_servers(self):
found_servers = list(set(ZeroconfServer.found_hostnames() + self.added_hostnames))
@@ -431,7 +401,7 @@ class MainWindow(QMainWindow):
for hostname in found_servers:
if hostname not in current_server_list:
properties = ZeroconfServer.get_hostname_properties(hostname)
image_path = os.path.join(resources_dir(), f"{properties.get('system_os', 'Monitor')}.png")
image_path = os.path.join(resources_dir(), 'icons', f"{properties.get('system_os', 'Monitor')}.png")
list_widget = QListWidgetItem(QIcon(image_path), hostname)
self.server_list_view.addItem(list_widget)

@@ -468,22 +438,23 @@ class MainWindow(QMainWindow):

# Top Toolbar Buttons
self.topbar.add_button(
"Console", f"{resources_directory}/Console.png", self.open_console_window)
"New Job", f"{resources_directory}/icons/AddProduct.png", self.new_job)
self.topbar.add_button(
"Engines", f"{resources_directory}/SoftwareInstaller.png", self.engine_browser)
"Engines", f"{resources_directory}/icons/SoftwareInstaller.png", self.engine_browser)
self.topbar.add_button(
"Console", f"{resources_directory}/icons/Console.png", self.open_console_window)
self.topbar.add_separator()
self.topbar.add_button(
"Stop Job", f"{resources_directory}/StopSign.png", self.stop_job)
"Stop Job", f"{resources_directory}/icons/StopSign.png", self.stop_job)
self.topbar.add_button(
"Delete Job", f"{resources_directory}/Trash.png", self.delete_job)
"Delete Job", f"{resources_directory}/icons/Trash.png", self.delete_job)
self.topbar.add_button(
"Render Log", f"{resources_directory}/Document.png", self.job_logs)
"Render Log", f"{resources_directory}/icons/Document.png", self.job_logs)
self.topbar.add_button(
"Download", f"{resources_directory}/Download.png", self.download_files)
"Download", f"{resources_directory}/icons/Download.png", self.download_files)
self.topbar.add_button(
"Open Files", f"{resources_directory}/SearchFolder.png", self.open_files)
self.topbar.add_button(
"New Job", f"{resources_directory}/AddProduct.png", self.new_job)
"Open Files", f"{resources_directory}/icons/SearchFolder.png", self.open_files)

self.addToolBar(Qt.ToolBarArea.TopToolBarArea, self.topbar)

# -- Toolbar Buttons -- #
@@ -575,7 +546,15 @@ class MainWindow(QMainWindow):
for job_id in job_ids:
job_info = self.current_server_proxy.get_job_info(job_id)
path = os.path.dirname(job_info['output_path'])
launch_url(path)

if sys.platform.startswith('darwin'):
subprocess.run(['open', path])
elif sys.platform.startswith('win32'):
os.startfile(path)
elif sys.platform.startswith('linux'):
subprocess.run(['xdg-open', path])
else:
raise OSError("Unsupported operating system")

def new_job(self) -> None:
1
src/ui/widgets/dialog.py
Normal file
@@ -0,0 +1 @@
''' app/ui/widgets/dialog.py '''
@@ -9,7 +9,6 @@ from PyQt6.QtGui import QPixmap
from PyQt6.QtWidgets import QStatusBar, QLabel

from src.api.server_proxy import RenderServerProxy
from src.engines.engine_manager import EngineManager
from src.utilities.misc_helper import resources_dir


@@ -29,26 +28,17 @@ class StatusBar(QStatusBar):
proxy = RenderServerProxy(socket.gethostname())
proxy.start_background_update()
image_names = {'Ready': 'GreenCircle.png', 'Offline': "RedSquare.png"}
last_update = None

# Check for status change every 1s on background thread
while True:
try:
# update status label - get download status
new_status = proxy.status()
if EngineManager.download_tasks:
if len(EngineManager.download_tasks) == 1:
task = EngineManager.download_tasks[0]
new_status = f"{new_status} | Downloading {task.engine.capitalize()} {task.version}..."
else:
new_status = f"{new_status} | Downloading {len(EngineManager.download_tasks)} engines"
self.messageLabel.setText(new_status)

# update status image
new_status = proxy.status()
if new_status is not last_update:
new_image_name = image_names.get(new_status, 'Synchronize.png')
new_image_path = os.path.join(resources_dir(), new_image_name)
self.label.setPixmap((QPixmap(new_image_path).scaled(16, 16, Qt.AspectRatioMode.KeepAspectRatio)))
except RuntimeError: # ignore runtime errors during shutdown
pass
image_path = os.path.join(resources_dir(), 'icons', new_image_name)
self.label.setPixmap((QPixmap(image_path).scaled(16, 16, Qt.AspectRatioMode.KeepAspectRatio)))
self.messageLabel.setText(new_status)
last_update = new_status
time.sleep(1)

background_thread = threading.Thread(target=background_update,)
@@ -57,7 +47,7 @@ class StatusBar(QStatusBar):

# Create a label that holds an image
self.label = QLabel()
image_path = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(__file__))), 'resources',
image_path = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(__file__))), 'resources', 'icons',
'RedSquare.png')
pixmap = (QPixmap(image_path).scaled(16, 16, Qt.AspectRatioMode.KeepAspectRatio))
self.label.setPixmap(pixmap)

@@ -1,78 +0,0 @@
import concurrent.futures
import os
import time
import logging

logger = logging.getLogger()


def cpu_workload(n):
# Simple arithmetic operation for workload
while n > 0:
n -= 1
return n


def cpu_benchmark(duration_seconds=10):
# Determine the number of available CPU cores
num_cores = os.cpu_count()

# Calculate workload per core, assuming a large number for the workload
workload_per_core = 10000000

# Record start time
start_time = time.time()

# Use ProcessPoolExecutor to utilize all CPU cores
with concurrent.futures.ProcessPoolExecutor() as executor:
# Launching tasks for each core
futures = [executor.submit(cpu_workload, workload_per_core) for _ in range(num_cores)]

# Wait for all futures to complete, with a timeout to limit the benchmark duration
concurrent.futures.wait(futures, timeout=duration_seconds)

# Record end time
end_time = time.time()

# Calculate the total number of operations (workload) done by all cores
total_operations = workload_per_core * num_cores
# Calculate the total time taken
total_time = end_time - start_time
# Calculate operations per second as the score
score = total_operations / total_time
score = score * 0.0001

return int(score)


def disk_io_benchmark(file_size_mb=100, filename='benchmark_test_file'):
write_speed = None
read_speed = None

# Measure write speed
start_time = time.time()
with open(filename, 'wb') as f:
f.write(os.urandom(file_size_mb * 1024 * 1024)) # Write random bytes to file
end_time = time.time()
write_time = end_time - start_time
write_speed = file_size_mb / write_time

# Measure read speed
start_time = time.time()
with open(filename, 'rb') as f:
content = f.read()
end_time = time.time()
read_time = end_time - start_time
read_speed = file_size_mb / read_time

# Cleanup
os.remove(filename)

logger.debug(f"Disk Write Speed: {write_speed:.2f} MB/s")
logger.debug(f"Disk Read Speed: {read_speed:.2f} MB/s")
return write_speed, read_speed


if __name__ == '__main__':
print(cpu_benchmark())
print(disk_io_benchmark())
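
Two caveats about this (now removed) benchmark are worth recording. First, `concurrent.futures.wait` with a timeout returns early but does not cancel already-running workers, and the score still assumes every task finished, so a triggered timeout quietly inflates the ops/sec figure. Second, the write test times buffered writes only; flushing to disk before stopping the clock gives a truer number. A minimal sketch of the flushed variant (same parameters as above, not the project's code):

```python
import os
import time

def timed_write(filename, file_size_mb):
    start = time.time()
    with open(filename, 'wb') as f:
        f.write(os.urandom(file_size_mb * 1024 * 1024))
        f.flush()
        os.fsync(f.fileno())  # force buffered data to disk before stopping the clock
    return file_size_mb / (time.time() - start)  # MB/s
```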
@@ -1,6 +1,5 @@
import os
import yaml
from src.utilities.misc_helper import current_system_os, copy_directory_contents


class Config:
@@ -10,7 +9,7 @@ class Config:
max_content_path = 100000000
server_log_level = 'debug'
log_buffer_length = 250
worker_process_timeout = 120
subjob_connection_timeout = 120
flask_log_level = 'error'
flask_debug_enable = False
queue_eval_seconds = 1
@@ -28,47 +27,10 @@ class Config:
cls.max_content_path = cfg.get('max_content_path', cls.max_content_path)
cls.server_log_level = cfg.get('server_log_level', cls.server_log_level)
cls.log_buffer_length = cfg.get('log_buffer_length', cls.log_buffer_length)
cls.worker_process_timeout = cfg.get('worker_process_timeout', cls.worker_process_timeout)
cls.subjob_connection_timeout = cfg.get('subjob_connection_timeout', cls.subjob_connection_timeout)
cls.flask_log_level = cfg.get('flask_log_level', cls.flask_log_level)
cls.flask_debug_enable = cfg.get('flask_debug_enable', cls.flask_debug_enable)
cls.queue_eval_seconds = cfg.get('queue_eval_seconds', cls.queue_eval_seconds)
cls.port_number = cfg.get('port_number', cls.port_number)
cls.enable_split_jobs = cfg.get('enable_split_jobs', cls.enable_split_jobs)
cls.download_timeout_seconds = cfg.get('download_timeout_seconds', cls.download_timeout_seconds)

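For reference, the loader above maps top-level config keys straight onto class attributes, falling back to the class defaults. Assuming the file is parsed with `yaml.safe_load` (the `yaml` import suggests so, but the loading call is not shown in this hunk), a hypothetical config would parse like this; key names come from the hunk above, values are illustrative:

```python
import yaml

# Hypothetical config contents; every key mirrors an attribute read above.
example = yaml.safe_load("""
server_log_level: debug
log_buffer_length: 250
worker_process_timeout: 120
subjob_connection_timeout: 120
flask_log_level: error
flask_debug_enable: false
queue_eval_seconds: 1
port_number: 8080
enable_split_jobs: true
""")
assert example['flask_debug_enable'] is False
```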
@classmethod
def config_dir(cls):
# Set up the config path
if current_system_os() == 'macos':
local_config_path = os.path.expanduser('~/Library/Application Support/Zordon')
elif current_system_os() == 'windows':
local_config_path = os.path.join(os.environ['APPDATA'], 'Zordon')
else:
local_config_path = os.path.expanduser('~/.config/Zordon')
return local_config_path

@classmethod
def setup_config_dir(cls):
# Set up the config path
local_config_dir = cls.config_dir()
if os.path.exists(local_config_dir):
return

try:
# Create the local configuration directory
os.makedirs(local_config_dir)

# Determine the template path
resource_environment_path = os.environ.get('RESOURCEPATH')
if resource_environment_path:
template_path = os.path.join(resource_environment_path, 'config')
else:
template_path = os.path.join(
os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), 'config')

# Copy contents from the template to the local configuration directory
copy_directory_contents(template_path, local_config_dir)

except Exception as e:
print(f"An error occurred while setting up the config directory: {e}")
raise

@@ -4,10 +4,9 @@ from src.engines.ffmpeg.ffmpeg_engine import FFMPEG

def image_sequence_to_video(source_glob_pattern, output_path, framerate=24, encoder="prores_ks", profile=4,
start_frame=1):
subprocess.run([FFMPEG.default_renderer_path(), "-framerate", str(framerate), "-start_number",
str(start_frame), "-i", f"{source_glob_pattern}", "-c:v", encoder, "-profile:v", str(profile),
'-pix_fmt', 'yuva444p10le', output_path], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
check=True)
subprocess.run([FFMPEG.default_renderer_path(), "-framerate", str(framerate), "-start_number", str(start_frame), "-i",
f"{source_glob_pattern}", "-c:v", encoder, "-profile:v", str(profile), '-pix_fmt', 'yuva444p10le',
output_path], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, check=True)


def save_first_frame(source_path, dest_path, max_width=1280):

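One note on the helper above: despite the parameter name, ffmpeg's image2 demuxer expects a printf-style sequence pattern by default, and a true shell-style glob would also need `-pattern_type glob`. A hypothetical call, assuming frames named frame_0001.png, frame_0002.png, ...:

```python
# Illustrative usage only; the pattern below is printf-style, not a glob.
image_sequence_to_video("frame_%04d.png", "output.mov", framerate=24,
                        encoder="prores_ks", profile=4, start_frame=1)
```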
@@ -1,9 +1,7 @@
import logging
import os
import platform
import shutil
import socket
import string
import subprocess
from datetime import datetime

@@ -11,27 +9,14 @@ logger = logging.getLogger()


def launch_url(url):
logger = logging.getLogger(__name__)

if shutil.which('xdg-open'):
opener = 'xdg-open'
elif shutil.which('open'):
opener = 'open'
elif shutil.which('cmd'):
opener = 'start'
if subprocess.run(['which', 'xdg-open'], capture_output=True).returncode == 0:
subprocess.run(['xdg-open', url]) # linux
elif subprocess.run(['which', 'open'], capture_output=True).returncode == 0:
subprocess.run(['open', url]) # macos
elif subprocess.run(['which', 'start'], capture_output=True).returncode == 0:
subprocess.run(['start', url]) # windows - need to validate this works
else:
error_message = f"No valid launchers found to launch URL: {url}"
logger.error(error_message)
raise OSError(error_message)

try:
if opener == 'start':
# For Windows, use 'cmd /c start'
subprocess.run(['cmd', '/c', 'start', url], shell=False)
else:
subprocess.run([opener, url])
except Exception as e:
logger.error(f"Failed to launch URL: {url}. Error: {e}")
logger.error(f"No valid launchers found to launch url: {url}")


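A caveat on the new `which`-based probing above: `which` itself is usually absent on Windows (so the `subprocess.run(['which', ...])` call can raise rather than return nonzero), and `start` is a cmd builtin that `which` cannot find, so the Windows branch likely never fires; the removed `cmd /c start` path handled that case. The standard library offers a portable alternative; a minimal sketch, not the project's current implementation:

```python
import webbrowser

def launch_url_portable(url):
    # webbrowser.open() picks the platform's default opener and returns
    # False when no usable launcher was found.
    if not webbrowser.open(url):
        raise OSError(f"No valid launchers found to launch URL: {url}")
```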
def file_exists_in_mounts(filepath):
@@ -49,9 +34,9 @@ def file_exists_in_mounts(filepath):
path = os.path.normpath(path)
components = []
while True:
path, comp = os.path.split(path)
if comp:
components.append(comp)
path, component = os.path.split(path)
if component:
components.append(component)
else:
if path:
components.append(path)
@@ -77,17 +62,20 @@ def file_exists_in_mounts(filepath):

def get_time_elapsed(start_time=None, end_time=None):

from string import Template

class DeltaTemplate(Template):
delimiter = "%"

def strfdelta(tdelta, fmt='%H:%M:%S'):
days = tdelta.days
d = {"D": tdelta.days}
hours, rem = divmod(tdelta.seconds, 3600)
minutes, seconds = divmod(rem, 60)

# Using f-strings for formatting
formatted_str = fmt.replace('%D', f'{days}')
formatted_str = formatted_str.replace('%H', f'{hours:02d}')
formatted_str = formatted_str.replace('%M', f'{minutes:02d}')
formatted_str = formatted_str.replace('%S', f'{seconds:02d}')
return formatted_str
d["H"] = '{:02d}'.format(hours)
d["M"] = '{:02d}'.format(minutes)
d["S"] = '{:02d}'.format(seconds)
t = DeltaTemplate(fmt)
return t.substitute(**d)
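
Both versions of `strfdelta` above should produce identical output; note that hours are derived from `tdelta.seconds`, so they roll over at 24 and days must be printed via `%D`. A quick sanity check with illustrative values:

```python
from datetime import timedelta

delta = timedelta(days=1, hours=3, minutes=7, seconds=9)
print(strfdelta(delta, '%D days %H:%M:%S'))  # -> 1 days 03:07:09
```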

# calculate elapsed time
elapsed_time = None
@@ -105,7 +93,7 @@ def get_time_elapsed(start_time=None, end_time=None):
def get_file_size_human(file_path):
size_in_bytes = os.path.getsize(file_path)

# Convert size to a human-readable format
# Convert size to a human readable format
if size_in_bytes < 1024:
return f"{size_in_bytes} B"
elif size_in_bytes < 1024 ** 2:
@@ -139,24 +127,15 @@ def current_system_cpu():


def resources_dir():
resource_environment_path = os.environ.get('RESOURCEPATH', None)
if resource_environment_path: # running inside resource bundle
return os.path.join(resource_environment_path, 'resources')
else:
return os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), 'resources')
resources_directory = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))),
'resources')
return resources_directory


def copy_directory_contents(src_dir, dst_dir):
"""
Copy the contents of the source directory (src_dir) to the destination directory (dst_dir).
"""
for item in os.listdir(src_dir):
src_path = os.path.join(src_dir, item)
dst_path = os.path.join(dst_dir, item)
if os.path.isdir(src_path):
shutil.copytree(src_path, dst_path, dirs_exist_ok=True)
else:
shutil.copy2(src_path, dst_path)
def config_dir():
config_directory = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))),
'config')
return config_directory


def is_localhost(comparison_hostname):
@@ -167,19 +146,3 @@ def is_localhost(comparison_hostname):
return comparison_hostname == local_hostname
except AttributeError:
return False


def num_to_alphanumeric(num):
# List of possible alphanumeric characters
characters = string.ascii_letters + string.digits

# Make sure number is positive
num = abs(num)

# Convert number to alphanumeric
result = ""
while num > 0:
num, remainder = divmod(num, len(characters))
result += characters[remainder]

return result[::-1] # Reverse the result to get the correct alphanumeric string

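The helper above is effectively a base-62 encoder over `a-zA-Z0-9` (and returns an empty string for 0, since the loop never runs). For example:

```python
# One round of the loop for 62: divmod(62, 62) == (1, 0) appends characters[0]
# ('a'), then divmod(1, 62) == (0, 1) appends characters[1] ('b'); reversing
# 'ab' gives 'ba'.
print(num_to_alphanumeric(0))   # -> ''
print(num_to_alphanumeric(61))  # -> '9'
print(num_to_alphanumeric(62))  # -> 'ba'
```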
@@ -2,8 +2,7 @@ import logging
import socket

from pubsub import pub
from zeroconf import Zeroconf, ServiceInfo, ServiceBrowser, ServiceStateChange, NonUniqueNameException, \
NotRunningException
from zeroconf import Zeroconf, ServiceInfo, ServiceBrowser, ServiceStateChange, NonUniqueNameException

logger = logging.getLogger()

@@ -23,23 +22,17 @@ class ZeroconfServer:
cls.service_type = service_type
cls.server_name = server_name
cls.server_port = server_port
try: # Stop any previously running instances
socket.gethostbyname(socket.gethostname())
except socket.gaierror:
cls.stop()

@classmethod
def start(cls, listen_only=False):
if not cls.service_type:
raise RuntimeError("The 'configure' method must be run before starting the zeroconf server")
logger.debug("Starting zeroconf service")
if not listen_only:
cls._register_service()
cls._browse_services()

@classmethod
def stop(cls):
logger.debug("Stopping zeroconf service")
cls._unregister_service()
cls.zeroconf.close()

@@ -59,7 +52,7 @@ class ZeroconfServer:
cls.service_info = info
cls.zeroconf.register_service(info)
logger.info(f"Registered zeroconf service: {cls.service_info.name}")
except (NonUniqueNameException, socket.gaierror) as e:
except NonUniqueNameException as e:
logger.error(f"Error establishing zeroconf: {e}")

@classmethod
@@ -76,49 +69,40 @@ class ZeroconfServer:

@classmethod
def _on_service_discovered(cls, zeroconf, service_type, name, state_change):
try:
info = zeroconf.get_service_info(service_type, name)
hostname = name.split(f'.{cls.service_type}')[0]
logger.debug(f"Zeroconf: {hostname} {state_change}")
if service_type == cls.service_type:
if state_change == ServiceStateChange.Added or state_change == ServiceStateChange.Updated:
cls.client_cache[hostname] = info
else:
cls.client_cache.pop(hostname)
pub.sendMessage('zeroconf_state_change', hostname=hostname, state_change=state_change)
except NotRunningException:
pass
info = zeroconf.get_service_info(service_type, name)
logger.debug(f"Zeroconf: {name} {state_change}")
if service_type == cls.service_type:
if state_change == ServiceStateChange.Added or state_change == ServiceStateChange.Updated:
cls.client_cache[name] = info
else:
cls.client_cache.pop(name)
pub.sendMessage('zeroconf_state_change', hostname=name, state_change=state_change, info=info)

@classmethod
def found_hostnames(cls):
fetched_hostnames = [x.split(f'.{cls.service_type}')[0] for x in cls.client_cache.keys()]
local_hostname = socket.gethostname()

# Define a sort key function
def sort_key(hostname):
# Return 0 if it's the local hostname so it comes first, else return 1
return False if hostname == local_hostname else True

# Sort the list with the local hostname first
sorted_hostnames = sorted(cls.client_cache.keys(), key=sort_key)
sorted_hostnames = sorted(fetched_hostnames, key=sort_key)
return sorted_hostnames

@classmethod
def get_hostname_properties(cls, hostname):
server_info = cls.client_cache.get(hostname).properties
new_key = hostname + '.' + cls.service_type
server_info = cls.client_cache.get(new_key).properties
decoded_server_info = {key.decode('utf-8'): value.decode('utf-8') for key, value in server_info.items()}
return decoded_server_info


# Example usage:
if __name__ == "__main__":
import time

logging.basicConfig(level=logging.DEBUG)
ZeroconfServer.configure("_zordon._tcp.local.", "foobar.local", 8080)
try:
ZeroconfServer.start()
while True:
time.sleep(0.1)
except KeyboardInterrupt:
pass
input("Server running - Press enter to end")
finally:
ZeroconfServer.stop()

Before: 1.7 KiB | After: 1.7 KiB
BIN
src/web/static/images/desktop.png
Normal file
After: 1.1 KiB
Before: 995 B | After: 995 B
Before: 81 KiB | After: 81 KiB
BIN
src/web/static/images/logo.png
Normal file
After: 2.6 KiB
Before: 2.6 KiB | After: 2.6 KiB
Before: 2.1 KiB | After: 2.1 KiB
Before: 66 KiB | After: 66 KiB
64
src/web/static/js/job_table.js
Normal file
@@ -0,0 +1,64 @@
const grid = new gridjs.Grid({
columns: [
{ data: (row) => row.id,
name: 'Thumbnail',
formatter: (cell) => gridjs.html(`<img src="/api/job/${cell}/thumbnail?video_ok" style='width: 200px; min-width: 120px;'>`),
sort: {enabled: false}
},
{ id: 'name',
name: 'Name',
data: (row) => row.name,
formatter: (name, row) => gridjs.html(`<a href="/ui/job/${row.cells[0].data}/full_details">${name}</a>`)
},
{ id: 'renderer', data: (row) => `${row.renderer}-${row.renderer_version}`, name: 'Renderer' },
{ id: 'priority', name: 'Priority' },
{ id: 'status',
name: 'Status',
data: (row) => row,
formatter: (cell, row) => gridjs.html(`
<span class="tag ${(cell.status == 'running') ? 'is-hidden' : ''} ${(cell.status == 'cancelled') ?
'is-warning' : (cell.status == 'error') ? 'is-danger' : (cell.status == 'not_started') ?
'is-light' : 'is-primary'}">${cell.status}</span>
<progress class="progress is-primary ${(cell.status != 'running') ? 'is-hidden': ''}"
value="${(parseFloat(cell.percent_complete) * 100.0)}" max="100">${cell.status}</progress>
`)},
{ id: 'time_elapsed', name: 'Time Elapsed' },
{ data: (row) => row.total_frames ?? 'N/A', name: 'Frame Count' },
{ id: 'client', name: 'Client'},
{ data: (row) => row.last_output ?? 'N/A',
name: 'Last Output',
formatter: (output, row) => gridjs.html(`<a href="/api/job/${row.cells[0].data}/logs">${output}</a>`)
},
{ data: (row) => row,
name: 'Commands',
formatter: (cell, row) => gridjs.html(`
<div class="field has-addons" style='white-space: nowrap; display: inline-block;'>
<button class="button is-info" onclick="window.location.href='/ui/job/${row.cells[0].data}/full_details';">
<span class="icon"><i class="fa-solid fa-info"></i></span>
</button>
<button class="button is-link" onclick="window.location.href='/api/job/${row.cells[0].data}/logs';">
<span class="icon"><i class="fa-regular fa-file-lines"></i></span>
</button>
<button class="button is-warning is-active ${(cell.status != 'running') ? 'is-hidden': ''}" onclick="window.location.href='/api/job/${row.cells[0].data}/cancel?confirm=True&redirect=True';">
<span class="icon"><i class="fa-solid fa-x"></i></span>
</button>
<button class="button is-success ${(cell.status != 'completed') ? 'is-hidden': ''}" onclick="window.location.href='/api/job/${row.cells[0].data}/download_all';">
<span class="icon"><i class="fa-solid fa-download"></i></span>
<span>${cell.file_count}</span>
</button>
<button class="button is-danger" onclick="window.location.href='/api/job/${row.cells[0].data}/delete?confirm=True&redirect=True'">
<span class="icon"><i class="fa-regular fa-trash-can"></i></span>
</button>
</div>
`),
sort: false
},
{ id: 'owner', name: 'Owner' }
],
autoWidth: true,
server: {
url: '/api/jobs',
then: results => results['jobs'],
},
sort: true,
}).render(document.getElementById('table'));
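
For readers skimming the table definition: the grid fetches `/api/jobs` and unwraps a top-level `jobs` array, and the field names below are exactly the ones the columns above reference. A sketch of the expected payload shape, written as a hypothetical Flask handler (illustrative values, not the project's actual server code):

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/api/jobs')
def api_jobs():
    # Field names mirror the gridjs column definitions above.
    return jsonify(jobs=[{
        'id': 1,
        'name': 'shot_010',
        'renderer': 'blender',
        'renderer_version': '4.0',
        'priority': 2,
        'status': 'running',
        'percent_complete': 0.42,
        'time_elapsed': '00:01:23',
        'total_frames': 240,
        'client': 'render01.local',
        'last_output': 'Fra:101',
        'file_count': 0,
        'owner': 'alice',
    }])
```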
44
src/web/static/js/modals.js
Normal file
@@ -0,0 +1,44 @@
document.addEventListener('DOMContentLoaded', () => {
// Functions to open and close a modal
function openModal($el) {
$el.classList.add('is-active');
}

function closeModal($el) {
$el.classList.remove('is-active');
}

function closeAllModals() {
(document.querySelectorAll('.modal') || []).forEach(($modal) => {
closeModal($modal);
});
}

// Add a click event on buttons to open a specific modal
(document.querySelectorAll('.js-modal-trigger') || []).forEach(($trigger) => {
const modal = $trigger.dataset.target;
const $target = document.getElementById(modal);

$trigger.addEventListener('click', () => {
openModal($target);
});
});

// Add a click event on various child elements to close the parent modal
(document.querySelectorAll('.modal-background, .modal-close, .modal-card-head .delete, .modal-card-foot .button') || []).forEach(($close) => {
const $target = $close.closest('.modal');

$close.addEventListener('click', () => {
closeModal($target);
});
});

// Add a keyboard event to close all modals
document.addEventListener('keydown', (event) => {
const e = event || window.event;

if (e.keyCode === 27) { // Escape key
closeAllModals();
}
});
});
48
src/web/templates/details.html
Normal file
@@ -0,0 +1,48 @@
{% extends 'layout.html' %}

{% block body %}
<div class="container" style="text-align:center; width: 100%">
<br>
{% if media_url: %}
<video width="1280" height="720" controls>
<source src="{{media_url}}" type="video/mp4">
Your browser does not support the video tag.
</video>
{% elif job_status == 'Running': %}
<div style="width: 100%; height: 720px; position: relative; background: black; text-align: center; color: white;">
<img src="/static/images/gears.png" style="vertical-align: middle; width: auto; height: auto; position:absolute; margin: auto; top: 0; bottom: 0; left: 0; right: 0;">
<span style="height: auto; position:absolute; margin: auto; top: 58%; left: 0; right: 0; color: white; width: 60%">
<progress class="progress is-primary" value="{{job.worker_data()['percent_complete'] * 100}}" max="100" style="margin-top: 6px;" id="progress-bar">Rendering</progress>
Rendering in Progress - <span id="percent-complete">{{(job.worker_data()['percent_complete'] * 100) | int}}%</span>
<br>Time Elapsed: <span id="time-elapsed">{{job.worker_data()['time_elapsed']}}</span>
</span>
<script>
var startingStatus = '{{job.status.value}}';
function update_job() {
$.getJSON('/api/job/{{job.id}}', function(data) {
document.getElementById('progress-bar').value = (data.percent_complete * 100);
document.getElementById('percent-complete').innerHTML = (data.percent_complete * 100).toFixed(0) + '%';
document.getElementById('time-elapsed').innerHTML = data.time_elapsed;
if (data.status != startingStatus){
clearInterval(renderingTimer);
window.location.reload(true);
};
});
}
if (startingStatus == 'running'){
var renderingTimer = setInterval(update_job, 1000);
};
</script>
</div>
{% else %}
<div style="width: 100%; height: 720px; position: relative; background: black;">
<img src="/static/images/{{job_status}}.png" style="vertical-align: middle; width: auto; height: auto; position:absolute; margin: auto; top: 0; bottom: 0; left: 0; right: 0;">
<span style="height: auto; position:absolute; margin: auto; top: 58%; left: 0; right: 0; color: white;">
{{job_status}}
</span>
</div>
{% endif %}
<br>
{{detail_table|safe}}
</div>
{% endblock %}
8
src/web/templates/index.html
Normal file
@@ -0,0 +1,8 @@
{% extends 'layout.html' %}

{% block body %}
<div class="container is-fluid" style="padding-top: 20px;">
<div id="table" class="table"></div>
</div>
<script src="/static/js/job_table.js"></script>
{% endblock %}
236
src/web/templates/layout.html
Normal file
@@ -0,0 +1,236 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Zordon Dashboard</title>
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bulma@0.9.4/css/bulma.min.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
<script src="https://cdn.jsdelivr.net/npm/jquery/dist/jquery.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/gridjs/dist/gridjs.umd.js"></script>
<link href="https://unpkg.com/gridjs/dist/theme/mermaid.min.css" rel="stylesheet" />
<script src="https://kit.fontawesome.com/698705d14d.js" crossorigin="anonymous"></script>
<script type="text/javascript" src="/static/js/modals.js"></script>
</head>
<body onload="rendererChanged(document.getElementById('renderer'))">

<nav class="navbar is-dark" role="navigation" aria-label="main navigation">
<div class="navbar-brand">
<a class="navbar-item" href="/">
<img src="/static/images/logo.png">
</a>
</div>

<div id="navbarBasicExample" class="navbar-menu">
<div class="navbar-start">
<a class="navbar-item" href="/">
Home
</a>
</div>

<div class="navbar-end">
<div class="navbar-item">
<button class="button is-primary js-modal-trigger" data-target="add-job-modal">
<span class="icon">
<i class="fa-solid fa-upload"></i>
</span>
<span>Submit Job</span>
</button>
</div>
</div>
</div>
</nav>

{% block body %}
{% endblock %}

<div id="add-job-modal" class="modal">
<!-- Start Add Form -->
<form id="submit_job" action="/api/add_job?redirect=True" method="POST" enctype="multipart/form-data">
<div class="modal-background"></div>
<div class="modal-card">
<header class="modal-card-head">
<p class="modal-card-title">Submit New Job</p>
<button class="delete" aria-label="close" type="button"></button>
</header>
<section class="modal-card-body">
<!-- File Uploader -->

<label class="label">Upload File</label>
<div id="file-uploader" class="file has-name is-fullwidth">
<label class="file-label">
<input class="file-input is-small" type="file" name="file">
<span class="file-cta">
<span class="file-icon">
<i class="fas fa-upload"></i>
</span>
<span class="file-label">
Choose a file…
</span>
</span>
<span class="file-name">
No File Uploaded
</span>
</label>
</div>
<br>
<script>
const fileInput = document.querySelector('#file-uploader input[type=file]');
fileInput.onchange = () => {
if (fileInput.files.length > 0) {
const fileName = document.querySelector('#file-uploader .file-name');
fileName.textContent = fileInput.files[0].name;
}
}

const presets = {
{% for preset in preset_list: %}
{{preset}}: {
name: '{{preset_list[preset]['name']}}',
renderer: '{{preset_list[preset]['renderer']}}',
args: '{{preset_list[preset]['args']}}',
},
{% endfor %}
};

function rendererChanged(ddl1) {

var renderers = {
{% for renderer in renderer_info: %}
{% if renderer_info[renderer]['supported_export_formats']: %}
{{renderer}}: [
{% for format in renderer_info[renderer]['supported_export_formats']: %}
'{{format}}',
{% endfor %}
],
{% endif %}
{% endfor %}
};

var selectedRenderer = ddl1.value;

var ddl3 = document.getElementById('preset_list');
ddl3.options.length = 0;
createOption(ddl3, '-Presets-', '');
for (var preset_name in presets) {
if (presets[preset_name]['renderer'] == selectedRenderer) {
createOption(ddl3, presets[preset_name]['name'], preset_name);
};
};
document.getElementById('raw_args').value = "";

var ddl2 = document.getElementById('export_format');
ddl2.options.length = 0;
var options = renderers[selectedRenderer];
for (i = 0; i < options.length; i++) {
createOption(ddl2, options[i], options[i]);
};
}

function createOption(ddl, text, value) {
var opt = document.createElement('option');
opt.value = value;
opt.text = text;
ddl.options.add(opt);
}

function addPresetTextToInput(presetfield, textfield) {
var p = presets[presetfield.value];
textfield.value = p['args'];
}

</script>

<!-- Renderer & Priority -->
<div class="field is-grouped">
<p class="control">
<label class="label">Renderer</label>
<span class="select">
<select id="renderer" name="renderer" onchange="rendererChanged(this)">
{% for renderer in renderer_info: %}
<option name="renderer" value="{{renderer}}">{{renderer}}</option>
{% endfor %}
</select>
</span>
</p>
<p class="control">
<label class="label">Client</label>
<span class="select">
<select name="client">
<option name="client" value="">First Available</option>
{% for client in render_clients: %}
<option name="client" value="{{client}}">{{client}}</option>
{% endfor %}
</select>
</span>
</p>
<p class="control">
<label class="label">Priority</label>
<span class="select">
<select name="priority">
<option name="priority" value="1">1</option>
<option name="priority" value="2" selected="selected">2</option>
<option name="priority" value="3">3</option>
</select>
</span>
</p>
</div>

<!-- Output Path -->
<label class="label">Output</label>
<div class="field has-addons">
<div class="control is-expanded">
<input class="input is-small" type="text" placeholder="Output Name" name="output_path" value="output.mp4">
</div>
<p class="control">
<span class="select is-small">
<select id="export_format" name="export_format">
<option value="ar">option</option>
</select>
</span>
</p>
</div>

<!-- Resolution -->
<!-- <label class="label">Resolution</label>-->
<!-- <div class="field is-grouped">-->
<!-- <p class="control">-->
<!-- <input class="input" type="text" placeholder="auto" maxlength="5" size="8" name="AnyRenderer-arg_x_resolution">-->
<!-- </p>-->
<!-- <label class="label"> x </label>-->
<!-- <p class="control">-->
<!-- <input class="input" type="text" placeholder="auto" maxlength="5" size="8" name="AnyRenderer-arg_y_resolution">-->
<!-- </p>-->
<!-- <label class="label"> @ </label>-->
<!-- <p class="control">-->
<!-- <input class="input" type="text" placeholder="auto" maxlength="3" size="5" name="AnyRenderer-arg_frame_rate">-->
<!-- </p>-->
<!-- <label class="label"> fps </label>-->
<!-- </div>-->

<label class="label">Command Line Arguments</label>
<div class="field has-addons">
<p class="control">
<span class="select is-small">
<select id="preset_list" onchange="addPresetTextToInput(this, document.getElementById('raw_args'))">
<option value="preset-placeholder">presets</option>
</select>
</span>
</p>
<p class="control is-expanded">
<input class="input is-small" type="text" placeholder="Args" id="raw_args" name="raw_args">
</p>
</div>

<!-- End Add Form -->
</section>
<footer class="modal-card-foot">
<input class="button is-link" type="submit"/>
<button class="button" type="button">Cancel</button>
</footer>
</div>
</form>
</div>

</body>
</html>
62
src/web/templates/upload.html
Normal file
@@ -0,0 +1,62 @@
<html>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<script>
$(function() {
$('#renderer').change(function() {
$('.render_settings').hide();
$('#' + $(this).val()).show();
});
});


</script>
<body>
<h3>Upload a file</h3>

<div>
<form action="/add_job" method="POST"
enctype="multipart/form-data">
<div>
<input type="file" name="file"/><br>
</div>

<input type="hidden" id="origin" name="origin" value="html">

<div id="client">
Render Client:
<select name="client">
{% for client in render_clients %}
<option value="{{client}}">{{client}}</option>
{% endfor %}
</select>
</div>
<div id="priority">
Priority:
<select name="priority">
<option value="1">1</option>
<option value="2" selected>2</option>
<option value="3">3</option>
</select>
</div>
<div>
<label for="renderer">Renderer:</label>
<select id="renderer" name="renderer">
{% for renderer in supported_renderers %}
<option value="{{renderer}}">{{renderer}}</option>
{% endfor %}
</select>
</div>
<div id="blender" class="render_settings" style="display:none">
Engine:
<select name="blender+engine">
<option value="CYCLES">Cycles</option>
<option value="BLENDER_EEVEE">Eevee</option>
</select>
</div>
<br>

<input type="submit"/>
</form>
</div>
</body>
</html>