Initial commit

Co-Authored-By: mikaeltellhed <2311083+mikaeltellhed@users.noreply.github.com>
Eric Tuvesson
2023-12-04 10:20:38 +01:00
commit 663c0a2e39
43 changed files with 39568 additions and 0 deletions

16
.dockerignore Normal file

@@ -0,0 +1,16 @@
# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.
# dependencies
/packages/noodl-cloudservice/node_modules
/packages/noodl-cloudservice-docker/node_modules
# misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local
npm-debug.log*
yarn-debug.log*
yarn-error.log*

120
.gitignore vendored Normal file

@@ -0,0 +1,120 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
lerna-debug.log*
.pnpm-debug.log*
# Diagnostic reports (https://nodejs.org/api/report.html)
report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json
# Runtime data
pids
*.pid
*.seed
*.pid.lock
# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov
# Coverage directory used by tools like istanbul
coverage
*.lcov
# nyc test coverage
.nyc_output
# Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files)
.grunt
# Bower dependency directory (https://bower.io/)
bower_components
# node-waf configuration
.lock-wscript
# Compiled binary addons (https://nodejs.org/api/addons.html)
build/Release
# Dependency directories
node_modules/
jspm_packages/
# Snowpack dependency directory (https://snowpack.dev/)
web_modules/
# TypeScript cache
*.tsbuildinfo
# Optional npm cache directory
.npm
# Optional eslint cache
.eslintcache
# Microbundle cache
.rpt2_cache/
.rts2_cache_cjs/
.rts2_cache_es/
.rts2_cache_umd/
# Optional REPL history
.node_repl_history
# Output of 'npm pack'
*.tgz
# Yarn Integrity file
.yarn-integrity
# dotenv environment variables file
.env
.env.test
.env.production
# parcel-bundler cache (https://parceljs.org/)
.cache
.parcel-cache
# Next.js build output
.next
out
# Gatsby files
.cache/
# Comment in the public line in if your project uses Gatsby and not Next.js
# https://nextjs.org/blog/next-9-1#public-directory-support
# public
# vuepress build output
.vuepress/dist
# Serverless directories
.serverless/
# FuseBox cache
.fusebox/
# DynamoDB Local files
.dynamodb/
# TernJS port file
.tern-port
# Stores VSCode versions used for testing VSCode extensions
.vscode-test
# yarn v2
.yarn/cache
.yarn/unplugged
.yarn/build-state.yml
.yarn/install-state.gz
.pnp.*
# MS Visio (Diagram Editor)
~$$architecture.~vsdx
.DS_Store
.env*

132
CODE_OF_CONDUCT.md Normal file

@@ -0,0 +1,132 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
- Focusing on what is best not just for us as individuals, but for the overall
community
Examples of unacceptable behavior include:
- The use of sexualized language or imagery, and sexual attention or advances of
any kind
- Trolling, insulting or derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or email address,
without their explicit permission
- Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
[INSERT CONTACT METHOD].
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series of
actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within the
community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].
For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
[https://www.contributor-covenant.org/translations][translations].
[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations

16
Dockerfile Normal file

@@ -0,0 +1,16 @@
FROM nikolaik/python-nodejs:python3.8-nodejs16
# Copy over the local NPM package
# this is why the Dockerfile is in the root folder
WORKDIR /usr/src/noodl-cloudservice
COPY ./packages/noodl-cloudservice .
RUN npm install
WORKDIR /usr/src/app
COPY packages/noodl-cloudservice-docker .
RUN npm install
EXPOSE 3000
CMD [ "node", "./src/index.js" ]

20
LICENSE.md Normal file

@@ -0,0 +1,20 @@
Copyright (c) 2023 Future Platforms AB
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

35
README.md Normal file

@@ -0,0 +1,35 @@
# Noodl Cloud Service
Welcome to the Noodl Cloud Service project! This repository contains the Self-Hosted Noodl Cloud Service.
It includes an NPM package that makes it easy to build a custom version on top of the Noodl Cloud Service,
as well as a Docker image that makes it easy to set up and host the standard Self-Hosted Noodl Cloud Service.
## Getting Started
This project uses `isolated-vm` to execute each Cloud Function in isolation from the other Cloud Functions.
Download [Docker Desktop](https://www.docker.com/products/docker-desktop) for Mac or Windows. [Docker Compose](https://docs.docker.com/compose) will be automatically installed. On Linux, make sure you have the latest version of [Compose](https://docs.docker.com/compose/install/).
Run the following in this directory to build and start the Cloud Service together with a MongoDB instance:
```shell
docker compose up
```
The Cloud Service will be running at [http://localhost:3000](http://localhost:3000).
## About Noodl
Noodl is a low-code platform where designers and developers build custom applications and experiences. Designed as a visual programming environment, it aims to expedite your development process. It promotes the swift and efficient creation of applications, requiring minimal coding knowledge.
## License
Please note that this project is released with a [Contributor Code of Conduct](CODE_OF_CONDUCT.md). By participating in this project you agree to abide by its terms.
This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details.
## Contact
If you have any questions, concerns, or feedback, please open a discussion in the [discussions tracker](https://github.com/noodlapp/noodl-cloudservice/discussions) or join our Discord channel and we'll be happy to assist you!

36
docker-compose.yml Normal file

@@ -0,0 +1,36 @@
version: '3'
services:
mongodb:
restart: unless-stopped
image: mongo:latest
container_name: noodlapp-mongodb
ports:
- "27017:27017"
volumes:
- mongodb-data:/data/noodlapp-db
environment:
MONGO_INITDB_ROOT_USERNAME: yourusername
MONGO_INITDB_ROOT_PASSWORD: yourpassword
MONGO_INITDB_DATABASE: noodlapp
cloudservice:
restart: unless-stopped
build:
context: .
dockerfile: Dockerfile
container_name: cloudservice
environment:
NODE_ENV: production
PORT: 3000
MASTER_KEY: mymasterkey
APP_ID: myappid
DATABASE_URI: mongodb://yourusername:yourpassword@mongodb:27017/noodlapp?authSource=admin
PUBLIC_SERVER_URL: http://localhost:3000
ports:
- "3000:3000"
links:
- mongodb
volumes:
mongodb-data:


@@ -0,0 +1,16 @@
# Noodl Cloud Services Docker
This package contains the Docker image of the Noodl Self-Hosted Cloud Service.
## Health Endpoints
```
# The application is up and running.
/health/live
# The application is ready to serve requests.
/health/ready
# Aggregates all health check procedures in the application.
/health
```

File diff suppressed because it is too large


@@ -0,0 +1,20 @@
{
"name": "@noodl/cloudservice-docker",
"version": "1.0.0",
"description": "Low-code for when experience matters",
"author": "Noodl <info@noodl.net>",
"homepage": "https://noodl.net",
"license": "MIT",
"scripts": {
"start": "node ./src/index.js",
"test": "./node_modules/.bin/jasmine"
},
"dependencies": {
"@noodl/cloudservice": "file:../noodl-cloudservice",
"cors": "^2.8.5",
"express": "^4.17.1"
},
"devDependencies": {
"jasmine": "^4.0.2"
}
}


@@ -0,0 +1,11 @@
{
"spec_dir": "spec",
"spec_files": [
"**/*[sS]pec.js"
],
"helpers": [
"helpers/**/*.js"
],
"stopSpecOnExpectationFailure": false,
"random": false
}


@@ -0,0 +1,57 @@
const { createNoodlServer } = require("@noodl/cloudservice");
const express = require("express");
const cors = require("cors");
// Parse an environment variable as a number, returning undefined if it is not numeric
function _getNumberEnv(_value) {
const val = Number(_value);
if (isNaN(val)) return undefined;
else return val;
}
const port = Number(process.env.PORT || 3000);
const databaseURI = String(process.env.DATABASE_URI);
const masterKey = String(process.env.MASTER_KEY);
const appId = String(process.env.APP_ID);
const server = express();
server.use(
cors({
// Set the browser cache time for preflight responses
maxAge: 86400,
})
);
server.use(
express.urlencoded({
extended: true,
})
);
server.use(
express.json({
limit: "2mb",
})
);
const noodlServer = createNoodlServer({
port,
databaseURI,
masterKey,
appId,
functionOptions: {
timeOut: _getNumberEnv(process.env.CLOUD_FUNCTIONS_TIMEOUT),
memoryLimit: _getNumberEnv(process.env.CLOUD_FUNCTIONS_MEMORY_LIMIT),
},
parseOptions: {
maxUploadSize: process.env.MAX_UPLOAD_SIZE || "20mb",
// set or override any of the Parse settings
},
});
server.use("/", noodlServer.middleware);
server.listen(port, () => {
console.log(`Noodl Parse Server listening at http://localhost:${port}`);
});


@@ -0,0 +1,43 @@
# Noodl Cloud Service
Welcome to the Noodl Cloud Service project!
## About Noodl
Noodl is a low-code platform where designers and developers build custom applications and experiences. Designed as a visual programming environment, it aims to expedite your development process. It promotes the swift and efficient creation of applications, requiring minimal coding knowledge.
## Getting started
```js
const express = require("express");
const { createNoodlServer } = require("@noodl/cloudservice");
const port = 3000;
const noodlServer = createNoodlServer({
port,
databaseURI: "insert",
masterKey: "insert",
appId: "insert",
parseOptions: {
// set or override any of the Parse settings
//
// A custom file adaptor can be set here:
// filesAdapter ...
}
});
const server = express();
server.use("/", noodlServer.middleware);
server.listen(port, () => {
console.log(`Noodl Cloud Service listening at http://localhost:${port}`);
});
```
## License
Please note that this project is released with a [Contributor Code of Conduct](../../CODE_OF_CONDUCT.md). By participating in this project you agree to abide by its terms.
This project is licensed under the MIT License - see the [LICENSE.md](../../LICENSE.md) file for details.
## Contact
If you have any questions, concerns, or feedback, please open a discussion in the [discussions tracker](https://github.com/noodlapp/noodl-cloudservice/discussions) or join our Discord channel and we'll be happy to assist you!

12111
packages/noodl-cloudservice/package-lock.json generated Normal file

File diff suppressed because it is too large


@@ -0,0 +1,22 @@
{
"name": "@noodl/cloudservice",
"version": "1.0.0",
"description": "Low-code for when experience matters",
"author": "Noodl <info@noodl.net>",
"homepage": "https://noodl.net",
"license": "MIT",
"main": "./src/index.js",
"scripts": {
"test": "./node_modules/.bin/jasmine"
},
"dependencies": {
"isolated-vm": "^4.4.2",
"node-fetch": "2.6.7",
"parse-server": "^4.10.4",
"parse-server-gcs-adapter": "git+https://github.com/noodlapp/noodl-parse-server-gcs-adapter.git",
"winston-mongodb": "^5.1.0"
},
"devDependencies": {
"jasmine": "^4.0.2"
}
}


@@ -0,0 +1,408 @@
const fetch = require("node-fetch");
const ivm = require("isolated-vm");
const fs = require("fs");
// Create a snapshot of a given runtime if needed
// or serve it from the cache
const snapshots = {};
async function getRuntimeSnapshot(url) {
if (snapshots[url]) {
try {
await snapshots[url];
} catch (e) {
console.log(`Disposing runtime snapshot due to error in create: `, e);
delete snapshots[url];
}
}
if (snapshots[url]) return snapshots[url];
else
return (snapshots[url] = (async () => {
console.log("- Loading runtime script");
const res = await fetch(url);
const script = await res.text();
return ivm.Isolate.createSnapshot([
{
code: `var _noodl_handleReq, _noodl_api_response,_noodl_process_jobs;`,
}, // Must declare, otherwise we will get error when trying to set as global from function
{ code: script },
]);
})());
}
const _defaultRuntime = process.env.NOODL_DEFAULT_CLOUD_RUNTIME;
// Create an isolated context for a specific environment
async function createContext(env) {
if (env.version === undefined) {
throw Error("No version specified when creating context.");
}
const timeOut = env.timeOut ? env.timeOut / 1000 : 15; // seconds; env.timeOut is passed in milliseconds
const memoryLimit = env.memoryLimit || 128;
// Load custom code
console.log("Creating context for version " + env.version);
console.log("- Loading cloud deploy");
const res = await fetch(
env.backendEndpoint +
'/classes/Ndl_CF?where={"version":"' +
env.version +
'"}',
{
headers: {
"X-Parse-Application-Id": env.appId,
"X-Parse-Master-Key": env.masterKey,
},
}
);
const data = await res.json();
let code = "",
cloudRuntime;
if (data.results && data.results.length > 0) {
// The REST API exposes creation time as createdAt (an ISO string)
data.results.sort((a, b) => new Date(a.createdAt) - new Date(b.createdAt));
cloudRuntime = data.results[0].runtime;
data.results.forEach((d) => {
code += d.code;
});
} else {
throw Error(
`No cloud functions found for env ${env.appId} and version ${env.version}.`
);
}
console.log("- Starting up isolate");
let runtime = cloudRuntime || _defaultRuntime;
if (!runtime.endsWith(".js")) runtime = runtime + ".js";
console.log("- Using runtime: " + runtime);
const snapshot = await getRuntimeSnapshot(
(process.env.NOODL_CLOUD_RUNTIMES_LOCATION ||
"https://runtimes.noodl.cloud") +
"/" +
runtime
);
const isolate = new ivm.Isolate({ memoryLimit, snapshot });
const context = await isolate.createContext();
const jail = context.global;
// Bootstrap message handler
jail.setSync("global", context.global.derefInto());
// ---------------- API ----------------
let ongoingAPICalls = 0;
const maxOngoingAPICalls = 100;
function _internalServerError(message) {
Object.keys(responseHandlers).forEach((k) => {
if (typeof responseHandlers[k] === "function") {
responseHandlers[k]({
statusCode: 500,
body: JSON.stringify({ error: message || "Internal server error" }),
});
delete responseHandlers[k];
}
});
}
async function _eval(script) {
if (isolate.isDisposed) return;
try {
await context.eval(script, { timeout: timeOut * 1000 });
} catch (e) {
console.log("_eval", e);
if (
e.message ===
"Isolate was disposed during execution due to memory limit"
) {
// Isolate was disposed, return out of memory error for all pending requests
_internalServerError("Out of memory");
}
}
if (isolate.isDisposed) {
// The isolate was disposed, end all currently pending requests
_internalServerError();
}
}
function _api_respond(token, res) {
ongoingAPICalls--;
if (ongoingAPICalls < 0) ongoingAPICalls = 0;
if (token !== undefined)
_eval("_noodl_api_response('" + token + "'," + JSON.stringify(res) + ")");
}
// Loggers
const logger = env.logger;
const apiFunctions = {
log: function (token, args) {
logger.log(
args.level || "info",
typeof args === "string" ? args : args.message
);
_api_respond(token);
},
fetch: function (token, args) {
fetch(args.url, args)
.then((r) => {
r.text()
.then((text) => {
_api_respond(token, {
ok: r.ok,
redirected: r.redirected,
statusText: r.statusText,
status: r.status,
headers: r.headers.raw(),
body: text,
});
})
.catch((e) => {
_api_respond(token, { error: e.message || true });
});
})
.catch((e) => {
_api_respond(token, { error: e.message || true });
});
},
setTimeout: function (token, millis) {
setTimeout(() => {
_api_respond(token);
}, millis);
},
};
jail.setSync("_noodl_api_call", function (functionName, token, args) {
ongoingAPICalls++;
if (!apiFunctions[functionName]) {
_api_respond(token, { error: "No such API function" });
return;
}
if (ongoingAPICalls >= maxOngoingAPICalls) {
// Protect against user code flooding API calls
_api_respond(token, { error: "Too many API calls" });
console.log("Warning too many concurrent ongoing api calls...");
return;
}
//console.log('API Call: ' + functionName + ' with args ', args)
try {
const _args = JSON.parse(JSON.stringify(args)); // extra safe
apiFunctions[functionName](token, _args);
} catch (e) {
console.log("Warning failed to execute api function: ", e);
_api_respond(token, { error: "Failed to execute API call" });
}
});
// event queue
let hasScheduledProcessJobs = false;
jail.setSync("_noodl_request_process_jobs", function () {
if (hasScheduledProcessJobs) return;
hasScheduledProcessJobs = true;
setImmediate(() => {
hasScheduledProcessJobs = false;
_eval("_noodl_process_jobs()");
});
});
// Some cloud services related stuff
jail.setSync(
"_noodl_cloudservices",
{
masterKey: env.masterKey,
endpoint: env.backendEndpoint,
appId: env.appId,
},
{ copy: true }
);
// Result from request
const responseHandlers = {};
jail.setSync("_noodl_response", function (token, args) {
if (typeof responseHandlers[token] === "function") {
responseHandlers[token](args);
delete responseHandlers[token];
}
});
try {
const script = await isolate.compileScript(code);
await script.run(context, {
timeout: timeOut * 1000, // 15 s to initialize
});
} catch (e) {
console.log("Failed when compiling and running cloud function code");
isolate.dispose();
throw e;
}
function _checkMemUsage() {
if (isolate.isDisposed) return; // Ignore already disposed isolate
const heap = isolate.getHeapStatisticsSync();
const memUsage = heap.total_heap_size / (1024 * 1024);
if (memUsage > memoryLimit * 0.8) {
// Mem usage has exceeded 80% of limit
// discard the context, a new context will be created for new incoming requests
// and this one will be cleaned up
const uri = env.appId + "/" + env.version;
if (!_context.markedToBeDiscarded) {
// Make sure it has not already been marked
_context.markedToBeDiscarded = true;
console.log(
`Marking context ${uri} as to be discarded due to memory limit, will be discarded in 2 mins.`
);
contextCache[uri + "/discarded/" + Date.now()] =
Promise.resolve(_context);
_context.ttl = Date.now() + 2 * 60 * 1000; // Discard after 2 minutes
delete contextCache[uri];
}
}
}
async function handleRequest(options) {
return new Promise((resolve, reject) => {
try {
let hasResponded = false;
_context.ttl = Date.now() + 10 * 60 * 1000; // Keep context alive
const token = Math.random().toString(26).slice(2);
const _req = {
function: options.functionId,
headers: options.headers,
body: options.body, // just forward raw body
};
responseHandlers[token] = (_res) => {
if (hasResponded) return;
hasResponded = true;
_checkMemUsage();
resolve(_res);
};
setTimeout(() => {
if (hasResponded) return;
hasResponded = true;
_checkMemUsage();
resolve({
statusCode: 500,
body: JSON.stringify({ error: "timeout" }),
});
}, timeOut * 1000); // Timeout if no reply from function
_eval(`_noodl_handleReq('${token}',${JSON.stringify(_req)})`)
.then(() => {
// All good
})
.catch((e) => {
if (hasResponded) return;
hasResponded = true;
_checkMemUsage();
resolve({
statusCode: 500,
body: JSON.stringify({ error: e.message }),
});
console.log("Error while running function:", e);
});
} catch (e) {
if (hasResponded) return;
hasResponded = true;
_checkMemUsage();
resolve({
statusCode: 500,
body: JSON.stringify({ error: e.message }),
});
console.log("Error while running function:", e);
}
});
}
const _context = {
context,
isolate,
responseHandlers,
version: env.version,
eval: _eval,
handleRequest,
ttl: Date.now() + 10 * 60 * 1000,
};
return _context;
}
const contextCache = {};
async function getCachedContext(env) {
const uri = env.appId + "/" + env.version;
// Check if the isolate has been disposed
if (contextCache[uri]) {
let context;
try {
context = await contextCache[uri];
} catch (e) {
console.log(`Disposing context due to error in create: `, e);
delete contextCache[uri];
}
if (context && context.isolate && context.isolate.isDisposed)
delete contextCache[uri];
}
if (contextCache[uri]) {
return contextCache[uri];
} else {
return (contextCache[uri] = createContext(env));
}
}
let hasScheduledContextCachePurge = false;
function scheduleContextCachePurge() {
if (hasScheduledContextCachePurge) return;
hasScheduledContextCachePurge = true;
setTimeout(() => {
hasScheduledContextCachePurge = false;
Object.keys(contextCache).forEach(async (k) => {
let context;
try {
context = await contextCache[k];
} catch (e) {
// This context failed to create; delete it.
console.log(`Disposing isolate ${k} due to error in create: `, e);
delete contextCache[k];
}
if (context && context.isolate.isDisposed) {
console.log(`Disposing isolate ${k} due to "already disposed": `);
delete contextCache[k];
} else if (context && context.ttl < Date.now()) {
console.log(`Disposing isolate ${k} due to inactivity.`);
context.isolate.dispose();
delete contextCache[k];
}
});
}, 5 * 1000);
}
module.exports = {
scheduleContextCachePurge,
getCachedContext,
};


@@ -0,0 +1,14 @@
Parse.Cloud.beforeLogin(async req => {
const {
object: user
} = req;
if (!user) {
return; // No user
}
const disabled = user.get('logInDisabled')
if (!req.master && disabled) {
throw Error('Access denied, log in disabled.')
}
});


@@ -0,0 +1,103 @@
const fetch = require('node-fetch');
// Get the latest deployed cloud functions version, used when no version is provided in the request header
async function getLatestVersion({ port, appId, masterKey }) {
const res = await fetch('http://localhost:' + port + '/classes/Ndl_CF?limit=1&order=-createdAt&keys=version', {
headers: {
'X-Parse-Application-Id': appId,
'X-Parse-Master-Key': masterKey
}
})
if (res.ok) {
const json = await res.json();
if (json.results && json.results.length === 1)
return json.results[0].version;
}
}
let _latestVersionCache;
async function getLatestVersionCached(options) {
if (_latestVersionCache && _latestVersionCache.ttl > Date.now()) {
return _latestVersionCache.version;
}
try {
const version = await getLatestVersion(options);
// Wrap the version in an object together with its ttl; setting a
// property directly on the returned string would silently fail
_latestVersionCache = { version, ttl: Date.now() + 15 * 1000 }; // Cache for 15s
return version;
} catch {
_latestVersionCache = undefined;
}
}
function _randomString(size) {
if (size === 0) {
throw new Error("Zero-length randomString is useless.");
}
const chars =
"ABCDEFGHIJKLMNOPQRSTUVWXYZ" + "abcdefghijklmnopqrstuvwxyz" + "0123456789";
let objectId = "";
for (let i = 0; i < size; ++i) {
objectId += chars[Math.floor((1 + Math.random()) * 0x10000) % chars.length];
}
return objectId;
}
function chunkDeploy(str, size) {
const numChunks = Math.ceil(str.length / size)
const chunks = new Array(numChunks)
for (let i = 0, o = 0; i < numChunks; ++i, o += size) {
chunks[i] = str.substr(o, size)
}
return chunks
}
async function deployFunctions({
port,
appId,
masterKey,
runtime,
data
}) {
const deploy = "const _exportedComponents = " + data
const version = _randomString(16)
// Split deploy into 100kb sizes
const chunks = chunkDeploy(deploy, 100 * 1024);
// Upload all (must be waterfall so they get the right created_at)
const serverUrl = 'http://localhost:' + port;
for (let i = 0; i < chunks.length; i++) {
await fetch(serverUrl + '/classes/Ndl_CF', {
method: 'POST',
body: JSON.stringify({
code: chunks[i],
version,
runtime,
ACL: {
"*": {
read: false,
write: false
}
}
}), // Make it only accessible to masterkey
headers: {
'X-Parse-Application-Id': appId,
'X-Parse-Master-Key': masterKey
}
})
}
return {
version
}
}
module.exports = {
deployFunctions,
getLatestVersionCached,
// Keep the previous export name as an alias
getLatestVersion: getLatestVersionCached
};


@@ -0,0 +1,54 @@
const CFContext = require('./cfcontext')
// The logger that is needed by the cloud functions
// it passes the logs to the parse server logger
class FunctionLogger {
constructor(noodlParseServer) {
this.noodlParseServer = noodlParseServer;
}
log(level, message) {
// Use an arrow function so `this` still refers to the logger instance
setImmediate(() => {
this.noodlParseServer.logger._log(level, message)
});
}
}
async function executeFunction({
port,
appId,
masterKey,
version,
logger,
headers,
functionId,
body,
timeOut = 15,
memoryLimit = 256
}) {
// Prepare the context
let cachedContext = await CFContext.getCachedContext({
backendEndpoint: 'http://localhost:' + port,
appId,
masterKey,
version,
logger,
timeOut: timeOut * 1000,
memoryLimit,
})
CFContext.scheduleContextCachePurge();
// Execute the request
const response = await cachedContext.handleRequest({
functionId,
headers,
body: JSON.stringify(body),
})
return response
}
module.exports = {
FunctionLogger,
executeFunction
};


@@ -0,0 +1,132 @@
const { createNoodlParseServer } = require("./parse");
const { executeFunction } = require("./function");
const { deployFunctions, getLatestVersionCached } = require("./function-deploy");
const { Logger } = require("./logger");
function createMiddleware(noodlServer) {
return async function middleware(req, res, next) {
if (req.url.startsWith('/functions/') && req.method === 'POST') {
try {
const path = req.url;
const functionId = decodeURIComponent(path.split('/')[2]);
if (functionId === undefined)
return next()
console.log('Running cloud function ' + functionId);
let version = req.headers['x-noodl-cloud-version']
if (version === undefined) {
version = await getLatestVersionCached(noodlServer.options)
}
// Execute the request
const cfResponse = await executeFunction({
port: noodlServer.options.port,
appId: noodlServer.options.appId,
masterKey: noodlServer.options.masterKey,
version,
logger: new Logger(noodlServer),
headers: req.headers,
functionId,
body: req.body,
timeOut: noodlServer.functionOptions.timeOut,
memoryLimit: noodlServer.functionOptions.memoryLimit,
})
if (cfResponse.headers) {
res.status(cfResponse.statusCode)
.set(cfResponse.headers)
.send(cfResponse.body)
} else {
res.status(cfResponse.statusCode)
.set({ 'Content-Type': 'application/json' })
.send(cfResponse.body)
}
} catch (e) {
console.log('Something went wrong when running function', e)
res.status(400).json({
error: "Something went wrong..."
})
}
} else if (req.url.startsWith('/functions-admin')) {
if (req.headers['x-parse-master-key'] !== noodlServer.options.masterKey) {
return res.status(401).json({
message: 'Not authorized'
})
}
if (req.headers['x-parse-application-id'] !== noodlServer.options.appId) {
return res.status(401).json({
message: 'Not authorized'
})
}
// Deploy a new version
if (req.method === 'POST' && req.url === "/functions-admin/deploy") {
if (!req.body || typeof req.body.deploy !== "string" || typeof req.body.runtime !== "string") {
return res.status(400).json({
message: 'Must supply deploy and runtime'
})
}
console.log('Uploading deploy...')
const { version } = await deployFunctions({
port: noodlServer.options.port,
appId: noodlServer.options.appId,
masterKey: noodlServer.options.masterKey,
runtime: req.body.runtime,
data: req.body.deploy
})
console.log('Upload completed, version: ' + version)
res.json({
status: 'success',
version
})
} else if (req.method === 'GET' && req.url === "/functions-admin/info") {
res.json({
version: '1.0'
})
} else res.status(400).json({
message: 'Function not supported'
})
} else {
next()
}
}
}
/**
*
* @param {{
* port: number;
* databaseURI: string;
* masterKey: string;
* appId: string;
* functionOptions: { timeOut: number; memoryLimit: number; };
* parseOptions?: unknown;
* }} options
*/
function createNoodlServer(options) {
const noodlServer = createNoodlParseServer(options)
const cfMiddleware = createMiddleware(noodlServer);
// Combine the Noodl Cloud Function middleware with the Parse middleware into one middleware.
const middleware = (req, res, next) => {
cfMiddleware(req, res, () => {
noodlServer.server.app(req, res, next);
});
};
return {
noodlServer,
middleware
}
}
module.exports = {
createNoodlServer
};
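// Illustrative usage sketch (not part of this module): mounting the combined
// middleware in an Express app. The require path and all values below are
// placeholders.
//
//   const express = require('express');
//   const { createNoodlServer } = require('./noodl-server'); // path is a placeholder
//
//   const { middleware } = createNoodlServer({
//     port: 3000,
//     databaseURI: 'mongodb://localhost:27017/noodl',
//     appId: 'my-app-id',
//     masterKey: 'my-master-key',
//     functionOptions: { timeOut: 15, memoryLimit: 256 },
//   });
//
//   const app = express();
//   app.use(express.json());
//   app.use(middleware);
//   app.listen(3000);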


@@ -0,0 +1,17 @@
// The logger required by the cloud functions;
// it forwards log entries to the Parse Server logger.
class Logger {
constructor(noodlServer) {
this.noodlServer = noodlServer;
}
log(level, message) {
setImmediate(() => {
this.noodlServer.logger._log(level, message);
});
}
}
module.exports = {
Logger,
};


@@ -0,0 +1,138 @@
const Winston = require('winston')
require('winston-mongodb');
// This workaround is needed to get the MongoDB transport working
// https://github.com/winstonjs/winston/issues/1130
function clone(obj) {
var copy = Array.isArray(obj) ? [] : {};
for (var i in obj) {
if (Array.isArray(obj[i])) {
copy[i] = obj[i].slice(0);
} else if (obj[i] instanceof Buffer) {
copy[i] = obj[i].slice(0);
} else if (typeof obj[i] != 'function') {
copy[i] = obj[i] instanceof Object ? clone(obj[i]) : obj[i];
} else if (typeof obj[i] === 'function') {
copy[i] = obj[i];
}
}
return copy;
}
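// Example (illustrative): unlike JSON-based copying, this clone deep-copies
// plain objects and arrays, copies Buffers via slice, and keeps function
// references intact.
//
//   const original = { nested: { n: 1 }, fn: () => 1 };
//   const copy = clone(original);
//   copy.nested.n = 2;        // original.nested.n is still 1
//   copy.fn === original.fn;  // true -- functions are copied by reference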
require("winston/lib/winston/common").clone = clone;
let Transport = require("winston-transport");
Transport.prototype.normalizeQuery = function (options) {
options = options || {};
// limit
options.rows = options.rows || options.limit || 10;
// starting row offset
options.start = options.start || 0;
// now
options.until = options.until || new Date();
if (typeof options.until !== 'object') {
options.until = new Date(options.until);
}
// now - 24
options.from = options.from || (options.until - (24 * 60 * 60 * 1000));
if (typeof options.from !== 'object') {
options.from = new Date(options.from);
}
// 'asc' or 'desc'
options.order = options.order || 'desc';
// which fields to select
options.fields = options.fields;
return options;
};
Transport.prototype.formatResults = function (results, options) {
return results;
};
// Create a logger that pushes log entries to MongoDB
class WinstonLoggerAdapter {
constructor(options) {
const info = new Winston.transports.MongoDB({
db: options.databaseURI,
level: 'info',
collection: '_ndl_logs_info',
capped: true,
cappedSize: 2000000, // 2 MB
})
info.name = 'logs-info'
const error = new Winston.transports.MongoDB({
db: options.databaseURI,
level: 'error',
collection: '_ndl_logs_error',
capped: true,
cappedSize: 2000000, // 2 MB
})
error.name = 'logs-error'
this.logger = Winston.createLogger({
transports: [
info,
error
]
})
}
log() {
// Logs from parse are simply passed to console
console.log.apply(console, arguments);
}
// This function is used by cloud functions to actually push to log
_log() {
// Logs from parse are simply passed to console
console.log.apply(console, arguments);
return this.logger.log.apply(this.logger, arguments);
}
// Custom query implementation, since Winston's built-in querying is limited
query(options, callback = () => {}) {
if (!options) {
options = {};
}
// defaults to 7 days prior
const from = options.from || new Date(Date.now() - 7 * 24 * 60 * 60 * 1000);
const until = options.until || new Date();
const limit = options.size || 10;
const order = options.order || 'desc';
const level = options.level || 'info';
const queryOptions = {
from,
until,
limit,
order,
};
return new Promise((resolve, reject) => {
this.logger.query(queryOptions, (err, res) => {
if (err) {
callback(err);
return reject(err);
}
const _res = level === 'error' ? res['logs-error'] : res['logs-info'];
_res.forEach(r => delete r.meta)
callback(_res);
resolve(_res);
});
});
}
}
module.exports = {
LoggerAdapter: WinstonLoggerAdapter
}


@@ -0,0 +1,115 @@
const path = require('path');
const ParseServer = require('parse-server').default;
const {
LoggerAdapter
} = require('./mongodb');
/**
 *
 * @param {{
 *  port: number;
 *  databaseURI: string;
 *  masterKey: string;
 *  appId: string;
 *  functionOptions?: { timeOut?: number; memoryLimit?: number; };
 *  parseOptions?: unknown;
 * }} options
 * @returns {{
 *  functionOptions: { timeOut: number; memoryLimit: number; };
 *  options: { port: number; appId: string; masterKey: string; };
 *  server: ParseServer;
 *  logger: LoggerAdapter;
 * }}
 */
function createNoodlParseServer({
port = 3000,
databaseURI,
masterKey,
appId,
functionOptions = {},
parseOptions = {},
}) {
const serverURL = `http://localhost:${port}/`;
const logger = new LoggerAdapter({
databaseURI
})
// Create files adapter
let filesAdapter;
if (process.env.S3_BUCKET) {
console.log('Using AWS S3 file storage with bucket ' + process.env.S3_BUCKET)
if (!process.env.S3_SECRET_KEY || !process.env.S3_ACCESS_KEY) {
throw Error("You must provide S3_SECRET_KEY and S3_ACCESS_KEY environment variables in addition to S3_BUCKET for S3 file storage.")
}
const S3Adapter = require('parse-server').S3Adapter;
filesAdapter = new S3Adapter(
process.env.S3_ACCESS_KEY,
process.env.S3_SECRET_KEY,
process.env.S3_BUCKET, {
region: process.env.S3_REGION,
bucketPrefix: process.env.S3_BUCKET_PREFIX,
directAccess: process.env.S3_DIRECT_ACCESS === 'true'
}
)
} else if (process.env.GCS_BUCKET) {
const GCSAdapter = require('parse-server-gcs-adapter');
if (!process.env.GCP_PROJECT_ID || !process.env.GCP_CLIENT_EMAIL || !process.env.GCP_PRIVATE_KEY) {
throw Error("You must provide GCP_PROJECT_ID, GCP_CLIENT_EMAIL, GCP_PRIVATE_KEY environment variables in addition to GCS_BUCKET for GCS file storage.")
}
console.log('Using GCS file storage with bucket ' + process.env.GCS_BUCKET)
filesAdapter = new GCSAdapter(
process.env.GCP_PROJECT_ID, { // Credentials
client_email: process.env.GCP_CLIENT_EMAIL,
private_key: process.env.GCP_PRIVATE_KEY.replace(/\\n/gm, '\n')
},
process.env.GCS_BUCKET, {
directAccess: process.env.GCS_DIRECT_ACCESS === 'true',
bucketPrefix: process.env.GCS_BUCKET_PREFIX
}
);
}
const server = new ParseServer({
databaseURI,
cloud: path.resolve(__dirname, './cloud.js'),
push: false,
appId,
masterKey,
serverURL,
appName: "Noodl App",
// allowCustomObjectId is needed for Noodl's cached model writes
allowCustomObjectId: true,
loggerAdapter: logger,
// We do this just to get the right behaviour for emailVerified (no emails are sent)
publicServerURL: process.env.PUBLIC_SERVER_URL || 'https://you-need-to-set-public-server-env-to-support-files',
verifyUserEmails: true,
emailAdapter: { // null email adapter
sendMail: () => {},
sendVerificationEmail: () => {},
sendPasswordResetEmail: () => {}
},
filesAdapter,
...parseOptions,
});
return {
functionOptions: {
timeOut: functionOptions.timeOut || 15,
memoryLimit: functionOptions.memoryLimit || 256
},
options: {
port,
appId,
masterKey,
},
server,
logger,
};
}
module.exports = {
createNoodlParseServer
}

45
tests/README.md Normal file

@@ -0,0 +1,45 @@
# Tests
In the `project` folder we have a Noodl project that includes a few tests for Cloud Functions. To run the tests, you have to open the project and deploy it to your Cloud Service and frontend.
## Data
Here is a step-by-step guide on how to set up the data required to run the tests.
1. Import the schema
2. Create a `Test` record where the `Text` column is `wagga` (required for "Simple Function node" test)
3. Create 2 config parameters:
If the Cloud Service is running on "localhost":
- name: `TestParameter`, type: `String`, value: `woff`, master key only: `false`
- name: `TestProtected`, type: `String`, value: `buff`, master key only: `true`
Otherwise:
- name: `TestParameter`, type: `String`, value: `wagga`, master key only: `false`
- name: `TestProtected`, type: `String`, value: `buff`, master key only: `true`
4. Create a `Test` record where the columns are (required for Test Record API):
```
ANumber: 15
ADate: 2022-11-07T10:23:52.301Z
AString: Test
ABoolean: true
AnObject: {"hej":"ho"}
AnArray: ["a", "b"]
Text: fetch-test
```
5. Import TestQuery data
6. Change the `Parent` pointer of "Lisa" to "Marge" (might have to empty the field before updating it)
7. Add "Lisa", "Bart" and "Maggie" as Children to "Homer"
8. Disable the Class Level Permissions (CLP) for Create on the User class. (required for "Disable sign up")
9. Create a user with:
```
username: test
password: test
```
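
As an illustration of step 4, the record can also be created through the Parse REST API. This is a sketch only: the server URL, app ID, and master key below are placeholders for your own Cloud Service.

```javascript
// The payload for the `Test` record from step 4. Over the REST API, Parse
// expects Date columns as { __type: 'Date', iso: ... } objects.
const record = {
  ANumber: 15,
  ADate: { __type: 'Date', iso: '2022-11-07T10:23:52.301Z' },
  AString: 'Test',
  ABoolean: true,
  AnObject: { hej: 'ho' },
  AnArray: ['a', 'b'],
  Text: 'fetch-test',
};

// Placeholder values -- replace with your own Cloud Service settings.
const SERVER_URL = 'http://localhost:3000';
const APP_ID = 'your-app-id';
const MASTER_KEY = 'your-master-key';

// Uncomment to actually create the record:
// await fetch(`${SERVER_URL}/classes/Test`, {
//   method: 'POST',
//   headers: {
//     'Content-Type': 'application/json',
//     'X-Parse-Application-Id': APP_ID,
//     'X-Parse-Master-Key': MASTER_KEY,
//   },
//   body: JSON.stringify(record),
// });
```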

6
tests/TestQuery.csv Normal file

@@ -0,0 +1,6 @@
Sex,Parent,Name,Age,Children
Female,nBZhDO0R27,Lisa,11,
Female,,Marge,40,
Male,,Homer,42,
Male,,Bart,13,
Female,,Maggie,2,

1
tests/project/.gitattributes vendored Normal file

@@ -0,0 +1 @@
project.json merge=noodl

4
tests/project/.gitignore vendored Normal file

@@ -0,0 +1,4 @@
project-tmp.json
.DS_Store
__MACOSX
project-tmp.json*


@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Binary file not shown.


File diff suppressed because it is too large

21978
tests/project/project.json Normal file

File diff suppressed because it is too large

521
tests/schema.json Normal file

@@ -0,0 +1,521 @@
[
{
"className": "_Role",
"fields": {
"objectId": {
"type": "String"
},
"createdAt": {
"type": "Date"
},
"updatedAt": {
"type": "Date"
},
"ACL": {
"type": "ACL"
},
"name": {
"type": "String"
},
"users": {
"type": "Relation",
"targetClass": "_User"
},
"roles": {
"type": "Relation",
"targetClass": "_Role"
}
},
"classLevelPermissions": {
"find": {
"*": true
},
"count": {
"*": true
},
"get": {
"*": true
},
"create": {
"*": true
},
"update": {
"*": true
},
"delete": {
"*": true
},
"addField": {
"*": true
},
"protectedFields": {
"*": []
}
},
"indexes": {
"_id_": {
"_id": 1
},
"name_1": {
"name": 1
}
}
},
{
"className": "_User",
"fields": {
"objectId": {
"type": "String"
},
"createdAt": {
"type": "Date"
},
"updatedAt": {
"type": "Date"
},
"ACL": {
"type": "ACL"
},
"username": {
"type": "String"
},
"password": {
"type": "String"
},
"email": {
"type": "String"
},
"emailVerified": {
"type": "Boolean"
},
"authData": {
"type": "Object"
},
"code": {
"type": "Number"
},
"error": {
"type": "String"
},
"sessionToken": {
"type": "String"
},
"logInDisabled": {
"type": "Boolean"
}
},
"classLevelPermissions": {
"find": {},
"count": {},
"get": {},
"create": {},
"update": {},
"delete": {},
"addField": {},
"protectedFields": {
"*": []
}
},
"indexes": {
"_id_": {
"_id": 1
},
"username_1": {
"username": 1
},
"case_insensitive_email": {
"email": 1
},
"case_insensitive_username": {
"username": 1
},
"email_1": {
"email": 1
}
}
},
{
"className": "Ndl_CF",
"fields": {
"objectId": {
"type": "String"
},
"createdAt": {
"type": "Date"
},
"updatedAt": {
"type": "Date"
},
"ACL": {
"type": "ACL"
},
"code": {
"type": "String",
"required": false
},
"version": {
"type": "String"
},
"runtime": {
"type": "String"
}
},
"classLevelPermissions": {
"find": {
"*": true
},
"count": {
"*": true
},
"get": {
"*": true
},
"create": {
"*": true
},
"update": {
"*": true
},
"delete": {
"*": true
},
"addField": {
"*": true
},
"protectedFields": {
"*": []
}
},
"indexes": {
"_id_": {
"_id": 1
}
}
},
{
"className": "Test",
"fields": {
"objectId": {
"type": "String"
},
"createdAt": {
"type": "Date"
},
"updatedAt": {
"type": "Date"
},
"ACL": {
"type": "ACL"
},
"AnObject": {
"type": "Object",
"required": false
},
"Text": {
"type": "String",
"required": false
},
"ANumber": {
"type": "Number",
"required": false
},
"ADate": {
"type": "Date",
"required": false
},
"AString": {
"type": "String",
"required": false
},
"ABoolean": {
"type": "Boolean",
"required": false
},
"AnArray": {
"type": "Array",
"required": false
},
"ABool": {
"type": "Boolean"
},
"ARelation": {
"type": "Relation",
"targetClass": "Test"
}
},
"classLevelPermissions": {
"find": {
"*": true
},
"count": {
"*": true
},
"get": {
"*": true
},
"create": {
"*": true
},
"update": {
"*": true
},
"delete": {
"*": true
},
"addField": {
"*": true
},
"protectedFields": {
"*": []
}
},
"indexes": {
"_id_": {
"_id": 1
}
}
},
{
"className": "TestQuery",
"fields": {
"objectId": {
"type": "String"
},
"createdAt": {
"type": "Date"
},
"updatedAt": {
"type": "Date"
},
"ACL": {
"type": "ACL"
},
"Name": {
"type": "String",
"required": false
},
"Age": {
"type": "Number",
"required": false
},
"Parent": {
"type": "Pointer",
"targetClass": "TestQuery",
"required": false
},
"Children": {
"type": "Relation",
"targetClass": "TestQuery"
},
"Sex": {
"type": "String",
"required": false
}
},
"classLevelPermissions": {
"find": {
"*": true
},
"count": {
"*": true
},
"get": {
"*": true
},
"create": {
"*": true
},
"update": {
"*": true
},
"delete": {
"*": true
},
"addField": {
"*": true
},
"protectedFields": {
"*": []
}
},
"indexes": {
"_id_": {
"_id": 1
}
}
},
{
"className": "Post",
"fields": {
"objectId": {
"type": "String"
},
"createdAt": {
"type": "Date"
},
"updatedAt": {
"type": "Date"
},
"ACL": {
"type": "ACL"
},
"Title": {
"type": "String",
"required": false
}
},
"classLevelPermissions": {
"find": {
"*": true
},
"count": {
"*": true
},
"get": {
"*": true
},
"create": {
"*": true
},
"update": {
"*": true
},
"delete": {
"*": true
},
"addField": {
"*": true
},
"protectedFields": {
"*": []
}
},
"indexes": {
"_id_": {
"_id": 1
}
}
},
{
"className": "Group",
"fields": {
"objectId": {
"type": "String"
},
"createdAt": {
"type": "Date"
},
"updatedAt": {
"type": "Date"
},
"ACL": {
"type": "ACL"
},
"posts": {
"type": "Relation",
"targetClass": "Post"
},
"Name": {
"type": "String"
}
},
"classLevelPermissions": {
"find": {
"*": true
},
"count": {
"*": true
},
"get": {
"*": true
},
"create": {
"*": true
},
"update": {
"*": true
},
"delete": {
"*": true
},
"addField": {
"*": true
},
"protectedFields": {
"*": []
}
},
"indexes": {
"_id_": {
"_id": 1
}
}
},
{
"className": "_Session",
"fields": {
"objectId": {
"type": "String"
},
"createdAt": {
"type": "Date"
},
"updatedAt": {
"type": "Date"
},
"ACL": {
"type": "ACL"
},
"restricted": {
"type": "Boolean"
},
"user": {
"type": "Pointer",
"targetClass": "_User"
},
"installationId": {
"type": "String"
},
"sessionToken": {
"type": "String"
},
"expiresAt": {
"type": "Date"
},
"createdWith": {
"type": "Object"
}
},
"classLevelPermissions": {
"find": {
"*": true
},
"count": {
"*": true
},
"get": {
"*": true
},
"create": {
"*": true
},
"update": {
"*": true
},
"delete": {
"*": true
},
"addField": {
"*": true
},
"protectedFields": {
"*": []
}
},
"indexes": {
"_id_": {
"_id": 1
}
}
}
]