
chore: merge from dev #1017

Merged: 18 commits, Feb 21, 2024
2 changes: 1 addition & 1 deletion .github/workflows/build-test.yml
Original file line number Diff line number Diff line change
@@ -90,4 +90,4 @@ jobs:
run: pnpm install --frozen-lockfile

- name: Test
run: pnpm run test-ci
run: pnpm run test-scaffold && pnpm run test-ci
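The workflow change chains the two scripts with `&&`, so `test-ci` only runs when the scaffold step succeeds. A minimal stand-alone illustration of that short-circuit behavior (the `step` function is a hypothetical stand-in for the real pnpm scripts):

```shell
# 'step NAME STATUS' echoes its name and returns the given status,
# standing in for 'pnpm run test-scaffold' / 'pnpm run test-ci'.
step() { echo "running $1"; return "$2"; }

# Left side succeeds: the right-hand command runs as well.
step scaffold 0 && step test-ci 0

# Left side fails: '&&' short-circuits and test-ci never runs.
step scaffold 1 && step test-ci 0 || echo "scaffold failed; test-ci skipped"
```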
1 change: 1 addition & 0 deletions .gitignore
@@ -7,3 +7,4 @@ dist
.npmcache
coverage
.build
.test
8 changes: 8 additions & 0 deletions CONTRIBUTING.md
@@ -35,6 +35,14 @@ I want to thank you first for considering contributing to ZenStack 🙏🏻. It's
pnpm build
```

1. Scaffold the project used for testing

```bash
pnpm test-scaffold
```

You only need to run this command once.

1. Run tests

```bash
```
6 changes: 5 additions & 1 deletion jest.config.ts
@@ -3,15 +3,19 @@
* https://jestjs.io/docs/configuration
*/

import path from 'path';

export default {
// Automatically clear mock calls, instances, contexts and results before every test
clearMocks: true,

globalSetup: path.join(__dirname, './test-setup.ts'),

// Indicates whether the coverage information should be collected while executing the test
collectCoverage: true,

// The directory where Jest should output its coverage files
coverageDirectory: 'tests/coverage',
coverageDirectory: path.join(__dirname, '.test/coverage'),

// An array of regexp pattern strings used to skip coverage collection
coveragePathIgnorePatterns: ['/node_modules/', '/tests/'],
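The `globalSetup` entry added above points Jest at a `test-setup.ts` in the repo root, which is not shown in this diff. A Jest global-setup module generally has this shape — a hypothetical sketch, not the actual ZenStack file:

```typescript
import fs from 'fs';
import path from 'path';

// Jest invokes the default export exactly once, before any test suite runs.
export default async function globalSetup(): Promise<void> {
    // For example, make sure a shared '.test' output directory
    // (matching the coverageDirectory above) exists up front.
    const testDir = path.join(process.cwd(), '.test');
    fs.mkdirSync(testDir, { recursive: true });
}
```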
6 changes: 4 additions & 2 deletions package.json
@@ -5,8 +5,9 @@
"scripts": {
"build": "pnpm -r build",
"lint": "pnpm -r lint",
"test": "ZENSTACK_TEST=1 pnpm -r run test --silent --forceExit",
"test-ci": "ZENSTACK_TEST=1 pnpm -r run test --silent --forceExit",
"test": "ZENSTACK_TEST=1 pnpm -r --parallel run test --silent --forceExit",
"test-ci": "ZENSTACK_TEST=1 pnpm -r --parallel run test --silent --forceExit",
"test-scaffold": "tsx script/test-scaffold.ts",
"publish-all": "pnpm --filter \"./packages/**\" -r publish --access public",
"publish-preview": "pnpm --filter \"./packages/**\" -r publish --force --registry https://preview.registry.zenstack.dev/",
"unpublish-preview": "pnpm --recursive --shell-mode exec -- npm unpublish -f --registry https://preview.registry.zenstack.dev/ \"\\$PNPM_PACKAGE_NAME\"",
@@ -33,6 +34,7 @@
"ts-jest": "^29.1.1",
"ts-node": "^10.9.1",
"tsup": "^8.0.1",
"tsx": "^4.7.1",
"typescript": "^5.3.2"
}
}
5 changes: 5 additions & 0 deletions packages/ide/jetbrains/CHANGELOG.md
@@ -1,6 +1,11 @@
# Changelog

## [Unreleased]
### Added
- Added support to complex usage of `@@index` attribute like `@@index([content(ops: raw("gin_trgm_ops"))], type: Gin)`.
### Fixed
- Fixed several ZModel validation issues related to model inheritance.

## 1.7.0
### Added
- Auto-completion is now supported inside attributes.
4 changes: 2 additions & 2 deletions packages/plugins/tanstack-query/src/runtime/vue.ts
@@ -61,7 +61,7 @@ export function useModelQuery<TQueryFnData, TData, TError>(
model: string,
url: string,
args?: unknown,
options?: UseQueryOptions<TQueryFnData, TError, TData>,
options?: Omit<UseQueryOptions<TQueryFnData, TError, TData>, 'queryKey'>,
fetch?: FetchFn,
optimisticUpdate = false
) {
@@ -87,7 +87,7 @@ export function useInfiniteModelQuery<TQueryFnData, TData, TError>(
model: string,
url: string,
args?: unknown,
options?: UseInfiniteQueryOptions<TQueryFnData, TError, TData>,
options?: Omit<UseInfiniteQueryOptions<TQueryFnData, TError, TData>, 'queryKey'>,
fetch?: FetchFn
) {
return useInfiniteQuery<TQueryFnData, TError, TData>({
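Both hooks above wrap the options type in `Omit<..., 'queryKey'>` so callers can no longer pass a `queryKey` that the hook itself computes. A reduced TypeScript sketch of the pattern (types and names are ours, not the plugin's):

```typescript
// The wrapper owns queryKey; callers may configure everything else.
interface QueryOptions {
    queryKey: string[];
    staleTime?: number;
}

function useWrapped(options?: Omit<QueryOptions, 'queryKey'>): QueryOptions {
    // Spreading caller options cannot override queryKey: the Omit type
    // statically rejects any attempt to pass one in.
    return { queryKey: ['model', 'findMany'], ...options };
}

const opts = useWrapped({ staleTime: 5000 });
console.log(opts.queryKey); // key computed by the wrapper, not the caller
```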
8 changes: 7 additions & 1 deletion packages/schema/src/cli/actions/repl.ts
@@ -2,7 +2,6 @@
/* eslint-disable @typescript-eslint/no-var-requires */
import colors from 'colors';
import path from 'path';
import prettyRepl from 'pretty-repl';
import { inspect } from 'util';

// inspired by: https://github.com/Kinjalrk2k/prisma-console
@@ -11,6 +10,13 @@ import { inspect } from 'util';
* CLI action for starting a REPL session
*/
export async function repl(projectPath: string, options: { prismaClient?: string; debug?: boolean; table?: boolean }) {
if (!process?.stdout?.isTTY && process?.versions?.bun) {
console.error('REPL on Bun is only available in a TTY terminal at this time. Please use npm/npx to run the command in this context instead of bun/bunx.');
return;
}
Review comment (Contributor), on lines +13 to +16 (also applies to line 18):

The updates to the repl.ts file, including the environment check before initializing the REPL session and moving the prettyRepl import inside the repl function, are thoughtful improvements. These changes enhance the CLI's usability and performance by ensuring compatibility and optimizing the import strategy. Consider adding comments explaining the environment check's purpose and the rationale behind moving the prettyRepl import for future maintainers:

+    // Check for a specific environment to ensure REPL compatibility
     if (!process?.stdout?.isTTY && process?.versions?.bun) {
+    // Import 'prettyRepl' inside the function to optimize performance
     const prettyRepl = await import('pretty-repl')

const prettyRepl = await import('pretty-repl')

console.log('Welcome to ZenStack REPL. See help with the ".help" command.');
console.log('Global variables:');
console.log(` ${colors.blue('db')} to access enhanced PrismaClient`);
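The guard and the moved import in `repl.ts` combine two general techniques: detecting the runtime from `process.versions`, and deferring a heavy dependency with a dynamic `import()` until it is actually needed. A self-contained sketch of both (function names are ours, not ZenStack's):

```typescript
// Bun populates process.versions.bun; under Node.js it is undefined.
function detectRuntime(versions: Partial<Record<string, string>>): 'bun' | 'node' {
    return versions.bun ? 'bun' : 'node';
}

async function startRepl(): Promise<string> {
    // Bail out early where the REPL cannot work (non-TTY Bun session).
    if (detectRuntime(process.versions) === 'bun' && !process.stdout.isTTY) {
        return 'unsupported';
    }
    // Dynamic import: the module's load cost is paid only on this code path.
    const { inspect } = await import('util');
    return inspect({ ready: true });
}
```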
Original file line number Diff line number Diff line change
@@ -5,6 +5,7 @@ import {
isDataModel,
isStringLiteral,
ReferenceExpr,
isEnum,
} from '@zenstackhq/language/ast';
import { getLiteral, getModelIdFields, getModelUniqueFields, isDelegateModel } from '@zenstackhq/sdk';
import { AstNode, DiagnosticInfo, getDocument, ValidationAcceptor } from 'langium';
@@ -61,8 +62,13 @@ export default class DataModelValidator implements AstValidator<DataModel> {
if (idField.type.optional) {
accept('error', 'Field with @id attribute must not be optional', { node: idField });
}
if (idField.type.array || !idField.type.type || !SCALAR_TYPES.includes(idField.type.type)) {
accept('error', 'Field with @id attribute must be of scalar type', { node: idField });

const isArray = idField.type.array;
const isScalar = SCALAR_TYPES.includes(idField.type.type as typeof SCALAR_TYPES[number])
const isValidType = isScalar || isEnum(idField.type.reference?.ref)

if (isArray || !isValidType) {
accept('error', 'Field with @id attribute must be of scalar or enum type', { node: idField });
}
});
}
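The cast `as typeof SCALAR_TYPES[number]` in the new validator code works around TypeScript's strict `includes()` signature on `as const` tuples, which only accepts the tuple's own element union. A reduced illustration (the type list here is a shortened stand-in, not ZenStack's full SCALAR_TYPES):

```typescript
// Shortened stand-in for the validator's SCALAR_TYPES constant.
const SCALAR_TYPES = ['String', 'Int', 'Boolean', 'DateTime'] as const;

function isScalarType(t: string | undefined): boolean {
    // includes() on a readonly tuple only accepts 'String' | 'Int' | ...,
    // so an arbitrary string has to be cast before the membership test.
    return t !== undefined && SCALAR_TYPES.includes(t as (typeof SCALAR_TYPES)[number]);
}

console.log(isScalarType('Int'));  // scalar type: passes
console.log(isScalarType('Role')); // e.g. an enum reference: fails the scalar check
```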
Original file line number Diff line number Diff line change
@@ -52,8 +52,9 @@ export default class SchemaValidator implements AstValidator<Model> {
private validateImports(model: Model, accept: ValidationAcceptor) {
model.imports.forEach((imp) => {
const importedModel = resolveImport(this.documents, imp);
const importPath = imp.path.endsWith('.zmodel') ? imp.path : `${imp.path}.zmodel`;
if (!importedModel) {
accept('error', `Cannot find model file ${imp.path}.zmodel`, { node: imp });
accept('error', `Cannot find model file ${importPath}`, { node: imp });
Review comment (Contributor), on lines +55 to +57:

The update to the validateImports method in the SchemaValidator class, ensuring the import path is correctly formatted with the .zmodel extension, is a practical improvement. This change enhances the reliability of import validation by standardizing the import path format. Consider adding a comment explaining the rationale behind conditionally appending the .zmodel extension for future maintainers:

+    // Ensure the import path ends with '.zmodel' for consistent path resolution
     const importPath = imp.path.endsWith('.zmodel') ? imp.path : `${imp.path}.zmodel`;

}
});
}
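The conditional extension handling above is a small normalization idiom worth isolating; extracted into a helper (hypothetical name, same logic as the diff), it reads:

```typescript
// Append '.zmodel' only when the import path does not already end with it,
// so error messages never show a doubled extension like 'user.zmodel.zmodel'.
function normalizeImportPath(p: string): string {
    return p.endsWith('.zmodel') ? p : `${p}.zmodel`;
}

console.log(normalizeImportPath('models/user'));        // models/user.zmodel
console.log(normalizeImportPath('models/user.zmodel')); // models/user.zmodel
```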
10 changes: 5 additions & 5 deletions packages/schema/src/plugins/prisma/schema-generator.ts
@@ -56,7 +56,7 @@ import { upperCaseFirst } from 'upper-case-first';
import { name } from '.';
import { getStringLiteral } from '../../language-server/validator/utils';
import telemetry from '../../telemetry';
import { execSync } from '../../utils/exec-utils';
import { execPackage } from '../../utils/exec-utils';
import { findPackageJson } from '../../utils/pkg-utils';
import {
AttributeArgValue,
@@ -142,7 +142,7 @@ export class PrismaSchemaGenerator {
if (options.format === true) {
try {
// run 'prisma format'
await execSync(`npx prisma format --schema ${outFile}`, { stdio: 'ignore' });
await execPackage(`prisma format --schema ${outFile}`, { stdio: 'ignore' });
} catch {
warnings.push(`Failed to format Prisma schema file`);
}
@@ -151,18 +151,18 @@
const generateClient = options.generateClient !== false;

if (generateClient) {
let generateCmd = `npx prisma generate --schema "${outFile}"`;
let generateCmd = `prisma generate --schema "${outFile}"`;
if (typeof options.generateArgs === 'string') {
generateCmd += ` ${options.generateArgs}`;
}
try {
// run 'prisma generate'
await execSync(generateCmd, { stdio: 'ignore' });
await execPackage(generateCmd, { stdio: 'ignore' });
} catch {
await this.trackPrismaSchemaError(outFile);
try {
// run 'prisma generate' again with output to the console
await execSync(generateCmd);
await execPackage(generateCmd);
Review comment (Contributor):

Attempting to run prisma generate again with output to the console after a failure is a good fallback strategy. However, consider logging the error or providing more detailed feedback to the user or developer about what went wrong during the first attempt.

} catch {
// noop
}
4 changes: 2 additions & 2 deletions packages/schema/src/res/stdlib.zmodel
@@ -390,7 +390,7 @@ attribute @updatedAt() @@@targetField([DateTimeField]) @@@prisma
/**
* Add full text index (MySQL only).
*/
attribute @@fulltext(_ fields: FieldReference[]) @@@prisma
attribute @@fulltext(_ fields: FieldReference[], map: String?) @@@prisma


// String type modifiers
@@ -479,7 +479,7 @@ attribute @db.Bytes() @@@targetField([BytesField]) @@@prisma
attribute @db.ByteA() @@@targetField([BytesField]) @@@prisma
attribute @db.LongBlob() @@@targetField([BytesField]) @@@prisma
attribute @db.Binary() @@@targetField([BytesField]) @@@prisma
attribute @db.VarBinary() @@@targetField([BytesField]) @@@prisma
attribute @db.VarBinary(_ x: Int?) @@@targetField([BytesField]) @@@prisma
attribute @db.TinyBlob() @@@targetField([BytesField]) @@@prisma
attribute @db.Blob() @@@targetField([BytesField]) @@@prisma
attribute @db.MediumBlob() @@@targetField([BytesField]) @@@prisma
8 changes: 8 additions & 0 deletions packages/schema/src/utils/exec-utils.ts
@@ -8,3 +8,11 @@ export function execSync(cmd: string, options?: Omit<ExecSyncOptions, 'env'> & {
const mergedEnv = env ? { ...process.env, ...env } : undefined;
_exec(cmd, { encoding: 'utf-8', stdio: options?.stdio ?? 'inherit', env: mergedEnv, ...restOptions });
}

/**
* Utility for running package commands through npx/bunx
*/
export function execPackage(cmd: string, options?: Omit<ExecSyncOptions, 'env'> & { env?: Record<string, string> }): void {
const packageManager = process?.versions?.bun ? 'bunx' : 'npx';
execSync(`${packageManager} ${cmd}`, options)
}
Review comment (Contributor), on lines +12 to +18:

The addition of the execPackage utility function is a valuable enhancement, providing a flexible way to execute package commands with either npx or bunx based on the environment. This approach aligns with modern development practices and supports the use of alternative package managers like bun. Consider adding a brief comment explaining the rationale behind choosing between npx and bunx for future maintainers:

+    // Choose 'bunx' if 'bun' is present in the environment, otherwise default to 'npx'
     const packageManager = process?.versions?.bun ? 'bunx' : 'npx';
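The selection logic in `execPackage` keys off `process.versions.bun`, which Bun defines and Node.js does not. Isolated as a pure function (our own naming, same check as the diff), the choice looks like this:

```typescript
// Bun sets process.versions.bun to its version string; Node.js leaves it
// unset, so its presence selects 'bunx' over 'npx' as the package runner.
function packageRunner(versions: Partial<Record<string, string>>): 'bunx' | 'npx' {
    return versions.bun ? 'bunx' : 'npx';
}

// Under Node.js this builds an npx invocation, e.g. for 'prisma format':
console.log(`${packageRunner(process.versions)} prisma format --schema ./schema.prisma`);
```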

2 changes: 1 addition & 1 deletion packages/schema/tests/schema/stdlib.test.ts
@@ -24,7 +24,7 @@ describe('Stdlib Tests', () => {
}`
);
}
throw new SchemaLoadingError(validationErrors.map((e) => e.message));
throw new SchemaLoadingError(validationErrors);
}
});
});
Original file line number Diff line number Diff line change
@@ -226,6 +226,25 @@ describe('Attribute tests', () => {
}
`);

await loadModel(`
${ prelude }
model A {
id String @id
x String
y String
z String
@@fulltext([x, y, z])
}

model B {
id String @id
x String
y String
z String
@@fulltext([x, y, z], map: "n")
}
`);
Review comment (Contributor), on lines +229 to +246:

The addition of models A and B with @@fulltext directives is a significant enhancement. It's important to ensure that the fields specified in the @@fulltext directive (x, y, z) are of a type that supports full-text search. Given that they are defined as String, this aligns with expectations for full-text search capabilities. However, it would be beneficial to include tests that specifically validate the behavior and functionality of these @@fulltext directives to ensure they work as expected in various scenarios.


await loadModel(`
${prelude}
model A {
@@ -352,6 +371,7 @@ describe('Attribute tests', () => {
_longBlob Bytes @db.LongBlob
_binary Bytes @db.Binary
_varBinary Bytes @db.VarBinary
_varBinarySized Bytes @db.VarBinary(100)
Review comment (Contributor):

The introduction of the _varBinarySized field with a specified size (@db.VarBinary(100)) is a good practice for fields where the size can be anticipated, as it can help optimize storage and performance. However, it's crucial to ensure that the specified size aligns with the expected usage patterns and data sizes for this field. If this field is expected to store data that could exceed 100 bytes, consider adjusting the size accordingly or documenting the rationale behind the chosen size limit.

_tinyBlob Bytes @db.TinyBlob
_blob Bytes @db.Blob
_mediumBlob Bytes @db.MediumBlob