Difference From trunk To release
2024-04-18
17:14  Update the built-in SQLite to the latest pre-release of version 3.46.0, including the bug fix for the use of VALUES-as-coroutine with an OUTER JOIN. ... (Leaf check-in: 8be14e39 user: drh tags: trunk)
17:00  Typo fix and add specific Apache version number to the notes about the Content-Length change. ... (check-in: d0f42889 user: stephan tags: trunk)
2023-11-02
19:37  For the "fossil sync" command, if the -v option is repeated, the HTTP_VERBOSE flag is set on the http_exchange() call, resulting in additional debugging output for the wire protocol. ... (check-in: 80896224 user: drh tags: trunk)
12:44  Check if markdown paragraphs contain lists. Fixes issue reported in b598ac56defddb2a. ... (Closed-Leaf check-in: 25028896 user: preben tags: markdown-multiple-sublists)
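The sync change above is visible from the command line. A sketch of the two invocation levels (fossil must be on PATH; the exact trace format is not described by the check-in comment):

```shell
# One -v: the usual verbose output for "fossil sync".
fossil sync -v

# Repeated -v: additionally sets HTTP_VERBOSE on the underlying
# http_exchange() call, tracing the wire protocol.
fossil sync -v -v
```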
2023-11-01
18:56  Version 2.23 ... (check-in: 47362306 user: drh tags: trunk, release, version-2.23)
14:13  Update the built-in SQLite to version 3.44.0. ... (check-in: 72e14351 user: drh tags: trunk)
Changes to Dockerfile.
(lines 79-90 of the release version)

## ---------------------------------------------------------------------
## RUN!
## ---------------------------------------------------------------------

ENV PATH "/bin"
EXPOSE 8080/tcp
USER fossil
ENTRYPOINT [ "fossil", "server", "museum/repo.fossil" ]
CMD [ \
     "--create", \
     "--jsmode", "bundled", \
     "--user", "admin" ]
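Splitting the invocation into an ENTRYPOINT plus a CMD makes the trailing options overridable defaults: arguments given after the image name replace only CMD, while the `fossil server museum/repo.fossil` part always runs. A sketch of the effect (the image name `fossil-img` is a placeholder):

```shell
# Default: runs
#   fossil server museum/repo.fossil --create --jsmode bundled --user admin
docker run -p 8080:8080 fossil-img

# User arguments replace CMD but not ENTRYPOINT, so this runs
#   fossil server museum/repo.fossil --port 8080
docker run -p 8080:8080 fossil-img --port 8080
```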
Changes to VERSION.
(line 1 of the release version)

2.23
Changes to auto.def.
(lines 453-466 of the release version)

      }
    }
  }
  if {$found} {
    define FOSSIL_ENABLE_SSL
    define-append EXTRA_CFLAGS $cflags
    define-append EXTRA_LDFLAGS $ldflags
    if {[info exists ssllibs]} {
      define-append LIBS $ssllibs
    } else {
      define-append LIBS -lssl -lcrypto
    }
    if {[info exists ::zlib_lib]} {
      define-append LIBS $::zlib_lib
(lines 653-666 of the release version)

  }
  set version $tclconfig(TCL_VERSION)$tclconfig(TCL_PATCH_LEVEL)
  msg-result "Found Tcl $version at $tclconfig(TCL_PREFIX)"
  if {!$tclprivatestubs} {
    define-append LIBS $libs
  }
  define-append EXTRA_CFLAGS $cflags
  if {[info exists zlibpath] && $zlibpath eq "tree"} {
    #
    # NOTE: When using zlib in the source tree, prevent Tcl from
    #       pulling in the system one.
    #
    set tclconfig(TCL_LD_FLAGS) [string map [list -lz ""] \
        $tclconfig(TCL_LD_FLAGS)]
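The new branch appends `$ssllibs` when the SSL probe recorded specific libraries, and falls back to the conventional `-lssl -lcrypto` pair otherwise. At configure time this logic is driven by the OpenSSL options; a sketch (the path below is a placeholder, and whether `--with-openssl` accepts these exact values should be checked against `./configure --help` for your tree):

```shell
# Let the probe find the default system OpenSSL; if it records no
# specific libraries, the build falls back to -lssl -lcrypto:
./configure --with-openssl=auto

# Or point the probe at a specific OpenSSL installation:
./configure --with-openssl=/opt/openssl
make
```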
Changes to autosetup/autosetup-find-tclsh.
(lines 1-17 of the release version)

#!/bin/sh
# Looks for a suitable tclsh or jimsh in the PATH
# If not found, builds a bootstrap jimsh from source
# Prefer $autosetup_tclsh if is set in the environment
d=`dirname "$0"`
{ "$d/jimsh0" "$d/autosetup-test-tclsh"; } 2>/dev/null && exit 0
PATH="$PATH:$d"; export PATH
for tclsh in $autosetup_tclsh jimsh tclsh tclsh8.5 tclsh8.6; do
  { $tclsh "$d/autosetup-test-tclsh"; } 2>/dev/null && exit 0
done
echo 1>&2 "No installed jimsh or tclsh, building local bootstrap jimsh0"
for cc in ${CC_FOR_BUILD:-cc} gcc; do
  { $cc -o "$d/jimsh0" "$d/jimsh0.c"; } 2>/dev/null || continue
  "$d/jimsh0" "$d/autosetup-test-tclsh" && exit 0
done
echo 1>&2 "No working C compiler found. Tried ${CC_FOR_BUILD:-cc} and gcc."
echo false
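The script's central idiom is worth noting: each candidate is run inside `{ …; } 2>/dev/null`, so both a missing binary and a failing probe are silenced, and the first candidate that succeeds wins. A minimal, self-contained sketch of the same pattern (the candidate names here are stand-ins, not real tools):

```shell
# Probe a list of candidate commands, keeping the first that runs.
# "no-such-tool-xyz" fails (command not found, stderr suppressed);
# "true" succeeds and is selected.
found=
for cand in no-such-tool-xyz true; do
  { "$cand" </dev/null; } 2>/dev/null || continue
  found=$cand
  break
done
echo "picked: $found"
```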
Changes to extsrc/pikchr.c.
(lines 1-9 of the release version)

/* This file is automatically generated by Lemon from input grammar
** source file "pikchr.y".
*/
/*
** Zero-Clause BSD license:
**
** Copyright (C) 2020-09-01 by D. Richard Hipp <drh@sqlite.org>
**
** Permission to use, copy, modify, and/or distribute this software for
** any purpose with or without fee is hereby granted.
(lines 317-330 of the release version)

  PPoint with;             /* Position constraint from WITH clause */
  char eWith;              /* Type of heading point on WITH clause */
  char cw;                 /* True for clockwise arc */
  char larrow;             /* Arrow at beginning (<- or <->) */
  char rarrow;             /* Arrow at end (-> or <->) */
  char bClose;             /* True if "close" is seen */
  char bChop;              /* True if "chop" is seen */
  unsigned char nTxt;      /* Number of text values */
  unsigned mProp;          /* Masks of properties set so far */
  unsigned mCalc;          /* Values computed from other constraints */
  PToken aTxt[5];          /* Text with .eCode holding TP flags */
  int iLayer;              /* Rendering order */
  int inDir, outDir;       /* Entry and exit directions */
  int nPath;               /* Number of path points */
(lines 488-502 of the release version)

static void pik_behind(Pik*,PObj*);
static PObj *pik_assert(Pik*,PNum,PToken*,PNum);
static PObj *pik_position_assert(Pik*,PPoint*,PToken*,PPoint*);
static PNum pik_dist(PPoint*,PPoint*);
static void pik_add_macro(Pik*,PToken *pId,PToken *pCode);

#line 521 "pikchr.c"
/**************** End of %include directives **********************************/
/* These constants specify the various numeric values for terminal symbols.
***************** Begin token definitions *************************************/
#ifndef T_ID
#define T_ID 1
#define T_EDGEPT 2
#define T_OF 3
(lines 632-658 of the release version)

**                       zero the stack is dynamically sized using realloc()
**    pik_parserARG_SDECL     A static variable declaration for the %extra_argument
**    pik_parserARG_PDECL     A parameter declaration for the %extra_argument
**    pik_parserARG_PARAM     Code to pass %extra_argument as a subroutine parameter
**    pik_parserARG_STORE     Code to store %extra_argument into yypParser
**    pik_parserARG_FETCH     Code to extract %extra_argument from yypParser
**    pik_parserCTX_*         As pik_parserARG_ except for %extra_context
**    YYERRORSYMBOL      is the code number of the error symbol.  If not
**                       defined, then do no error processing.
**    YYNSTATE           the combined number of states.
**    YYNRULE            the number of rules in the grammar
**    YYNTOKEN           Number of terminal symbols
**    YY_MAX_SHIFT       Maximum value for shift actions
**    YY_MIN_SHIFTREDUCE Minimum value for shift-reduce actions
**    YY_MAX_SHIFTREDUCE Maximum value for shift-reduce actions
**    YY_ERROR_ACTION    The yy_action[] code for syntax error
**    YY_ACCEPT_ACTION   The yy_action[] code for accept
**    YY_NO_ACTION       The yy_action[] code for no-op
**    YY_MIN_REDUCE      Minimum value for reduce actions
**    YY_MAX_REDUCE      Maximum value for reduce actions
*/
#ifndef INTERFACE
# define INTERFACE 1
#endif
/************* Begin control #defines *****************************************/
#define YYCODETYPE unsigned char
#define YYNOCODE 136
(lines 672-718 of the release version)

# define YYSTACKDEPTH 100
#endif
#define pik_parserARG_SDECL
#define pik_parserARG_PDECL
#define pik_parserARG_PARAM
#define pik_parserARG_FETCH
#define pik_parserARG_STORE
#define pik_parserCTX_SDECL Pik *p;
#define pik_parserCTX_PDECL ,Pik *p
#define pik_parserCTX_PARAM ,p
#define pik_parserCTX_FETCH Pik *p=yypParser->p;
#define pik_parserCTX_STORE yypParser->p=p;
#define YYFALLBACK 1
#define YYNSTATE             164
#define YYNRULE              156
#define YYNRULE_WITH_ACTION  116
#define YYNTOKEN             100
#define YY_MAX_SHIFT         163
#define YY_MIN_SHIFTREDUCE   287
#define YY_MAX_SHIFTREDUCE   442
#define YY_ERROR_ACTION      443
#define YY_ACCEPT_ACTION     444
#define YY_NO_ACTION         445
#define YY_MIN_REDUCE        446
#define YY_MAX_REDUCE        601
/************* End control #defines *******************************************/
#define YY_NLOOKAHEAD ((int)(sizeof(yy_lookahead)/sizeof(yy_lookahead[0])))

/* Define the yytestcase() macro to be a no-op if is not already defined
** otherwise.
**
** Applications can choose to define yytestcase() in the %include section
** to a macro that can assist in verifying code coverage.  For production
** code the yytestcase() macro should be turned off.  But it is useful
** for testing.
*/
#ifndef yytestcase
# define yytestcase(X)
#endif

/* Next are the tables used to determine what action to take based on the
** current state and lookahead token.  These tables are used to implement
** functions that take a state number and lookahead value and return an
** action integer.
**
** Suppose the action integer is N.  Then the action is determined as
(lines 1248-1269 of the release version)

  int yyhwm;                    /* High-water mark of the stack */
#endif
#ifndef YYNOERRORRECOVERY
  int yyerrcnt;                 /* Shifts left before out of the error */
#endif
  pik_parserARG_SDECL           /* A place to hold %extra_argument */
  pik_parserCTX_SDECL           /* A place to hold %extra_context */
#if YYSTACKDEPTH<=0
  int yystksz;                  /* Current side of the stack */
  yyStackEntry *yystack;        /* The parser's stack */
  yyStackEntry yystk0;          /* First stack entry */
#else
  yyStackEntry yystack[YYSTACKDEPTH];  /* The parser's stack */
  yyStackEntry *yystackEnd;            /* Last entry in the stack */
#endif
};
typedef struct yyParser yyParser;

#include <assert.h>
#ifndef NDEBUG
#include <stdio.h>
static FILE *yyTraceFILE = 0;
(lines 1599-1680 of the release version)

 /* 153 */ "edge ::= RIGHT",
 /* 154 */ "edge ::= LEFT",
 /* 155 */ "object ::= objectname",
};
#endif /* NDEBUG */

#if YYSTACKDEPTH<=0
/*
** Try to increase the size of the parser stack.  Return the number
** of errors.  Return 0 on success.
*/
static int yyGrowStack(yyParser *p){
  int newSize;
  int idx;
  yyStackEntry *pNew;

  newSize = p->yystksz*2 + 100;
  idx = p->yytos ? (int)(p->yytos - p->yystack) : 0;
  if( p->yystack==&p->yystk0 ){
    pNew = malloc(newSize*sizeof(pNew[0]));
    if( pNew ) pNew[0] = p->yystk0;
  }else{
    pNew = realloc(p->yystack, newSize*sizeof(pNew[0]));
  }
  if( pNew ){
    p->yystack = pNew;
    p->yytos = &p->yystack[idx];
#ifndef NDEBUG
    if( yyTraceFILE ){
      fprintf(yyTraceFILE,"%sStack grows from %d to %d entries.\n",
              yyTracePrompt, p->yystksz, newSize);
    }
#endif
    p->yystksz = newSize;
  }
  return pNew==0;
}
#endif

/* Datatype of the argument to the memory allocated passed as the
** second argument to pik_parserAlloc() below.  This can be changed by
** putting an appropriate #define in the %include section of the input
** grammar.
*/
#ifndef YYMALLOCARGTYPE
# define YYMALLOCARGTYPE size_t
#endif

/* Initialize a new parser that has already been allocated.
*/
void pik_parserInit(void *yypRawParser pik_parserCTX_PDECL){
  yyParser *yypParser = (yyParser*)yypRawParser;
  pik_parserCTX_STORE
#ifdef YYTRACKMAXSTACKDEPTH
  yypParser->yyhwm = 0;
#endif
#if YYSTACKDEPTH<=0
  yypParser->yytos = NULL;
  yypParser->yystack = NULL;
  yypParser->yystksz = 0;
  if( yyGrowStack(yypParser) ){
    yypParser->yystack = &yypParser->yystk0;
    yypParser->yystksz = 1;
  }
#endif
#ifndef YYNOERRORRECOVERY
  yypParser->yyerrcnt = -1;
#endif
  yypParser->yytos = yypParser->yystack;
  yypParser->yystack[0].stateno = 0;
  yypParser->yystack[0].major = 0;
#if YYSTACKDEPTH>0
  yypParser->yystackEnd = &yypParser->yystack[YYSTACKDEPTH-1];
#endif
}

#ifndef pik_parser_ENGINEALWAYSONSTACK
/*
** This function allocates a new parser.
** The only argument is a pointer to a function which works like
** malloc.
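The `yyGrowStack()` routine above sizes each new stack as `oldSize*2 + 100`, so a parser starting with an empty dynamic stack allocates 100 entries first and then grows geometrically, keeping reallocation cost amortized. A quick sketch of that schedule:

```shell
# Reproduce the yyGrowStack() sizing rule: newSize = oldSize*2 + 100,
# starting from a size of 0 as in pik_parserInit().
size=0
schedule=
for step in 1 2 3 4; do
  size=$((size * 2 + 100))
  schedule="$schedule $size"
done
echo "growth:$schedule"
```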
(lines 1722-1747 of the release version)

** Note: during a reduce, the only symbols destroyed are those
** which appear on the RHS of the rule, but which are *not* used
** inside the C code.
*/
/********* Begin destructor definitions ***************************************/
    case 100: /* statement_list */
{
#line 510 "pikchr.y"
pik_elist_free(p,(yypminor->yy235));
#line 1756 "pikchr.c"
}
      break;
    case 101: /* statement */
    case 102: /* unnamed_statement */
    case 103: /* basetype */
{
#line 512 "pikchr.y"
pik_elem_free(p,(yypminor->yy162));
#line 1765 "pikchr.c"
}
      break;
/********* End destructor definitions *****************************************/
    default:  break;   /* If no destructor action specified: do nothing */
  }
}
(lines 1767-1783 of the release version)

}

/*
** Clear all secondary memory allocations from the parser
*/
void pik_parserFinalize(void *p){
  yyParser *pParser = (yyParser*)p;
  while( pParser->yytos>pParser->yystack ) yy_pop_parser_stack(pParser);
#if YYSTACKDEPTH<=0
  if( pParser->yystack!=&pParser->yystk0 ) free(pParser->yystack);
#endif
}

#ifndef pik_parser_ENGINEALWAYSONSTACK
/*
** Deallocate and destroy a parser.  Destructors are called for
** all stack elements before shutting the parser down.
(lines 1951-1968 of the release version)

    fprintf(yyTraceFILE,"%sStack Overflow!\n",yyTracePrompt);
  }
#endif
  while( yypParser->yytos>yypParser->yystack ) yy_pop_parser_stack(yypParser);
  /* Here code is inserted which will execute if the parser
  ** stack every overflows */
/******** Begin %stack_overflow code ******************************************/
#line 544 "pikchr.y"

  pik_error(p, 0, "parser stack overflow");
#line 1986 "pikchr.c"
/******** End %stack_overflow code ********************************************/
  pik_parserARG_STORE /* Suppress warning about unused %extra_argument var */
  pik_parserCTX_STORE
}

/*
** Print tracing information for a SHIFT action
(lines 1998-2030 of the release version)

  yypParser->yytos++;
#ifdef YYTRACKMAXSTACKDEPTH
  if( (int)(yypParser->yytos - yypParser->yystack)>yypParser->yyhwm ){
    yypParser->yyhwm++;
    assert( yypParser->yyhwm == (int)(yypParser->yytos - yypParser->yystack) );
  }
#endif
#if YYSTACKDEPTH>0
  if( yypParser->yytos>yypParser->yystackEnd ){
    yypParser->yytos--;
    yyStackOverflow(yypParser);
    return;
  }
#else
  if( yypParser->yytos>=&yypParser->yystack[yypParser->yystksz] ){
    if( yyGrowStack(yypParser) ){
      yypParser->yytos--;
      yyStackOverflow(yypParser);
      return;
    }
  }
#endif
  if( yyNewState > YY_MAX_SHIFT ){
    yyNewState += YY_MIN_REDUCE - YY_MIN_SHIFTREDUCE;
  }
  yytos = yypParser->yytos;
  yytos->stateno = yyNewState;
  yytos->major = yyMajor;
  yytos->minor.yy0 = yyMinor;
  yyTraceShift(yypParser, yyNewState, "Shift");
}

/* For rule J, yyRuleInfoLhs[J] contains the symbol on the left-hand side
(lines 2385-3007 of the release version)

**     { ... }           // User supplied code
**  #line <lineno> <thisfile>
**     break;
*/
/********** Begin reduce actions **********************************************/
        YYMINORTYPE yylhsminor;
      case 0: /* document ::= statement_list */
#line 548 "pikchr.y"
{pik_render(p,yymsp[0].minor.yy235);}
#line 2419 "pikchr.c"
        break;
      case 1: /* statement_list ::= statement */
#line 551 "pikchr.y"
{ yylhsminor.yy235 = pik_elist_append(p,0,yymsp[0].minor.yy162); }
#line 2424 "pikchr.c"
  yymsp[0].minor.yy235 = yylhsminor.yy235;
        break;
      case 2: /* statement_list ::= statement_list EOL statement */
#line 553 "pikchr.y"
{ yylhsminor.yy235 = pik_elist_append(p,yymsp[-2].minor.yy235,yymsp[0].minor.yy162); }
#line 2430 "pikchr.c"
  yymsp[-2].minor.yy235 = yylhsminor.yy235;
        break;
      case 3: /* statement ::= */
#line 556 "pikchr.y"
{ yymsp[1].minor.yy162 = 0; }
#line 2436 "pikchr.c"
        break;
      case 4: /* statement ::= direction */
#line 557 "pikchr.y"
{ pik_set_direction(p,yymsp[0].minor.yy0.eCode);  yylhsminor.yy162=0; }
#line 2441 "pikchr.c"
  yymsp[0].minor.yy162 = yylhsminor.yy162;
        break;
      case 5: /* statement ::= lvalue ASSIGN rvalue */
#line 558 "pikchr.y"
{pik_set_var(p,&yymsp[-2].minor.yy0,yymsp[0].minor.yy21,&yymsp[-1].minor.yy0); yylhsminor.yy162=0;}
#line 2447 "pikchr.c"
  yymsp[-2].minor.yy162 = yylhsminor.yy162;
        break;
      case 6: /* statement ::= PLACENAME COLON unnamed_statement */
#line 560 "pikchr.y"
{ yylhsminor.yy162 = yymsp[0].minor.yy162;  pik_elem_setname(p,yymsp[0].minor.yy162,&yymsp[-2].minor.yy0); }
#line 2453 "pikchr.c"
  yymsp[-2].minor.yy162 = yylhsminor.yy162;
        break;
      case 7: /* statement ::= PLACENAME COLON position */
#line 562 "pikchr.y"
{ yylhsminor.yy162 = pik_elem_new(p,0,0,0);
  if(yylhsminor.yy162){ yylhsminor.yy162->ptAt = yymsp[0].minor.yy63; pik_elem_setname(p,yylhsminor.yy162,&yymsp[-2].minor.yy0); }}
#line 2460 "pikchr.c"
  yymsp[-2].minor.yy162 = yylhsminor.yy162;
        break;
      case 8: /* statement ::= unnamed_statement */
#line 564 "pikchr.y"
{yylhsminor.yy162 = yymsp[0].minor.yy162;}
#line 2466 "pikchr.c"
  yymsp[0].minor.yy162 = yylhsminor.yy162;
        break;
      case 9: /* statement ::= print prlist */
#line 565 "pikchr.y"
{pik_append(p,"<br>\n",5); yymsp[-1].minor.yy162=0;}
#line 2472 "pikchr.c"
        break;
      case 10: /* statement ::= ASSERT LP expr EQ expr RP */
#line 570 "pikchr.y"
{yymsp[-5].minor.yy162=pik_assert(p,yymsp[-3].minor.yy21,&yymsp[-2].minor.yy0,yymsp[-1].minor.yy21);}
#line 2477 "pikchr.c"
        break;
      case 11: /* statement ::= ASSERT LP position EQ position RP */
#line 572 "pikchr.y"
{yymsp[-5].minor.yy162=pik_position_assert(p,&yymsp[-3].minor.yy63,&yymsp[-2].minor.yy0,&yymsp[-1].minor.yy63);}
#line 2482 "pikchr.c"
        break;
      case 12: /* statement ::= DEFINE ID CODEBLOCK */
#line 573 "pikchr.y"
{yymsp[-2].minor.yy162=0; pik_add_macro(p,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy0);}
#line 2487 "pikchr.c"
        break;
      case 13: /* rvalue ::= PLACENAME */
#line 584 "pikchr.y"
{yylhsminor.yy21 = pik_lookup_color(p,&yymsp[0].minor.yy0);}
#line 2492 "pikchr.c"
  yymsp[0].minor.yy21 = yylhsminor.yy21;
        break;
      case 14: /* pritem ::= FILL */
      case 15: /* pritem ::= COLOR */ yytestcase(yyruleno==15);
      case 16: /* pritem ::= THICKNESS */ yytestcase(yyruleno==16);
#line 589 "pikchr.y"
{pik_append_num(p,"",pik_value(p,yymsp[0].minor.yy0.z,yymsp[0].minor.yy0.n,0));}
#line 2500 "pikchr.c"
        break;
      case 17: /* pritem ::= rvalue */
#line 592 "pikchr.y"
{pik_append_num(p,"",yymsp[0].minor.yy21);}
#line 2505 "pikchr.c"
        break;
      case 18: /* pritem ::= STRING */
#line 593 "pikchr.y"
{pik_append_text(p,yymsp[0].minor.yy0.z+1,yymsp[0].minor.yy0.n-2,0);}
#line 2510 "pikchr.c"
        break;
      case 19: /* prsep ::= COMMA */
#line 594 "pikchr.y"
{pik_append(p, " ", 1);}
#line 2515 "pikchr.c"
        break;
      case 20: /* unnamed_statement ::= basetype attribute_list */
#line 597 "pikchr.y"
{yylhsminor.yy162 = yymsp[-1].minor.yy162; pik_after_adding_attributes(p,yylhsminor.yy162);}
#line 2520 "pikchr.c"
  yymsp[-1].minor.yy162 = yylhsminor.yy162;
        break;
      case 21: /* basetype ::= CLASSNAME */
#line 599 "pikchr.y"
{yylhsminor.yy162 = pik_elem_new(p,&yymsp[0].minor.yy0,0,0); }
#line 2526 "pikchr.c"
  yymsp[0].minor.yy162 = yylhsminor.yy162;
        break;
      case 22: /* basetype ::= STRING textposition */
#line 601 "pikchr.y"
{yymsp[-1].minor.yy0.eCode = yymsp[0].minor.yy188;
  yylhsminor.yy162 = pik_elem_new(p,0,&yymsp[-1].minor.yy0,0); }
#line 2532 "pikchr.c"
  yymsp[-1].minor.yy162 = yylhsminor.yy162;
        break;
      case 23: /* basetype ::= LB savelist statement_list RB */
#line 603 "pikchr.y"
{ p->list = yymsp[-2].minor.yy235;
  yymsp[-3].minor.yy162 = pik_elem_new(p,0,0,yymsp[-1].minor.yy235);
  if(yymsp[-3].minor.yy162) yymsp[-3].minor.yy162->errTok = yymsp[0].minor.yy0;
}
#line 2538 "pikchr.c"
        break;
      case 24: /* savelist ::= */
#line 608 "pikchr.y"
{yymsp[1].minor.yy235 = p->list; p->list = 0;}
#line 2543 "pikchr.c"
        break;
      case 25: /* relexpr ::= expr */
#line 615 "pikchr.y"
{yylhsminor.yy72.rAbs = yymsp[0].minor.yy21; yylhsminor.yy72.rRel = 0;}
#line 2548 "pikchr.c"
  yymsp[0].minor.yy72 = yylhsminor.yy72;
        break;
      case 26: /* relexpr ::= expr PERCENT */
#line 616 "pikchr.y"
{yylhsminor.yy72.rAbs = 0; yylhsminor.yy72.rRel = yymsp[-1].minor.yy21/100;}
#line 2554 "pikchr.c"
  yymsp[-1].minor.yy72 = yylhsminor.yy72;
        break;
      case 27: /* optrelexpr ::= */
#line 618 "pikchr.y"
{yymsp[1].minor.yy72.rAbs = 0; yymsp[1].minor.yy72.rRel = 1.0;}
#line 2560 "pikchr.c"
        break;
      case 28: /* attribute_list ::= relexpr alist */
#line 620 "pikchr.y"
{pik_add_direction(p,0,&yymsp[-1].minor.yy72);}
#line 2565 "pikchr.c"
        break;
      case 29: /* attribute ::= numproperty relexpr */
#line 624 "pikchr.y"
{ pik_set_numprop(p,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy72); }
#line 2570 "pikchr.c"
        break;
      case 30: /* attribute ::= dashproperty expr */
#line 625 "pikchr.y"
{ pik_set_dashed(p,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy21); }
#line 2575 "pikchr.c"
        break;
      case 31: /* attribute ::= dashproperty */
#line 626 "pikchr.y"
{ pik_set_dashed(p,&yymsp[0].minor.yy0,0); }
#line 2580 "pikchr.c"
        break;
      case 32: /* attribute ::= colorproperty rvalue */
#line 627 "pikchr.y"
{ pik_set_clrprop(p,&yymsp[-1].minor.yy0,yymsp[0].minor.yy21); }
#line 2585 "pikchr.c"
        break;
      case 33: /* attribute ::= go direction optrelexpr */
#line 628 "pikchr.y"
{ pik_add_direction(p,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy72);}
#line 2590 "pikchr.c"
        break;
      case 34: /* attribute ::= go direction even position */
#line 629 "pikchr.y"
{pik_evenwith(p,&yymsp[-2].minor.yy0,&yymsp[0].minor.yy63);}
#line 2595 "pikchr.c"
        break;
      case 35: /* attribute ::= CLOSE */
#line 630 "pikchr.y"
{ pik_close_path(p,&yymsp[0].minor.yy0); }
#line 2600 "pikchr.c"
        break;
      case 36: /* attribute ::= CHOP */
#line 631 "pikchr.y"
{ p->cur->bChop = 1; }
#line 2605 "pikchr.c"
        break;
      case 37: /* attribute ::= FROM position */
#line 632 "pikchr.y"
{ pik_set_from(p,p->cur,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy63); }
#line 2610 "pikchr.c"
        break;
      case 38: /* attribute ::= TO position */
#line 633 "pikchr.y"
{ pik_add_to(p,p->cur,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy63); }
#line 2615 "pikchr.c"
        break;
      case 39: /* attribute ::= THEN */
#line 634 "pikchr.y"
{ pik_then(p, &yymsp[0].minor.yy0, p->cur); }
#line 2620 "pikchr.c"
        break;
      case 40: /* attribute ::= THEN optrelexpr HEADING expr */
      case 42: /* attribute ::= GO optrelexpr HEADING expr */ yytestcase(yyruleno==42);
#line 636 "pikchr.y"
{pik_move_hdg(p,&yymsp[-2].minor.yy72,&yymsp[-1].minor.yy0,yymsp[0].minor.yy21,0,&yymsp[-3].minor.yy0);}
#line 2626 "pikchr.c"
        break;
      case 41: /* attribute ::= THEN optrelexpr EDGEPT */
      case 43: /* attribute ::= GO optrelexpr EDGEPT */ yytestcase(yyruleno==43);
#line 637 "pikchr.y"
{pik_move_hdg(p,&yymsp[-1].minor.yy72,0,0,&yymsp[0].minor.yy0,&yymsp[-2].minor.yy0);}
#line 2632 "pikchr.c"
        break;
      case 44: /* attribute ::= AT position */
#line 642 "pikchr.y"
{ pik_set_at(p,0,&yymsp[0].minor.yy63,&yymsp[-1].minor.yy0); }
#line 2637 "pikchr.c"
        break;
      case 45: /* attribute ::= SAME */
#line 644 "pikchr.y"
{pik_same(p,0,&yymsp[0].minor.yy0);}
#line 2642 "pikchr.c"
        break;
      case 46: /* attribute ::= SAME AS object */
#line 645 "pikchr.y"
{pik_same(p,yymsp[0].minor.yy162,&yymsp[-2].minor.yy0);}
#line 2647 "pikchr.c"
        break;
      case 47: /* attribute ::= STRING textposition */
#line 646 "pikchr.y"
{pik_add_txt(p,&yymsp[-1].minor.yy0,yymsp[0].minor.yy188);}
#line 2652 "pikchr.c"
        break;
      case 48: /* attribute ::= FIT */
#line 647 "pikchr.y"
{pik_size_to_fit(p,&yymsp[0].minor.yy0,3); }
#line 2657 "pikchr.c"
        break;
      case 49: /* attribute ::= BEHIND object */
#line 648 "pikchr.y"
{pik_behind(p,yymsp[0].minor.yy162);}
#line 2662 "pikchr.c"
        break;
      case 50: /* withclause ::= DOT_E edge AT position */
      case 51: /* withclause ::= edge AT position */ yytestcase(yyruleno==51);
#line 656 "pikchr.y"
{ pik_set_at(p,&yymsp[-2].minor.yy0,&yymsp[0].minor.yy63,&yymsp[-1].minor.yy0); }
#line 2668 "pikchr.c"
        break;
      case 52: /* numproperty ::= HEIGHT|WIDTH|RADIUS|DIAMETER|THICKNESS */
#line 660 "pikchr.y"
{yylhsminor.yy0 = yymsp[0].minor.yy0;}
#line 2673 "pikchr.c"
  yymsp[0].minor.yy0 = yylhsminor.yy0;
        break;
      case 53: /* boolproperty ::= CW */
#line 671 "pikchr.y"
{p->cur->cw = 1;}
#line 2679 "pikchr.c"
        break;
      case 54: /* boolproperty ::= CCW */
#line 672 "pikchr.y"
{p->cur->cw = 0;}
#line 2684 "pikchr.c"
        break;
      case 55: /* boolproperty ::= LARROW */
#line 673 "pikchr.y"
{p->cur->larrow=1; p->cur->rarrow=0; }
#line 2689 "pikchr.c"
        break;
      case 56: /* boolproperty ::= RARROW */
#line 674 "pikchr.y"
{p->cur->larrow=0; p->cur->rarrow=1; }
#line 2694 "pikchr.c"
        break;
      case 57: /* boolproperty ::= LRARROW */
#line 675 "pikchr.y"
{p->cur->larrow=1; p->cur->rarrow=1; }
#line 2699 "pikchr.c"
        break;
      case 58: /* boolproperty ::= INVIS */
#line 676 "pikchr.y"
{p->cur->sw = -0.00001;}
#line 2704 "pikchr.c"
        break;
      case 59: /* boolproperty ::= THICK */
#line 677 "pikchr.y"
{p->cur->sw *= 1.5;}
#line 2709 "pikchr.c"
        break;
      case 60: /* boolproperty ::= THIN */
#line 678 "pikchr.y"
{p->cur->sw *= 0.67;}
#line 2714 "pikchr.c"
        break;
      case 61: /* boolproperty ::= SOLID */
#line 679 "pikchr.y"
{p->cur->sw = pik_value(p,"thickness",9,0);
  p->cur->dotted = p->cur->dashed = 0.0;}
#line 2720 "pikchr.c"
        break;
      case 62: /* textposition ::= */
#line 682 "pikchr.y"
{yymsp[1].minor.yy188 = 0;}
#line 2725 "pikchr.c"
        break;
      case 63: /* textposition ::= textposition CENTER|LJUST|RJUST|ABOVE|BELOW|ITALIC|BOLD|MONO|ALIGNED|BIG|SMALL */
#line 685 "pikchr.y"
{yylhsminor.yy188 = (short int)pik_text_position(yymsp[-1].minor.yy188,&yymsp[0].minor.yy0);}
#line 2730 "pikchr.c"
  yymsp[-1].minor.yy188 = yylhsminor.yy188;
        break;
      case 64: /* position ::= expr COMMA expr */
#line 688 "pikchr.y"
{yylhsminor.yy63.x=yymsp[-2].minor.yy21; yylhsminor.yy63.y=yymsp[0].minor.yy21;}
#line 2736 "pikchr.c"
  yymsp[-2].minor.yy63 = yylhsminor.yy63;
        break;
      case 65: /* position ::= place PLUS expr COMMA expr */
#line 690 "pikchr.y"
{yylhsminor.yy63.x=yymsp[-4].minor.yy63.x+yymsp[-2].minor.yy21; yylhsminor.yy63.y=yymsp[-4].minor.yy63.y+yymsp[0].minor.yy21;}
#line 2742 "pikchr.c"
  yymsp[-4].minor.yy63 = yylhsminor.yy63;
        break;
      case 66: /* position ::= place MINUS expr COMMA expr */
#line 691 "pikchr.y"
{yylhsminor.yy63.x=yymsp[-4].minor.yy63.x-yymsp[-2].minor.yy21; yylhsminor.yy63.y=yymsp[-4].minor.yy63.y-yymsp[0].minor.yy21;}
#line 2748 "pikchr.c"
  yymsp[-4].minor.yy63 = yylhsminor.yy63;
        break;
      case 67: /* position ::= place PLUS LP expr COMMA expr RP */
#line 693 "pikchr.y"
{yylhsminor.yy63.x=yymsp[-6].minor.yy63.x+yymsp[-3].minor.yy21; yylhsminor.yy63.y=yymsp[-6].minor.yy63.y+yymsp[-1].minor.yy21;}
#line 2754 "pikchr.c"
  yymsp[-6].minor.yy63 = yylhsminor.yy63;
        break;
      case 68: /* position ::= place MINUS LP expr COMMA expr RP */
#line 695 "pikchr.y"
{yylhsminor.yy63.x=yymsp[-6].minor.yy63.x-yymsp[-3].minor.yy21; yylhsminor.yy63.y=yymsp[-6].minor.yy63.y-yymsp[-1].minor.yy21;}
#line 2760 "pikchr.c"
  yymsp[-6].minor.yy63 = yylhsminor.yy63;
        break;
      case 69: /* position ::= LP position COMMA position RP */
#line 696 "pikchr.y"
{yymsp[-4].minor.yy63.x=yymsp[-3].minor.yy63.x; yymsp[-4].minor.yy63.y=yymsp[-1].minor.yy63.y;}
#line 2766 "pikchr.c" break; case 70: /* position ::= LP position RP */ #line 697 "pikchr.y" {yymsp[-2].minor.yy63=yymsp[-1].minor.yy63;} #line 2771 "pikchr.c" break; case 71: /* position ::= expr between position AND position */ #line 699 "pikchr.y" {yylhsminor.yy63 = pik_position_between(yymsp[-4].minor.yy21,yymsp[-2].minor.yy63,yymsp[0].minor.yy63);} #line 2776 "pikchr.c" yymsp[-4].minor.yy63 = yylhsminor.yy63; break; case 72: /* position ::= expr LT position COMMA position GT */ #line 701 "pikchr.y" {yylhsminor.yy63 = pik_position_between(yymsp[-5].minor.yy21,yymsp[-3].minor.yy63,yymsp[-1].minor.yy63);} #line 2782 "pikchr.c" yymsp[-5].minor.yy63 = yylhsminor.yy63; break; case 73: /* position ::= expr ABOVE position */ #line 702 "pikchr.y" {yylhsminor.yy63=yymsp[0].minor.yy63; yylhsminor.yy63.y += yymsp[-2].minor.yy21;} #line 2788 "pikchr.c" yymsp[-2].minor.yy63 = yylhsminor.yy63; break; case 74: /* position ::= expr BELOW position */ #line 703 "pikchr.y" {yylhsminor.yy63=yymsp[0].minor.yy63; yylhsminor.yy63.y -= yymsp[-2].minor.yy21;} #line 2794 "pikchr.c" yymsp[-2].minor.yy63 = yylhsminor.yy63; break; case 75: /* position ::= expr LEFT OF position */ #line 704 "pikchr.y" {yylhsminor.yy63=yymsp[0].minor.yy63; yylhsminor.yy63.x -= yymsp[-3].minor.yy21;} #line 2800 "pikchr.c" yymsp[-3].minor.yy63 = yylhsminor.yy63; break; case 76: /* position ::= expr RIGHT OF position */ #line 705 "pikchr.y" {yylhsminor.yy63=yymsp[0].minor.yy63; yylhsminor.yy63.x += yymsp[-3].minor.yy21;} #line 2806 "pikchr.c" yymsp[-3].minor.yy63 = yylhsminor.yy63; break; case 77: /* position ::= expr ON HEADING EDGEPT OF position */ #line 707 "pikchr.y" {yylhsminor.yy63 = pik_position_at_hdg(yymsp[-5].minor.yy21,&yymsp[-2].minor.yy0,yymsp[0].minor.yy63);} #line 2812 "pikchr.c" yymsp[-5].minor.yy63 = yylhsminor.yy63; break; case 78: /* position ::= expr HEADING EDGEPT OF position */ #line 709 "pikchr.y" {yylhsminor.yy63 = 
pik_position_at_hdg(yymsp[-4].minor.yy21,&yymsp[-2].minor.yy0,yymsp[0].minor.yy63);} #line 2818 "pikchr.c" yymsp[-4].minor.yy63 = yylhsminor.yy63; break; case 79: /* position ::= expr EDGEPT OF position */ #line 711 "pikchr.y" {yylhsminor.yy63 = pik_position_at_hdg(yymsp[-3].minor.yy21,&yymsp[-2].minor.yy0,yymsp[0].minor.yy63);} #line 2824 "pikchr.c" yymsp[-3].minor.yy63 = yylhsminor.yy63; break; case 80: /* position ::= expr ON HEADING expr FROM position */ #line 713 "pikchr.y" {yylhsminor.yy63 = pik_position_at_angle(yymsp[-5].minor.yy21,yymsp[-2].minor.yy21,yymsp[0].minor.yy63);} #line 2830 "pikchr.c" yymsp[-5].minor.yy63 = yylhsminor.yy63; break; case 81: /* position ::= expr HEADING expr FROM position */ #line 715 "pikchr.y" {yylhsminor.yy63 = pik_position_at_angle(yymsp[-4].minor.yy21,yymsp[-2].minor.yy21,yymsp[0].minor.yy63);} #line 2836 "pikchr.c" yymsp[-4].minor.yy63 = yylhsminor.yy63; break; case 82: /* place ::= edge OF object */ #line 727 "pikchr.y" {yylhsminor.yy63 = pik_place_of_elem(p,yymsp[0].minor.yy162,&yymsp[-2].minor.yy0);} #line 2842 "pikchr.c" yymsp[-2].minor.yy63 = yylhsminor.yy63; break; case 83: /* place2 ::= object */ #line 728 "pikchr.y" {yylhsminor.yy63 = pik_place_of_elem(p,yymsp[0].minor.yy162,0);} #line 2848 "pikchr.c" yymsp[0].minor.yy63 = yylhsminor.yy63; break; case 84: /* place2 ::= object DOT_E edge */ #line 729 "pikchr.y" {yylhsminor.yy63 = pik_place_of_elem(p,yymsp[-2].minor.yy162,&yymsp[0].minor.yy0);} #line 2854 "pikchr.c" yymsp[-2].minor.yy63 = yylhsminor.yy63; break; case 85: /* place2 ::= NTH VERTEX OF object */ #line 730 "pikchr.y" {yylhsminor.yy63 = pik_nth_vertex(p,&yymsp[-3].minor.yy0,&yymsp[-2].minor.yy0,yymsp[0].minor.yy162);} #line 2860 "pikchr.c" yymsp[-3].minor.yy63 = yylhsminor.yy63; break; case 86: /* object ::= nth */ #line 742 "pikchr.y" {yylhsminor.yy162 = pik_find_nth(p,0,&yymsp[0].minor.yy0);} #line 2866 "pikchr.c" yymsp[0].minor.yy162 = yylhsminor.yy162; break; case 87: /* object ::= nth OF|IN object */ 
#line 743 "pikchr.y" {yylhsminor.yy162 = pik_find_nth(p,yymsp[0].minor.yy162,&yymsp[-2].minor.yy0);} #line 2872 "pikchr.c" yymsp[-2].minor.yy162 = yylhsminor.yy162; break; case 88: /* objectname ::= THIS */ #line 745 "pikchr.y" {yymsp[0].minor.yy162 = p->cur;} #line 2878 "pikchr.c" break; case 89: /* objectname ::= PLACENAME */ #line 746 "pikchr.y" {yylhsminor.yy162 = pik_find_byname(p,0,&yymsp[0].minor.yy0);} #line 2883 "pikchr.c" yymsp[0].minor.yy162 = yylhsminor.yy162; break; case 90: /* objectname ::= objectname DOT_U PLACENAME */ #line 748 "pikchr.y" {yylhsminor.yy162 = pik_find_byname(p,yymsp[-2].minor.yy162,&yymsp[0].minor.yy0);} #line 2889 "pikchr.c" yymsp[-2].minor.yy162 = yylhsminor.yy162; break; case 91: /* nth ::= NTH CLASSNAME */ #line 750 "pikchr.y" {yylhsminor.yy0=yymsp[0].minor.yy0; yylhsminor.yy0.eCode = pik_nth_value(p,&yymsp[-1].minor.yy0); } #line 2895 "pikchr.c" yymsp[-1].minor.yy0 = yylhsminor.yy0; break; case 92: /* nth ::= NTH LAST CLASSNAME */ #line 751 "pikchr.y" {yylhsminor.yy0=yymsp[0].minor.yy0; yylhsminor.yy0.eCode = -pik_nth_value(p,&yymsp[-2].minor.yy0); } #line 2901 "pikchr.c" yymsp[-2].minor.yy0 = yylhsminor.yy0; break; case 93: /* nth ::= LAST CLASSNAME */ #line 752 "pikchr.y" {yymsp[-1].minor.yy0=yymsp[0].minor.yy0; yymsp[-1].minor.yy0.eCode = -1;} #line 2907 "pikchr.c" break; case 94: /* nth ::= LAST */ #line 753 "pikchr.y" {yylhsminor.yy0=yymsp[0].minor.yy0; yylhsminor.yy0.eCode = -1;} #line 2912 "pikchr.c" yymsp[0].minor.yy0 = yylhsminor.yy0; break; case 95: /* nth ::= NTH LB RB */ #line 754 "pikchr.y" {yylhsminor.yy0=yymsp[-1].minor.yy0; yylhsminor.yy0.eCode = pik_nth_value(p,&yymsp[-2].minor.yy0);} #line 2918 "pikchr.c" yymsp[-2].minor.yy0 = yylhsminor.yy0; break; case 96: /* nth ::= NTH LAST LB RB */ #line 755 "pikchr.y" {yylhsminor.yy0=yymsp[-1].minor.yy0; yylhsminor.yy0.eCode = -pik_nth_value(p,&yymsp[-3].minor.yy0);} #line 2924 "pikchr.c" yymsp[-3].minor.yy0 = yylhsminor.yy0; break; case 97: /* nth ::= LAST LB RB */ 
#line 756 "pikchr.y" {yymsp[-2].minor.yy0=yymsp[-1].minor.yy0; yymsp[-2].minor.yy0.eCode = -1; } #line 2930 "pikchr.c" break; case 98: /* expr ::= expr PLUS expr */ #line 758 "pikchr.y" {yylhsminor.yy21=yymsp[-2].minor.yy21+yymsp[0].minor.yy21;} #line 2935 "pikchr.c" yymsp[-2].minor.yy21 = yylhsminor.yy21; break; case 99: /* expr ::= expr MINUS expr */ #line 759 "pikchr.y" {yylhsminor.yy21=yymsp[-2].minor.yy21-yymsp[0].minor.yy21;} #line 2941 "pikchr.c" yymsp[-2].minor.yy21 = yylhsminor.yy21; break; case 100: /* expr ::= expr STAR expr */ #line 760 "pikchr.y" {yylhsminor.yy21=yymsp[-2].minor.yy21*yymsp[0].minor.yy21;} #line 2947 "pikchr.c" yymsp[-2].minor.yy21 = yylhsminor.yy21; break; case 101: /* expr ::= expr SLASH expr */ #line 761 "pikchr.y" { if( yymsp[0].minor.yy21==0.0 ){ pik_error(p, &yymsp[-1].minor.yy0, "division by zero"); yylhsminor.yy21 = 0.0; } else{ yylhsminor.yy21 = yymsp[-2].minor.yy21/yymsp[0].minor.yy21; } } #line 2956 "pikchr.c" yymsp[-2].minor.yy21 = yylhsminor.yy21; break; case 102: /* expr ::= MINUS expr */ #line 765 "pikchr.y" {yymsp[-1].minor.yy21=-yymsp[0].minor.yy21;} #line 2962 "pikchr.c" break; case 103: /* expr ::= PLUS expr */ #line 766 "pikchr.y" {yymsp[-1].minor.yy21=yymsp[0].minor.yy21;} #line 2967 "pikchr.c" break; case 104: /* expr ::= LP expr RP */ #line 767 "pikchr.y" {yymsp[-2].minor.yy21=yymsp[-1].minor.yy21;} #line 2972 "pikchr.c" break; case 105: /* expr ::= LP FILL|COLOR|THICKNESS RP */ #line 768 "pikchr.y" {yymsp[-2].minor.yy21=pik_get_var(p,&yymsp[-1].minor.yy0);} #line 2977 "pikchr.c" break; case 106: /* expr ::= NUMBER */ #line 769 "pikchr.y" {yylhsminor.yy21=pik_atof(&yymsp[0].minor.yy0);} #line 2982 "pikchr.c" yymsp[0].minor.yy21 = yylhsminor.yy21; break; case 107: /* expr ::= ID */ #line 770 "pikchr.y" {yylhsminor.yy21=pik_get_var(p,&yymsp[0].minor.yy0);} #line 2988 "pikchr.c" yymsp[0].minor.yy21 = yylhsminor.yy21; break; case 108: /* expr ::= FUNC1 LP expr RP */ #line 771 "pikchr.y" {yylhsminor.yy21 = 
pik_func(p,&yymsp[-3].minor.yy0,yymsp[-1].minor.yy21,0.0);} #line 2994 "pikchr.c" yymsp[-3].minor.yy21 = yylhsminor.yy21; break; case 109: /* expr ::= FUNC2 LP expr COMMA expr RP */ #line 772 "pikchr.y" {yylhsminor.yy21 = pik_func(p,&yymsp[-5].minor.yy0,yymsp[-3].minor.yy21,yymsp[-1].minor.yy21);} #line 3000 "pikchr.c" yymsp[-5].minor.yy21 = yylhsminor.yy21; break; case 110: /* expr ::= DIST LP position COMMA position RP */ #line 773 "pikchr.y" {yymsp[-5].minor.yy21 = pik_dist(&yymsp[-3].minor.yy63,&yymsp[-1].minor.yy63);} #line 3006 "pikchr.c" break; case 111: /* expr ::= place2 DOT_XY X */ #line 774 "pikchr.y" {yylhsminor.yy21 = yymsp[-2].minor.yy63.x;} #line 3011 "pikchr.c" yymsp[-2].minor.yy21 = yylhsminor.yy21; break; case 112: /* expr ::= place2 DOT_XY Y */ #line 775 "pikchr.y" {yylhsminor.yy21 = yymsp[-2].minor.yy63.y;} #line 3017 "pikchr.c" yymsp[-2].minor.yy21 = yylhsminor.yy21; break; case 113: /* expr ::= object DOT_L numproperty */ case 114: /* expr ::= object DOT_L dashproperty */ yytestcase(yyruleno==114); case 115: /* expr ::= object DOT_L colorproperty */ yytestcase(yyruleno==115); #line 776 "pikchr.y" {yylhsminor.yy21=pik_property_of(yymsp[-2].minor.yy162,&yymsp[0].minor.yy0);} #line 3025 "pikchr.c" yymsp[-2].minor.yy21 = yylhsminor.yy21; break; default: /* (116) lvalue ::= ID */ yytestcase(yyruleno==116); /* (117) lvalue ::= FILL */ yytestcase(yyruleno==117); /* (118) lvalue ::= COLOR */ yytestcase(yyruleno==118); /* (119) lvalue ::= THICKNESS */ yytestcase(yyruleno==119); |
︙ | ︙
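The reduce actions in the pikchr.c listing above map grammar rules onto small geometry helpers; for example `position ::= expr between position AND position` reduces through `pik_position_between(f, p1, p2)`. As a rough illustration of what such a between-interpolation computes — a JavaScript sketch with hypothetical names, not pikchr's C implementation:

```javascript
// Hypothetical stand-in for pikchr's pik_position_between():
// returns the point a fraction f of the way from p1 toward p2.
function positionBetween(f, p1, p2) {
  return {
    x: p1.x + f * (p2.x - p1.x),
    y: p1.y + f * (p2.y - p1.y)
  };
}

// "0.5 of the way between (0,0) and (2,4)" is the midpoint (1,2).
const mid = positionBetween(0.5, { x: 0, y: 0 }, { x: 2, y: 4 });
console.log(mid.x, mid.y);  // → 1 2
```

The same helper serves both the `between ... AND ...` syntax and the `expr LT position COMMA position GT` form shown in the listing, which is why both reduce actions call it.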
  int yymajor,                   /* The major type of the error token */
  pik_parserTOKENTYPE yyminor    /* The minor type of the error token */
){
  pik_parserARG_FETCH
  pik_parserCTX_FETCH
#define TOKEN yyminor
/************ Begin %syntax_error code ****************************************/
#line 536 "pikchr.y"

  if( TOKEN.z && TOKEN.z[0] ){
    pik_error(p, &TOKEN, "syntax error");
  }else{
    pik_error(p, 0, "syntax error");
  }
  UNUSED_PARAMETER(yymajor);
#line 3136 "pikchr.c"
/************ End %syntax_error code ******************************************/
  pik_parserARG_STORE /* Suppress warning about unused %extra_argument variable */
  pik_parserCTX_STORE
}

/*
** The following is executed when the parser accepts
︙ | ︙
#ifdef YYTRACKMAXSTACKDEPTH
      if( (int)(yypParser->yytos - yypParser->yystack)>yypParser->yyhwm ){
        yypParser->yyhwm++;
        assert( yypParser->yyhwm ==
                (int)(yypParser->yytos - yypParser->yystack));
      }
#endif
      if( yypParser->yytos>=yypParser->yystackEnd ){
        if( yyGrowStack(yypParser) ){
          yyStackOverflow(yypParser);
          break;
        }
      }
    }
    yyact = yy_reduce(yypParser,yyruleno,yymajor,yyminor pik_parserCTX_PARAM);
  }else if( yyact <= YY_MAX_SHIFTREDUCE ){
    yy_shift(yypParser,yyact,(YYCODETYPE)yymajor,yyminor);
#ifndef YYNOERRORRECOVERY
    yypParser->yyerrcnt--;
#endif

| > > > > > > > |

#ifdef YYTRACKMAXSTACKDEPTH
      if( (int)(yypParser->yytos - yypParser->yystack)>yypParser->yyhwm ){
        yypParser->yyhwm++;
        assert( yypParser->yyhwm ==
                (int)(yypParser->yytos - yypParser->yystack));
      }
#endif
#if YYSTACKDEPTH>0
      if( yypParser->yytos>=yypParser->yystackEnd ){
        yyStackOverflow(yypParser);
        break;
      }
#else
      if( yypParser->yytos>=&yypParser->yystack[yypParser->yystksz-1] ){
        if( yyGrowStack(yypParser) ){
          yyStackOverflow(yypParser);
          break;
        }
      }
#endif
    }
    yyact = yy_reduce(yypParser,yyruleno,yymajor,yyminor pik_parserCTX_PARAM);
  }else if( yyact <= YY_MAX_SHIFTREDUCE ){
    yy_shift(yypParser,yyact,(YYCODETYPE)yymajor,yyminor);
#ifndef YYNOERRORRECOVERY
    yypParser->yyerrcnt--;
#endif
︙ | ︙
  assert( iToken<(int)(sizeof(yyFallback)/sizeof(yyFallback[0])) );
  return yyFallback[iToken];
#else
  (void)iToken;
  return 0;
#endif
}
#line 781 "pikchr.y"

/* Chart of the 148 official CSS color names with their
** corresponding RGB values thru Color Module Level 4:
** https://developer.mozilla.org/en-US/docs/Web/CSS/color_value
**
︙ | ︙
3600 3601 3602 3603 3604 3605 3606 | { "charwid", 0.08 }, { "circlerad", 0.25 }, { "color", 0.0 }, { "cylht", 0.5 }, { "cylrad", 0.075 }, { "cylwid", 0.75 }, { "dashwid", 0.05 }, | < < | 3575 3576 3577 3578 3579 3580 3581 3582 3583 3584 3585 3586 3587 3588 | { "charwid", 0.08 }, { "circlerad", 0.25 }, { "color", 0.0 }, { "cylht", 0.5 }, { "cylrad", 0.075 }, { "cylwid", 0.75 }, { "dashwid", 0.05 }, { "dotrad", 0.015 }, { "ellipseht", 0.5 }, { "ellipsewid", 0.75 }, { "fileht", 0.75 }, { "filerad", 0.15 }, { "filewid", 0.5 }, { "fill", -1.0 }, |
︙ | ︙
3982 3983 3984 3985 3986 3987 3988 | pik_append_dis(p," r=\"", r, "\""); pik_append_style(p,pObj,2); pik_append(p,"\" />\n", -1); } pik_append_txt(p, pObj, 0); } | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | 3955 3956 3957 3958 3959 3960 3961 3962 3963 3964 3965 3966 3967 3968 | pik_append_dis(p," r=\"", r, "\""); pik_append_style(p,pObj,2); pik_append(p,"\" />\n", -1); } pik_append_txt(p, pObj, 0); } /* Methods for the "ellipse" class */ static void ellipseInit(Pik *p, PObj *pObj){ pObj->w = pik_value(p, "ellipsewid",10,0); pObj->h = pik_value(p, "ellipseht",9,0); } |
︙ | ︙
4421 4422 4423 4424 4425 4426 4427 | /* xNumProp */ 0, /* xCheck */ 0, /* xChop */ boxChop, /* xOffset */ cylinderOffset, /* xFit */ cylinderFit, /* xRender */ cylinderRender }, | < < < < < < < < < < < | 4339 4340 4341 4342 4343 4344 4345 4346 4347 4348 4349 4350 4351 4352 | /* xNumProp */ 0, /* xCheck */ 0, /* xChop */ boxChop, /* xOffset */ cylinderOffset, /* xFit */ cylinderFit, /* xRender */ cylinderRender }, { /* name */ "dot", /* isline */ 0, /* eJust */ 0, /* xInit */ dotInit, /* xNumProp */ dotNumProp, /* xCheck */ dotCheck, /* xChop */ circleChop, |
︙ | ︙
  int iErrCol;                 /* Column of the error token on its line */
  int iStart;                  /* Start position of the error context */
  int iEnd;                    /* End position of the error context */
  int iLineno;                 /* Line number of the error */
  int iFirstLineno;            /* Line number of start of error context */
  int i;                       /* Loop counter */
  int iBump = 0;               /* Bump the location of the error cursor */
  char zLineno[20];            /* Buffer in which to generate line numbers */

  iErrPt = (int)(pErr->z - p->sIn.z);
  if( iErrPt>=(int)p->sIn.n ){
    iErrPt = p->sIn.n-1;
    iBump = 1;
  }else{
    while( iErrPt>0 && (p->sIn.z[iErrPt]=='\n' || p->sIn.z[iErrPt]=='\r') ){
︙ | ︙
6360 6361 6362 6363 6364 6365 6366 | pik_error(0, pFit, "no text to fit to"); return; } if( pObj->type->xFit==0 ) return; pik_bbox_init(&bbox); pik_compute_layout_settings(p); pik_append_txt(p, pObj, &bbox); | < | < < < | | 6267 6268 6269 6270 6271 6272 6273 6274 6275 6276 6277 6278 6279 6280 6281 6282 | pik_error(0, pFit, "no text to fit to"); return; } if( pObj->type->xFit==0 ) return; pik_bbox_init(&bbox); pik_compute_layout_settings(p); pik_append_txt(p, pObj, &bbox); w = (eWhich & 1)!=0 ? (bbox.ne.x - bbox.sw.x) + p->charWidth : 0; if( eWhich & 2 ){ PNum h1, h2; h1 = (bbox.ne.y - pObj->ptAt.y); h2 = (pObj->ptAt.y - bbox.sw.y); h = 2.0*( h1<h2 ? h2 : h1 ) + 0.5*p->charHeight; }else{ h = 0; } |
︙ | ︙
  return TCL_OK;
}
#endif /* PIKCHR_TCL */
#line 8173 "pikchr.c"
Changes to extsrc/pikchr.js.
var initPikchrModule = (() => {
  var _scriptDir = typeof document !== 'undefined' && document.currentScript ? document.currentScript.src : undefined;
  return (
function(config) {
  var initPikchrModule = config || {};
  var Module = typeof initPikchrModule != "undefined" ? initPikchrModule : {};
  var readyPromiseResolve, readyPromiseReject;
  Module["ready"] = new Promise(function(resolve, reject) {
    readyPromiseResolve = resolve;
    readyPromiseReject = reject;
  });
  var moduleOverrides = Object.assign({}, Module);
  var arguments_ = [];
︙ | ︙
33 34 35 36 37 38 39 | function locateFile(path) { if (Module["locateFile"]) { return Module["locateFile"](path, scriptDirectory); } return scriptDirectory + path; } | | | | | | | | | | > | > > > | > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > < > > > > > > > > | 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 | function locateFile(path) { if (Module["locateFile"]) { return Module["locateFile"](path, scriptDirectory); } return scriptDirectory + path; } var read_, readAsync, readBinary, setWindowTitle; if (ENVIRONMENT_IS_WEB || ENVIRONMENT_IS_WORKER) { if (ENVIRONMENT_IS_WORKER) { scriptDirectory = self.location.href; } else if (typeof document != "undefined" && document.currentScript) { scriptDirectory = document.currentScript.src; } if (_scriptDir) { scriptDirectory = _scriptDir; } if (scriptDirectory.indexOf("blob:") !== 0) { scriptDirectory = scriptDirectory.substr(0, scriptDirectory.replace(/[?#].*/, "").lastIndexOf("/") + 1); } else { scriptDirectory = ""; } { read_ = url => { var xhr = new XMLHttpRequest(); xhr.open("GET", url, false); xhr.send(null); return xhr.responseText; }; if (ENVIRONMENT_IS_WORKER) { readBinary = url => { var xhr = new XMLHttpRequest(); 
xhr.open("GET", url, false); xhr.responseType = "arraybuffer"; xhr.send(null); return new Uint8Array(xhr.response); }; } readAsync = (url, onload, onerror) => { var xhr = new XMLHttpRequest(); xhr.open("GET", url, true); xhr.responseType = "arraybuffer"; xhr.onload = () => { if (xhr.status == 200 || xhr.status == 0 && xhr.response) { onload(xhr.response); return; } onerror(); }; xhr.onerror = onerror; xhr.send(null); }; } setWindowTitle = title => document.title = title; } else {} var out = Module["print"] || console.log.bind(console); var err = Module["printErr"] || console.warn.bind(console); Object.assign(Module, moduleOverrides); moduleOverrides = null; if (Module["arguments"]) arguments_ = Module["arguments"]; if (Module["thisProgram"]) thisProgram = Module["thisProgram"]; if (Module["quit"]) quit_ = Module["quit"]; var wasmBinary; if (Module["wasmBinary"]) wasmBinary = Module["wasmBinary"]; var noExitRuntime = Module["noExitRuntime"] || true; if (typeof WebAssembly != "object") { abort("no native wasm support detected"); } var wasmMemory; var ABORT = false; var EXITSTATUS; var UTF8Decoder = typeof TextDecoder != "undefined" ? 
new TextDecoder("utf8") : undefined; function UTF8ArrayToString(heapOrArray, idx, maxBytesToRead) { var endIdx = idx + maxBytesToRead; var endPtr = idx; while (heapOrArray[endPtr] && !(endPtr >= endIdx)) ++endPtr; if (endPtr - idx > 16 && heapOrArray.buffer && UTF8Decoder) { return UTF8Decoder.decode(heapOrArray.subarray(idx, endPtr)); } var str = ""; while (idx < endPtr) { var u0 = heapOrArray[idx++]; if (!(u0 & 128)) { str += String.fromCharCode(u0); continue; } var u1 = heapOrArray[idx++] & 63; if ((u0 & 224) == 192) { str += String.fromCharCode((u0 & 31) << 6 | u1); continue; } var u2 = heapOrArray[idx++] & 63; if ((u0 & 240) == 224) { u0 = (u0 & 15) << 12 | u1 << 6 | u2; } else { u0 = (u0 & 7) << 18 | u1 << 12 | u2 << 6 | heapOrArray[idx++] & 63; } if (u0 < 65536) { str += String.fromCharCode(u0); } else { var ch = u0 - 65536; str += String.fromCharCode(55296 | ch >> 10, 56320 | ch & 1023); } } return str; } function UTF8ToString(ptr, maxBytesToRead) { return ptr ? UTF8ArrayToString(HEAPU8, ptr, maxBytesToRead) : ""; } function stringToUTF8Array(str, heap, outIdx, maxBytesToWrite) { if (!(maxBytesToWrite > 0)) return 0; var startIdx = outIdx; var endIdx = outIdx + maxBytesToWrite - 1; for (var i = 0; i < str.length; ++i) { var u = str.charCodeAt(i); if (u >= 55296 && u <= 57343) { var u1 = str.charCodeAt(++i); u = 65536 + ((u & 1023) << 10) | u1 & 1023; } if (u <= 127) { if (outIdx >= endIdx) break; heap[outIdx++] = u; } else if (u <= 2047) { if (outIdx + 1 >= endIdx) break; heap[outIdx++] = 192 | u >> 6; heap[outIdx++] = 128 | u & 63; } else if (u <= 65535) { if (outIdx + 2 >= endIdx) break; heap[outIdx++] = 224 | u >> 12; heap[outIdx++] = 128 | u >> 6 & 63; heap[outIdx++] = 128 | u & 63; } else { if (outIdx + 3 >= endIdx) break; heap[outIdx++] = 240 | u >> 18; heap[outIdx++] = 128 | u >> 12 & 63; heap[outIdx++] = 128 | u >> 6 & 63; heap[outIdx++] = 128 | u & 63; } } heap[outIdx] = 0; return outIdx - startIdx; } function stringToUTF8(str, outPtr, 
maxBytesToWrite) { return stringToUTF8Array(str, HEAPU8, outPtr, maxBytesToWrite); } var HEAP8, HEAPU8, HEAP16, HEAPU16, HEAP32, HEAPU32, HEAPF32, HEAPF64; function updateMemoryViews() { var b = wasmMemory.buffer; Module["HEAP8"] = HEAP8 = new Int8Array(b); Module["HEAP16"] = HEAP16 = new Int16Array(b); Module["HEAP32"] = HEAP32 = new Int32Array(b); Module["HEAPU8"] = HEAPU8 = new Uint8Array(b); Module["HEAPU16"] = HEAPU16 = new Uint16Array(b); Module["HEAPU32"] = HEAPU32 = new Uint32Array(b); Module["HEAPF32"] = HEAPF32 = new Float32Array(b); Module["HEAPF64"] = HEAPF64 = new Float64Array(b); } var INITIAL_MEMORY = Module["INITIAL_MEMORY"] || 16777216; var wasmTable; var __ATPRERUN__ = []; var __ATINIT__ = []; var __ATPOSTRUN__ = []; var runtimeInitialized = false; function keepRuntimeAlive() { return noExitRuntime; } function preRun() { if (Module["preRun"]) { if (typeof Module["preRun"] == "function") Module["preRun"] = [ Module["preRun"] ]; while (Module["preRun"].length) { addOnPreRun(Module["preRun"].shift()); } |
︙ | ︙
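The Emscripten glue above carries hand-rolled UTF-8 conversion between JavaScript strings and the WASM heap (`UTF8ArrayToString`, `stringToUTF8Array`). Below is a self-contained sketch of the same round-trip against a plain `Uint8Array`, using the standard `TextEncoder`/`TextDecoder` instead of the manual byte-twiddling. It is a behavioral illustration only; unlike Emscripten's version, this naive truncation can split a multi-byte sequence at the limit.

```javascript
// Write str into heap at ptr as NUL-terminated UTF-8, truncating to
// maxBytes (terminator included), in the spirit of stringToUTF8 above.
function writeUtf8(heap, ptr, str, maxBytes) {
  const bytes = new TextEncoder().encode(str);
  const n = Math.min(bytes.length, maxBytes - 1);
  heap.set(bytes.subarray(0, n), ptr);
  heap[ptr + n] = 0;            // NUL terminator
  return n;                     // bytes written, excluding the NUL
}

// Read a NUL-terminated UTF-8 string back out, like UTF8ToString above.
function readUtf8(heap, ptr) {
  let end = ptr;
  while (heap[end] !== 0) end++;
  return new TextDecoder("utf-8").decode(heap.subarray(ptr, end));
}

const heap = new Uint8Array(64);  // stand-in for the WASM linear memory
writeUtf8(heap, 8, "pikchr ✓", 32);
console.log(readUtf8(heap, 8));   // → pikchr ✓
```

The generated loader does this conversion byte-by-byte so it works without `TextEncoder` and can fast-path short ASCII runs; the observable result for well-formed input is the same.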
177 178 179 180 181 182 183 | var runDependencyWatcher = null; var dependenciesFulfilled = null; function addRunDependency(id) { runDependencies++; | > | > > | > | > | > | < | < | > | > | | | | | | | > > | | > | | | | | < < < < | < < < < | | | < < < < | < | < < < | < < < | | | > | > | < | | | | | > > | | | > | > > > > > > > | | < | < < > | | > > > > | < | > | < | < < < < < < < < | < | | < < < < > | < | < | < < | < < > | < < | > > > | | < < < < < | | < | < | < | < < | < < < < < < < | < < | < < < | | | | < < < < < < | < < < < < < | < < < < < | > | < < < | | | > | < > | > > < | < < | < < > > | > > > | > > | < < | | > > | > | < < < < < < < < | < < < < > > > < > > | < < < < > | < | < | | > > | | < < < < > > | < < < > | < < > > | < | > | > > | | > > > | < < | | < < | < | | < < < < < | > | < < < < | < < | < < < > | < < < < < < < < < < < < < < < < < < | > | > | | < | < < < < < > | < | | < < < < | > > | | 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 
540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 | var runDependencyWatcher = null; var dependenciesFulfilled = null; function addRunDependency(id) { runDependencies++; if (Module["monitorRunDependencies"]) { Module["monitorRunDependencies"](runDependencies); } } function removeRunDependency(id) { runDependencies--; if (Module["monitorRunDependencies"]) { Module["monitorRunDependencies"](runDependencies); } if (runDependencies == 0) { if (runDependencyWatcher !== null) { clearInterval(runDependencyWatcher); runDependencyWatcher = null; } if (dependenciesFulfilled) { var callback = dependenciesFulfilled; dependenciesFulfilled = null; callback(); } } } function abort(what) { if (Module["onAbort"]) { Module["onAbort"](what); } what = "Aborted(" + what + ")"; err(what); ABORT = true; EXITSTATUS = 1; what += ". Build with -sASSERTIONS for more info."; var e = new WebAssembly.RuntimeError(what); readyPromiseReject(e); throw e; } var dataURIPrefix = "data:application/octet-stream;base64,"; function isDataURI(filename) { return filename.startsWith(dataURIPrefix); } var wasmBinaryFile; wasmBinaryFile = "pikchr.wasm"; if (!isDataURI(wasmBinaryFile)) { wasmBinaryFile = locateFile(wasmBinaryFile); } function getBinary(file) { try { if (file == wasmBinaryFile && wasmBinary) { return new Uint8Array(wasmBinary); } if (readBinary) { return readBinary(file); } throw "both async and sync fetching of the wasm failed"; } catch (err) { abort(err); } } function getBinaryPromise() { if (!wasmBinary && (ENVIRONMENT_IS_WEB || ENVIRONMENT_IS_WORKER)) { if (typeof fetch == "function") { return fetch(wasmBinaryFile, { credentials: "same-origin" }).then(function(response) { if (!response["ok"]) { throw "failed to load wasm binary file at '" + wasmBinaryFile + "'"; } return response["arrayBuffer"](); }).catch(function() { return getBinary(wasmBinaryFile); }); } } return Promise.resolve().then(function() { return 
getBinary(wasmBinaryFile); }); } function createWasm() { var info = { "a": asmLibraryArg }; function receiveInstance(instance, module) { var exports = instance.exports; Module["asm"] = exports; wasmMemory = Module["asm"]["d"]; updateMemoryViews(); wasmTable = Module["asm"]["g"]; addOnInit(Module["asm"]["e"]); removeRunDependency("wasm-instantiate"); } addRunDependency("wasm-instantiate"); function receiveInstantiationResult(result) { receiveInstance(result["instance"]); } function instantiateArrayBuffer(receiver) { return getBinaryPromise().then(function(binary) { return WebAssembly.instantiate(binary, info); }).then(function(instance) { return instance; }).then(receiver, function(reason) { err("failed to asynchronously prepare wasm: " + reason); abort(reason); }); } function instantiateAsync() { if (!wasmBinary && typeof WebAssembly.instantiateStreaming == "function" && !isDataURI(wasmBinaryFile) && typeof fetch == "function") { return fetch(wasmBinaryFile, { credentials: "same-origin" }).then(function(response) { var result = WebAssembly.instantiateStreaming(response, info); return result.then(receiveInstantiationResult, function(reason) { err("wasm streaming compile failed: " + reason); err("falling back to ArrayBuffer instantiation"); return instantiateArrayBuffer(receiveInstantiationResult); }); }); } else { return instantiateArrayBuffer(receiveInstantiationResult); } } if (Module["instantiateWasm"]) { try { var exports = Module["instantiateWasm"](info, receiveInstance); return exports; } catch (e) { err("Module.instantiateWasm callback failed with error: " + e); readyPromiseReject(e); } } instantiateAsync().catch(readyPromiseReject); return {}; } var tempDouble; var tempI64; function ExitStatus(status) { this.name = "ExitStatus"; this.message = "Program terminated with exit(" + status + ")"; this.status = status; } function callRuntimeCallbacks(callbacks) { while (callbacks.length > 0) { callbacks.shift()(Module); } } function getValue(ptr, type = "i8") { if 
(type.endsWith("*")) type = "*"; switch (type) { case "i1": return HEAP8[ptr >> 0]; case "i8": return HEAP8[ptr >> 0]; case "i16": return HEAP16[ptr >> 1]; case "i32": return HEAP32[ptr >> 2]; case "i64": return HEAP32[ptr >> 2]; case "float": return HEAPF32[ptr >> 2]; case "double": return HEAPF64[ptr >> 3]; case "*": return HEAPU32[ptr >> 2]; default: abort("invalid type for getValue: " + type); } return null; } function setValue(ptr, value, type = "i8") { if (type.endsWith("*")) type = "*"; switch (type) { case "i1": HEAP8[ptr >> 0] = value; break; case "i8": HEAP8[ptr >> 0] = value; break; case "i16": HEAP16[ptr >> 1] = value; break; case "i32": HEAP32[ptr >> 2] = value; break; case "i64": tempI64 = [ value >>> 0, (tempDouble = value, +Math.abs(tempDouble) >= 1 ? tempDouble > 0 ? (Math.min(+Math.floor(tempDouble / 4294967296), 4294967295) | 0) >>> 0 : ~~+Math.ceil((tempDouble - +(~~tempDouble >>> 0)) / 4294967296) >>> 0 : 0) ], HEAP32[ptr >> 2] = tempI64[0], HEAP32[ptr + 4 >> 2] = tempI64[1]; break; case "float": HEAPF32[ptr >> 2] = value; break; case "double": HEAPF64[ptr >> 3] = value; break; case "*": HEAPU32[ptr >> 2] = value; break; default: abort("invalid type for setValue: " + type); } } function ___assert_fail(condition, filename, line, func) { abort("Assertion failed: " + UTF8ToString(condition) + ", at: " + [ filename ? UTF8ToString(filename) : "unknown filename", line, func ? 
UTF8ToString(func) : "unknown function" ]); } function abortOnCannotGrowMemory(requestedSize) { abort("OOM"); } function _emscripten_resize_heap(requestedSize) { var oldSize = HEAPU8.length; requestedSize = requestedSize >>> 0; abortOnCannotGrowMemory(requestedSize); } var SYSCALLS = { varargs: undefined, get: function() { SYSCALLS.varargs += 4; var ret = HEAP32[SYSCALLS.varargs - 4 >> 2]; return ret; }, getStr: function(ptr) { var ret = UTF8ToString(ptr); return ret; } }; function _proc_exit(code) { EXITSTATUS = code; if (!keepRuntimeAlive()) { if (Module["onExit"]) Module["onExit"](code); ABORT = true; } quit_(code, new ExitStatus(code)); } function exitJS(status, implicit) { EXITSTATUS = status; _proc_exit(status); } var _exit = exitJS; function getCFunc(ident) { var func = Module["_" + ident]; return func; } function writeArrayToMemory(array, buffer) { HEAP8.set(array, buffer); } function ccall(ident, returnType, argTypes, args, opts) { var toC = { "string": str => { var ret = 0; if (str !== null && str !== undefined && str !== 0) { var len = (str.length << 2) + 1; ret = stackAlloc(len); stringToUTF8(str, ret, len); } return ret; }, "array": arr => { var ret = stackAlloc(arr.length); writeArrayToMemory(arr, ret); return ret; |
︙ | ︙ | |||
605 606 607 608 609 610 611 | var ret = func.apply(null, cArgs); function onDone(ret) { if (stack !== 0) stackRestore(stack); return convertReturnValue(ret); } ret = onDone(ret); return ret; | < | | < < < | > | < | | > | | | | | > > | > > | > > | | < > | > > | > | 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 | var ret = func.apply(null, cArgs); function onDone(ret) { if (stack !== 0) stackRestore(stack); return convertReturnValue(ret); } ret = onDone(ret); return ret; } function cwrap(ident, returnType, argTypes, opts) { argTypes = argTypes || []; var numericArgs = argTypes.every(type => type === "number" || type === "boolean"); var numericRet = returnType !== "string"; if (numericRet && numericArgs && !opts) { return getCFunc(ident); } return function() { return ccall(ident, returnType, argTypes, arguments, opts); }; } var asmLibraryArg = { "a": ___assert_fail, "b": _emscripten_resize_heap, "c": _exit }; var asm = createWasm(); var ___wasm_call_ctors = Module["___wasm_call_ctors"] = function() { return (___wasm_call_ctors = Module["___wasm_call_ctors"] = Module["asm"]["e"]).apply(null, arguments); }; var _pikchr = Module["_pikchr"] = function() { return (_pikchr = Module["_pikchr"] = Module["asm"]["f"]).apply(null, arguments); }; var stackSave = Module["stackSave"] = function() { return (stackSave = Module["stackSave"] = Module["asm"]["h"]).apply(null, arguments); }; var stackRestore = Module["stackRestore"] = function() { return (stackRestore = Module["stackRestore"] = Module["asm"]["i"]).apply(null, arguments); }; var stackAlloc = Module["stackAlloc"] = function() { return (stackAlloc = Module["stackAlloc"] = Module["asm"]["j"]).apply(null, arguments); }; Module["stackSave"] = stackSave; Module["stackRestore"] = 
stackRestore; Module["cwrap"] = cwrap; Module["setValue"] = setValue; Module["getValue"] = getValue; var calledRun; dependenciesFulfilled = function runCaller() { if (!calledRun) run(); if (!calledRun) dependenciesFulfilled = runCaller; }; function run(args) { args = args || arguments_; if (runDependencies > 0) { return; } preRun(); if (runDependencies > 0) { return; } |
︙ | ︙ | |||
700 701 702 703 704 705 706 | Module["preInit"].pop()(); } } run(); | | > > | | 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 | Module["preInit"].pop()(); } } run(); return initPikchrModule.ready } ); })(); if (typeof exports === 'object' && typeof module === 'object') module.exports = initPikchrModule; else if (typeof define === 'function' && define['amd']) define([], function() { return initPikchrModule; }); else if (typeof exports === 'object') exports["initPikchrModule"] = initPikchrModule; |
Changes to extsrc/pikchr.wasm.
cannot compute difference between binary files
Changes to extsrc/shell.c.
︙ | ︙ | |||
248 249 250 251 252 253 254 255 256 257 | #endif #undef WIN32_LEAN_AND_MEAN #define WIN32_LEAN_AND_MEAN #include <windows.h> /* string conversion routines only needed on Win32 */ extern char *sqlite3_win32_unicode_to_utf8(LPCWSTR); extern LPWSTR sqlite3_win32_utf8_to_unicode(const char *zText); #endif | > > < < | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | < < < | < | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | < < < < < < | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | < < < < < < < < < < < < < < < < < | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | < < < < < < < < < < < < < < < < < < < < < < < < | < < < < < < < < < < < < < < < < < < < < < < < < | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < 
< < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | < | < | 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 | #endif #undef WIN32_LEAN_AND_MEAN #define WIN32_LEAN_AND_MEAN #include <windows.h> /* string conversion routines only needed on Win32 */ extern char *sqlite3_win32_unicode_to_utf8(LPCWSTR); extern char *sqlite3_win32_mbcs_to_utf8_v2(const char *, int); extern char *sqlite3_win32_utf8_to_mbcs_v2(const char *, int); extern LPWSTR sqlite3_win32_utf8_to_unicode(const char *zText); #endif /* On Windows, we normally run with output mode of TEXT so that \n characters ** are automatically translated into \r\n. However, this behavior needs ** to be disabled in some cases (ex: when generating CSV output and when ** rendering quoted strings that contain \n characters). The following ** routines take care of that. */ #if (defined(_WIN32) || defined(WIN32)) && !SQLITE_OS_WINRT static void setBinaryMode(FILE *file, int isOutput){ if( isOutput ) fflush(file); _setmode(_fileno(file), _O_BINARY); } static void setTextMode(FILE *file, int isOutput){ if( isOutput ) fflush(file); _setmode(_fileno(file), _O_TEXT); } #else # define setBinaryMode(X,Y) # define setTextMode(X,Y) #endif /* True if the timer is enabled */ static int enableTimer = 0; /* A version of strcmp() that works with NULL values */ static int cli_strcmp(const char *a, const char *b){ |
︙ | ︙ | |||
1358 1359 1360 1361 1362 1363 1364 | ** Print the timing results. */ static void endTimer(void){ if( enableTimer ){ sqlite3_int64 iEnd = timeOfDay(); struct rusage sEnd; getrusage(RUSAGE_SELF, &sEnd); | | | | | | 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 | ** Print the timing results. */ static void endTimer(void){ if( enableTimer ){ sqlite3_int64 iEnd = timeOfDay(); struct rusage sEnd; getrusage(RUSAGE_SELF, &sEnd); printf("Run Time: real %.3f user %f sys %f\n", (iEnd - iBegin)*0.001, timeDiff(&sBegin.ru_utime, &sEnd.ru_utime), timeDiff(&sBegin.ru_stime, &sEnd.ru_stime)); } } #define BEGIN_TIMER beginTimer() #define END_TIMER endTimer() #define HAS_TIMER 1 |
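The hunk above changes endTimer() to report timings with plain printf(). As a minimal standalone sketch of the same report (the helper names time_of_day_ms, tv_seconds, and report_run_time are illustrative, not the shell's; the shell's own timeOfDay() and timeDiff() are assumed to behave similarly):

```c
#include <assert.h>
#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

/* Millisecond wall clock, similar in spirit to the shell's timeOfDay(). */
static long long time_of_day_ms(void){
  struct timeval tv;
  gettimeofday(&tv, 0);
  return (long long)tv.tv_sec*1000 + tv.tv_usec/1000;
}

/* Convert a struct timeval (as filled in by getrusage) to seconds,
** playing the role of the shell's timeDiff() per-field conversion. */
static double tv_seconds(const struct timeval *p){
  return p->tv_sec + p->tv_usec/1000000.0;
}

/* Print a report in the same shape as the shell's endTimer(). */
static void report_run_time(long long msBegin, long long msEnd,
                            const struct rusage *pBegin,
                            const struct rusage *pEnd){
  printf("Run Time: real %.3f user %f sys %f\n",
         (msEnd - msBegin)*0.001,
         tv_seconds(&pEnd->ru_utime) - tv_seconds(&pBegin->ru_utime),
         tv_seconds(&pEnd->ru_stime) - tv_seconds(&pBegin->ru_stime));
}
```

Note the wall-clock delta is in milliseconds, hence the *0.001 to print seconds.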
︙ | ︙ | |||
1437 1438 1439 1440 1441 1442 1443 | ** Print the timing results. */ static void endTimer(void){ if( enableTimer && getProcessTimesAddr){ FILETIME ftCreation, ftExit, ftKernelEnd, ftUserEnd; sqlite3_int64 ftWallEnd = timeOfDay(); getProcessTimesAddr(hProcess,&ftCreation,&ftExit,&ftKernelEnd,&ftUserEnd); | | | | | | 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 | ** Print the timing results. */ static void endTimer(void){ if( enableTimer && getProcessTimesAddr){ FILETIME ftCreation, ftExit, ftKernelEnd, ftUserEnd; sqlite3_int64 ftWallEnd = timeOfDay(); getProcessTimesAddr(hProcess,&ftCreation,&ftExit,&ftKernelEnd,&ftUserEnd); printf("Run Time: real %.3f user %f sys %f\n", (ftWallEnd - ftWallBegin)*0.001, timeDiff(&ftUserBegin, &ftUserEnd), timeDiff(&ftKernelBegin, &ftKernelEnd)); } } #define BEGIN_TIMER beginTimer() #define END_TIMER endTimer() #define HAS_TIMER hasTimer() |
︙ | ︙ | |||
1477 1478 1479 1480 1481 1482 1483 | /* ** Treat stdin as an interactive input if the following variable ** is true. Otherwise, assume stdin is connected to a file or pipe. */ static int stdin_is_interactive = 1; /* | > > > > > > > > > > > > > > > > > > > | | | | 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 | /* ** Treat stdin as an interactive input if the following variable ** is true. Otherwise, assume stdin is connected to a file or pipe. */ static int stdin_is_interactive = 1; /* ** If build is for non-RT Windows, without 3rd-party line editing, ** console input and output may be done in a UTF-8 compatible way, ** if the OS is capable of it and the --no-utf8 option is not seen. */ #if (defined(_WIN32) || defined(WIN32)) && SHELL_USE_LOCAL_GETLINE \ && !defined(SHELL_OMIT_WIN_UTF8) && !SQLITE_OS_WINRT # define SHELL_WIN_UTF8_OPT 1 /* Record whether to do UTF-8 console I/O translation per stream. */ static int console_utf8_in = 0; static int console_utf8_out = 0; /* Record whether can do UTF-8 or --no-utf8 seen in invocation. */ static int mbcs_opted = 1; /* Assume cannot do until shown otherwise. */ #else # define console_utf8_in 0 # define console_utf8_out 0 # define SHELL_WIN_UTF8_OPT 0 #endif /* ** On Windows systems we have to know if standard output is a console ** in order to translate UTF-8 into MBCS. The following variable is ** true if translation is required. */ static int stdout_is_console = 1; /* ** The following is the open SQLite database. We make a pointer ** to this database a static variable so that it can be accessed ** by the SIGINT handler to interrupt database processing. |
︙ | ︙ | |||
1601 1602 1603 1604 1605 1606 1607 | shell_strncpy(dynPrompt.dynamicPrompt, "(..", 4); }else if( dynPrompt.inParenLevel<0 ){ shell_strncpy(dynPrompt.dynamicPrompt, ")x!", 4); }else{ shell_strncpy(dynPrompt.dynamicPrompt, "(x.", 4); dynPrompt.dynamicPrompt[2] = (char)('0'+dynPrompt.inParenLevel); } | | < > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > | | 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 | shell_strncpy(dynPrompt.dynamicPrompt, "(..", 4); }else if( dynPrompt.inParenLevel<0 ){ shell_strncpy(dynPrompt.dynamicPrompt, ")x!", 4); }else{ 
shell_strncpy(dynPrompt.dynamicPrompt, "(x.", 4); dynPrompt.dynamicPrompt[2] = (char)('0'+dynPrompt.inParenLevel); } shell_strncpy(dynPrompt.dynamicPrompt+3, continuePrompt+3, PROMPT_LEN_MAX-4); } } return dynPrompt.dynamicPrompt; } #endif /* !defined(SQLITE_OMIT_DYNAPROMPT) */ #if SHELL_WIN_UTF8_OPT /* Following struct is used for UTF-8 console I/O. */ static struct ConsoleState { int stdinEof; /* EOF has been seen on console input */ int infsMode; /* Input file stream mode upon shell start */ UINT inCodePage; /* Input code page upon shell start */ UINT outCodePage; /* Output code page upon shell start */ HANDLE hConsole; /* Console input or output handle */ DWORD consoleMode; /* Console mode upon shell start */ } conState = { 0, 0, 0, 0, INVALID_HANDLE_VALUE, 0 }; #ifndef _O_U16TEXT /* For build environments lacking this constant: */ # define _O_U16TEXT 0x20000 #endif /* ** If given stream number is a console, return 1 and get some attributes, ** else return 0 and set the output attributes to invalid values. */ static short console_attrs(unsigned stnum, HANDLE *pH, DWORD *pConsMode){ static int stid[3] = { STD_INPUT_HANDLE,STD_OUTPUT_HANDLE,STD_ERROR_HANDLE }; HANDLE h; *pH = INVALID_HANDLE_VALUE; *pConsMode = 0; if( stnum > 2 ) return 0; h = GetStdHandle(stid[stnum]); if( h!=*pH && GetFileType(h)==FILE_TYPE_CHAR && GetConsoleMode(h,pConsMode) ){ *pH = h; return 1; } return 0; } /* ** Perform a runtime test of Windows console to determine if it can ** do char-stream I/O correctly when the code page is set to CP_UTF8. ** Returns are: 1 => yes it can, 0 => no it cannot ** ** The console's output code page is momentarily set, then restored. ** So this should only be run when the process is given use of the ** console for either input or output. */ static short ConsoleDoesUTF8(void){ UINT ocp = GetConsoleOutputCP(); const char TrialUtf8[] = { '\xC8', '\xAB' }; /* "ȫ" or 2 MBCS characters */ WCHAR aReadBack[1] = { 0 }; /* Read back as 0x022B when decoded as UTF-8. 
*/ CONSOLE_SCREEN_BUFFER_INFO csbInfo = {0}; /* Create an inactive screen buffer with which to do the experiment. */ HANDLE hCSB = CreateConsoleScreenBuffer(GENERIC_READ|GENERIC_WRITE, 0, 0, CONSOLE_TEXTMODE_BUFFER, NULL); if( hCSB!=INVALID_HANDLE_VALUE ){ COORD cpos = {0,0}; DWORD rbc; SetConsoleCursorPosition(hCSB, cpos); SetConsoleOutputCP(CP_UTF8); /* Write 2 chars which are a single character in UTF-8 but more in MBCS. */ WriteConsoleA(hCSB, TrialUtf8, sizeof(TrialUtf8), NULL, NULL); ReadConsoleOutputCharacterW(hCSB, &aReadBack[0], 1, cpos, &rbc); GetConsoleScreenBufferInfo(hCSB, &csbInfo); SetConsoleOutputCP(ocp); CloseHandle(hCSB); } /* Return 1 if cursor advanced by 1 position, else 0. */ return (short)(csbInfo.dwCursorPosition.X == 1 && aReadBack[0] == 0x022B); } static short in_console = 0; static short out_console = 0; /* ** Determine whether either normal I/O stream is the console, ** and whether it can do UTF-8 translation, setting globals ** in_console, out_console and mbcs_opted accordingly. */ static void probe_console(void){ HANDLE h; DWORD cMode; in_console = console_attrs(0, &h, &cMode); out_console = console_attrs(1, &h, &cMode); if( in_console || out_console ) mbcs_opted = !ConsoleDoesUTF8(); } /* ** If console is used for normal I/O, absent a --no-utf8 option, ** prepare console for UTF-8 input (from either typing or suitable ** paste operations) and/or for UTF-8 output rendering. ** ** The console state upon entry is preserved, in conState, so that ** console_restore() can later restore the same console state. ** ** The globals console_utf8_in and console_utf8_out are set, for ** later use in selecting UTF-8 or MBCS console I/O translations. ** This routine depends upon globals set by probe_console(). */ static void console_prepare_utf8(void){ struct ConsoleState csWork = { 0, 0, 0, 0, INVALID_HANDLE_VALUE, 0 }; console_utf8_in = console_utf8_out = 0; if( (!in_console && !out_console) || mbcs_opted ) return; console_attrs((in_console)? 
0 : 1, &conState.hConsole, &conState.consoleMode); conState.inCodePage = GetConsoleCP(); conState.outCodePage = GetConsoleOutputCP(); if( in_console ){ SetConsoleCP(CP_UTF8); DWORD newConsoleMode = conState.consoleMode | ENABLE_LINE_INPUT | ENABLE_PROCESSED_INPUT; SetConsoleMode(conState.hConsole, newConsoleMode); conState.infsMode = _setmode(_fileno(stdin), _O_U16TEXT); console_utf8_in = 1; } if( out_console ){ SetConsoleOutputCP(CP_UTF8); console_utf8_out = 1; } } /* ** Undo the effects of console_prepare_utf8(), if any. */ static void SQLITE_CDECL console_restore(void){ if( (console_utf8_in||console_utf8_out) && conState.hConsole!=INVALID_HANDLE_VALUE ){ if( console_utf8_in ){ SetConsoleCP(conState.inCodePage); _setmode(_fileno(stdin), conState.infsMode); } if( console_utf8_out ) SetConsoleOutputCP(conState.outCodePage); SetConsoleMode(conState.hConsole, conState.consoleMode); /* Avoid multiple calls. */ conState.hConsole = INVALID_HANDLE_VALUE; conState.consoleMode = 0; console_utf8_in = 0; console_utf8_out = 0; } } /* ** Collect input like fgets(...) with special provisions for input ** from the Windows console to get around its strange coding issues. ** Defers to plain fgets() when input is not interactive or when the ** UTF-8 input is unavailable or opted out. */ static char* utf8_fgets(char *buf, int ncmax, FILE *fin){ if( fin==0 ) fin = stdin; if( fin==stdin && stdin_is_interactive && console_utf8_in ){ # define SQLITE_IALIM 150 wchar_t wbuf[SQLITE_IALIM]; int lend = 0; int noc = 0; if( ncmax==0 || conState.stdinEof ) return 0; buf[0] = 0; while( noc<ncmax-7-1 && !lend ){ /* There is room for at least 2 more characters and a 0-terminator. */ int na = (ncmax > SQLITE_IALIM*4+1 + noc) ? 
SQLITE_IALIM : (ncmax-1 - noc)/4; # undef SQLITE_IALIM DWORD nbr = 0; BOOL bRC = ReadConsoleW(conState.hConsole, wbuf, na, &nbr, 0); if( !bRC || (noc==0 && nbr==0) ) return 0; if( nbr > 0 ){ int nmb = WideCharToMultiByte(CP_UTF8,WC_COMPOSITECHECK|WC_DEFAULTCHAR, wbuf,nbr,0,0,0,0); if( nmb !=0 && noc+nmb <= ncmax ){ int iseg = noc; nmb = WideCharToMultiByte(CP_UTF8,WC_COMPOSITECHECK|WC_DEFAULTCHAR, wbuf,nbr,buf+noc,nmb,0,0); noc += nmb; /* Fixup line-ends as coded by Windows for CR (or "Enter".)*/ if( noc > 0 ){ if( buf[noc-1]=='\n' ){ lend = 1; if( noc > 1 && buf[noc-2]=='\r' ){ buf[noc-2] = '\n'; --noc; } } } /* Check for ^Z (anywhere in line) too. */ while( iseg < noc ){ if( buf[iseg]==0x1a ){ conState.stdinEof = 1; noc = iseg; /* Chop ^Z and anything following. */ break; } ++iseg; } }else break; /* Drop apparent garbage in. (Could assert.) */ }else break; } /* If got nothing, (after ^Z chop), must be at end-of-file. */ if( noc == 0 ) return 0; buf[noc] = 0; return buf; }else{ return fgets(buf, ncmax, fin); } } # define fgets(b,n,f) utf8_fgets(b,n,f) #endif /* SHELL_WIN_UTF8_OPT */ /* ** Render output like fprintf(). Except, if the output is going to the ** console and if this is running on a Windows machine, and if UTF-8 ** output unavailable (or available but opted out), translate the ** output from UTF-8 into MBCS for output through 8-bit stdout stream. ** (Without -no-utf8, no translation is needed and must not be done.) */ #if defined(_WIN32) || defined(WIN32) void utf8_printf(FILE *out, const char *zFormat, ...){ va_list ap; va_start(ap, zFormat); if( stdout_is_console && (out==stdout || out==stderr) && !console_utf8_out ){ char *z1 = sqlite3_vmprintf(zFormat, ap); char *z2 = sqlite3_win32_utf8_to_mbcs_v2(z1, 0); sqlite3_free(z1); fputs(z2, out); sqlite3_free(z2); }else{ vfprintf(out, zFormat, ap); } va_end(ap); } #elif !defined(utf8_printf) # define utf8_printf fprintf #endif /* ** Render output like fprintf(). 
This should not be used on anything that ** includes string formatting (e.g. "%s"). */ #if !defined(raw_printf) # define raw_printf fprintf #endif /* Indicate out-of-memory and exit. */ static void shell_out_of_memory(void){ raw_printf(stderr,"Error: out of memory\n"); exit(1); } /* Check a pointer to see if it is NULL. If it is NULL, exit with an ** out-of-memory error. */ static void shell_check_oom(const void *p){ |
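The utf8_fgets() added above post-processes decoded console input: the "\r\n" that the Windows console delivers for Enter is rewritten to a single "\n", and a ^Z (0x1a) anywhere in the line truncates it and marks end-of-input. A portable sketch of just that fixup step (fixup_console_line is an illustrative name, not a shell function):

```c
#include <assert.h>
#include <string.h>

/* Normalize a console line in place: turn a trailing "\r\n" into "\n"
** and chop at the first ^Z (0x1a), as utf8_fgets() does after decoding.
** Returns the new length and sets *pEof when a ^Z was seen. */
static int fixup_console_line(char *buf, int noc, int *pEof){
  int i;
  if( noc>1 && buf[noc-1]=='\n' && buf[noc-2]=='\r' ){
    buf[noc-2] = '\n';   /* collapse CR+LF to a single LF */
    --noc;
  }
  for(i=0; i<noc; i++){
    if( buf[i]==0x1a ){  /* ^Z: console end-of-input marker */
      *pEof = 1;
      noc = i;           /* drop the ^Z and anything following */
      break;
    }
  }
  buf[noc] = 0;
  return noc;
}
```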
︙ | ︙ | |||
1643 1644 1645 1646 1647 1648 1649 | static void SQLITE_CDECL iotracePrintf(const char *zFormat, ...){ va_list ap; char *z; if( iotrace==0 ) return; va_start(ap, zFormat); z = sqlite3_vmprintf(zFormat, ap); va_end(ap); | | | | | | 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 | static void SQLITE_CDECL iotracePrintf(const char *zFormat, ...){ va_list ap; char *z; if( iotrace==0 ) return; va_start(ap, zFormat); z = sqlite3_vmprintf(zFormat, ap); va_end(ap); utf8_printf(iotrace, "%s", z); sqlite3_free(z); } #endif /* ** Output string zUtf to stream pOut as w characters. If w is negative, ** then right-justify the text. W is the width in UTF-8 characters, not ** in bytes. This is different from the %*.*s specification in printf ** since with %*.*s the width is measured in bytes, not characters. */ static void utf8_width_print(FILE *pOut, int w, const char *zUtf){ int i; int n; int aw = w<0 ? -w : w; if( zUtf==0 ) zUtf = ""; for(i=n=0; zUtf[i]; i++){ if( (zUtf[i]&0xc0)!=0x80 ){ n++; if( n==aw ){ do{ i++; }while( (zUtf[i]&0xc0)==0x80 ); break; } } } if( n>=aw ){ utf8_printf(pOut, "%.*s", i, zUtf); }else if( w<0 ){ utf8_printf(pOut, "%*s%s", aw-n, "", zUtf); }else{ utf8_printf(pOut, "%s%*s", zUtf, aw-n, ""); } } /* ** Determines if a string is a number or not. */
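utf8_width_print() above measures width in characters, not bytes, by counting only bytes that start a UTF-8 character: continuation bytes always have the bit pattern 10xxxxxx, so (c & 0xc0) == 0x80 identifies them. A minimal sketch of that counting rule in isolation (utf8_char_count is an illustrative name, not a shell function):

```c
#include <assert.h>

/* Count UTF-8 characters in a NUL-terminated string. Bytes of the form
** 10xxxxxx are continuation bytes and do not start a new character. */
static int utf8_char_count(const char *z){
  int n = 0;
  for(; *z; z++){
    if( ((unsigned char)*z & 0xc0)!=0x80 ) n++;
  }
  return n;
}
```

This is exactly why %*.*s cannot be used for column alignment: its width is in bytes, so multi-byte characters would be under-padded.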
︙ | ︙ | |||
1733 1734 1735 1736 1737 1738 1739 | /* ** Return open FILE * if zFile exists, can be opened for read ** and is an ordinary file or a character stream source. ** Otherwise return 0. */ static FILE * openChrSource(const char *zFile){ | | | | | 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 | /* ** Return open FILE * if zFile exists, can be opened for read ** and is an ordinary file or a character stream source. ** Otherwise return 0. */ static FILE * openChrSource(const char *zFile){ #ifdef _WIN32 struct _stat x = {0}; # define STAT_CHR_SRC(mode) ((mode & (_S_IFCHR|_S_IFIFO|_S_IFREG))!=0) /* On Windows, open first, then check the stream nature. This order ** is necessary because _stat() and sibs, when checking a named pipe, ** effectively break the pipe as its supplier sees it. */ FILE *rv = fopen(zFile, "rb"); if( rv==0 ) return 0; if( _fstat(_fileno(rv), &x) != 0 || !STAT_CHR_SRC(x.st_mode)){ fclose(rv); rv = 0; } return rv; #else struct stat x = {0}; |
︙ | ︙ | |||
1796 1797 1798 1799 1800 1801 1802 1803 1804 1805 1806 1807 1808 1809 | if( n>0 && zLine[n-1]=='\n' ){ n--; if( n>0 && zLine[n-1]=='\r' ) n--; zLine[n] = 0; break; } } return zLine; } /* ** Retrieve a single line of input text. ** ** If in==0 then read from standard input and prompt before each line. | > > > > > > > > > > > > > > > > > | 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 | if( n>0 && zLine[n-1]=='\n' ){ n--; if( n>0 && zLine[n-1]=='\r' ) n--; zLine[n] = 0; break; } } #if defined(_WIN32) || defined(WIN32) /* For interactive input on Windows systems, with -no-utf8, ** translate the multi-byte characterset characters into UTF-8. ** This is the translation that predates console UTF-8 input. */ if( stdin_is_interactive && in==stdin && !console_utf8_in ){ char *zTrans = sqlite3_win32_mbcs_to_utf8_v2(zLine, 0); if( zTrans ){ i64 nTrans = strlen(zTrans)+1; if( nTrans>nLine ){ zLine = realloc(zLine, nTrans); shell_check_oom(zLine); } memcpy(zLine, zTrans, nTrans); sqlite3_free(zTrans); } } #endif /* defined(_WIN32) || defined(WIN32) */ return zLine; } /* ** Retrieve a single line of input text. ** ** If in==0 then read from standard input and prompt before each line. |
︙ | ︙ | |||
1822 1823 1824 1825 1826 1827 1828 | char *zPrompt; char *zResult; if( in!=0 ){ zResult = local_getline(zPrior, in); }else{ zPrompt = isContinuation ? CONTINUATION_PROMPT : mainPrompt; #if SHELL_USE_LOCAL_GETLINE | | | 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 | char *zPrompt; char *zResult; if( in!=0 ){ zResult = local_getline(zPrior, in); }else{ zPrompt = isContinuation ? CONTINUATION_PROMPT : mainPrompt; #if SHELL_USE_LOCAL_GETLINE printf("%s", zPrompt); fflush(stdout); do{ zResult = local_getline(zPrior, stdin); zPrior = 0; /* ^C trap creates a false EOF, so let "interrupt" thread catch up. */ if( zResult==0 ) sqlite3_sleep(50); }while( zResult==0 && seenInterrupt>0 ); |
︙ | ︙ | |||
2069 2070 2071 2072 2073 2074 2075 | sqlite3_value **apVal ){ double r = sqlite3_value_double(apVal[0]); int n = nVal>=2 ? sqlite3_value_int(apVal[1]) : 26; char z[400]; if( n<1 ) n = 1; if( n>350 ) n = 350; | | | 1326 1327 1328 1329 1330 1331 1332 1333 1334 1335 1336 1337 1338 1339 1340 | sqlite3_value **apVal ){ double r = sqlite3_value_double(apVal[0]); int n = nVal>=2 ? sqlite3_value_int(apVal[1]) : 26; char z[400]; if( n<1 ) n = 1; if( n>350 ) n = 350; snprintf(z, sizeof(z)-1, "%#+.*e", n, r); sqlite3_result_text(pCtx, z, -1, SQLITE_TRANSIENT); } /* ** SQL function: shell_module_schema(X) ** |
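The hunk above switches the helper to snprintf() with the format "%#+.*e": '+' forces a sign, '#' forces a decimal point, and the '*' takes the precision n from the argument list after clamping it to 1..350. A small sketch of that formatting, assuming the same clamping (fmt_exp is an illustrative wrapper, not the shell's function):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format r in exponential notation with a forced sign and decimal point,
** precision n clamped to 1..350 as in the shell code above. */
static void fmt_exp(char *z, size_t nz, double r, int n){
  if( n<1 ) n = 1;
  if( n>350 ) n = 350;
  snprintf(z, nz-1, "%#+.*e", n, r);
}
```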
︙ | ︙ | |||
5727 5728 5729 5730 5731 5732 5733 | #ifndef SQLITE_OMIT_VIRTUALTABLE /* ** Return that member of a generate_series(...) sequence whose 0-based ** index is ix. The 0th member is given by smBase. The sequence members ** progress per ix increment by smStep. */ | < | | < | < < | | | | 4984 4985 4986 4987 4988 4989 4990 4991 4992 4993 4994 4995 4996 4997 4998 4999 5000 5001 5002 5003 5004 5005 5006 5007 | #ifndef SQLITE_OMIT_VIRTUALTABLE /* ** Return that member of a generate_series(...) sequence whose 0-based ** index is ix. The 0th member is given by smBase. The sequence members ** progress per ix increment by smStep. */ static sqlite3_int64 genSeqMember(sqlite3_int64 smBase, sqlite3_int64 smStep, sqlite3_uint64 ix){ if( ix>=(sqlite3_uint64)LLONG_MAX ){ /* Get ix into signed i64 range. */ ix -= (sqlite3_uint64)LLONG_MAX; /* With 2's complement ALU, this next can be 1 step, but is split into * 2 for UBSAN's satisfaction (and hypothetical 1's complement ALUs.) */ smBase += (LLONG_MAX/2) * smStep; smBase += (LLONG_MAX - LLONG_MAX/2) * smStep; } /* Under UBSAN (or on 1's complement machines), must do this last term * in steps to avoid the dreaded (and harmless) signed multiply overflow. */ if( ix>=2 ){ sqlite3_int64 ix2 = (sqlite3_int64)ix/2; smBase += ix2*smStep; ix -= ix2;
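genSeqMember() above evaluates smBase + ix*smStep while avoiding signed-multiply overflow by splitting ix*smStep into two halves, ix/2 steps and then the remaining ix - ix/2 steps. A reduced sketch showing that the split computes the same value for in-range inputs (seq_member is an illustrative simplification, not the shell's code, and omits the LLONG_MAX re-basing of large unsigned ix):

```c
#include <assert.h>

/* Reduced sketch of genSeqMember()'s split: advance by ix/2 steps,
** then by the remaining ix - ix/2 steps. */
static long long seq_member(long long base, long long step, long long ix){
  long long ix2 = ix/2;
  base += ix2*step;          /* first half of the increments */
  base += (ix - ix2)*step;   /* remaining increments */
  return base;               /* equals base + ix*step */
}
```

Each half-product stays roughly half the magnitude of the full ix*step, which is what keeps UBSAN quiet in the original.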
︙ | ︙ | |||
13804 13805 13806 13807 13808 13809 13810 | #endif /* Copy the entire schema of database [db] into [dbm]. */ if( rc==SQLITE_OK ){ sqlite3_stmt *pSql = 0; rc = idxPrintfPrepareStmt(pNew->db, &pSql, pzErrmsg, "SELECT sql FROM sqlite_schema WHERE name NOT LIKE 'sqlite_%%'" | | | 13057 13058 13059 13060 13061 13062 13063 13064 13065 13066 13067 13068 13069 13070 13071 | #endif /* Copy the entire schema of database [db] into [dbm]. */ if( rc==SQLITE_OK ){ sqlite3_stmt *pSql = 0; rc = idxPrintfPrepareStmt(pNew->db, &pSql, pzErrmsg, "SELECT sql FROM sqlite_schema WHERE name NOT LIKE 'sqlite_%%'" " AND sql NOT LIKE 'CREATE VIRTUAL %%'" ); while( rc==SQLITE_OK && SQLITE_ROW==sqlite3_step(pSql) ){ const char *zSql = (const char*)sqlite3_column_text(pSql, 0); if( zSql ) rc = sqlite3_exec(pNew->dbm, zSql, 0, 0, pzErrmsg); } idxFinalize(&rc, pSql); } |
︙ | ︙ | |||
14006 14007 14008 14009 14010 14011 14012 | } } #endif /* ifndef SQLITE_OMIT_VIRTUALTABLE */ /************************* End ../ext/expert/sqlite3expert.c ********************/ | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < 
< < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | 13259 13260 13261 13262 13263 13264 13265 13266 13267 13268 13269 13270 13271 13272 | } } #endif /* ifndef SQLITE_OMIT_VIRTUALTABLE */ /************************* End ../ext/expert/sqlite3expert.c ********************/ #if !defined(SQLITE_OMIT_VIRTUALTABLE) && defined(SQLITE_ENABLE_DBPAGE_VTAB) #define SQLITE_SHELL_HAVE_RECOVER 1 #else #define SQLITE_SHELL_HAVE_RECOVER 0 #endif #if SQLITE_SHELL_HAVE_RECOVER /************************* Begin ../ext/recover/sqlite3recover.h ******************/ |
︙ | ︙ | |||
15880 15881 15882 15883 15884 15885 15886 | sqlite3_result_blob(pCtx, pData, n, SQLITE_TRANSIENT); } } } } } | < < < < < < < < < | 14015 14016 14017 14018 14019 14020 14021 14022 14023 14024 14025 14026 14027 14028 | sqlite3_result_blob(pCtx, pData, n, SQLITE_TRANSIENT); } } } } } /* ** Move an sqlite_dbdata or sqlite_dbptr cursor to the next entry. */ static int dbdataNext(sqlite3_vtab_cursor *pCursor){ DbdataCursor *pCsr = (DbdataCursor*)pCursor; DbdataTable *pTab = (DbdataTable*)pCursor->pVtab; |
︙ | ︙ | |||
15917 15918 15919 15920 15921 15922 15923 | if( pCsr->bOnePage ) return SQLITE_OK; pCsr->iPgno++; } assert( iOff+3+2<=pCsr->nPage ); pCsr->iCell = pTab->bPtr ? -2 : 0; pCsr->nCell = get_uint16(&pCsr->aPage[iOff+3]); | < < < | 14043 14044 14045 14046 14047 14048 14049 14050 14051 14052 14053 14054 14055 14056 | if( pCsr->bOnePage ) return SQLITE_OK; pCsr->iPgno++; } assert( iOff+3+2<=pCsr->nPage ); pCsr->iCell = pTab->bPtr ? -2 : 0; pCsr->nCell = get_uint16(&pCsr->aPage[iOff+3]); } if( pTab->bPtr ){ if( pCsr->aPage[iOff]!=0x02 && pCsr->aPage[iOff]!=0x05 ){ pCsr->iCell = pCsr->nCell; } pCsr->iCell++; |
︙ | ︙ | |||
15964 15965 15966 15967 15968 15969 15970 | pCsr->iCell = pCsr->nCell; break; } if( pCsr->iCell>=pCsr->nCell ){ bNextPage = 1; }else{ | < > | | | < | 14087 14088 14089 14090 14091 14092 14093 14094 14095 14096 14097 14098 14099 14100 14101 14102 14103 14104 14105 14106 14107 14108 14109 14110 14111 14112 14113 14114 14115 14116 | pCsr->iCell = pCsr->nCell; break; } if( pCsr->iCell>=pCsr->nCell ){ bNextPage = 1; }else{ iOff += 8 + nPointer + pCsr->iCell*2; if( iOff>pCsr->nPage ){ bNextPage = 1; }else{ iOff = get_uint16(&pCsr->aPage[iOff]); } /* For an interior node cell, skip past the child-page number */ iOff += nPointer; /* Load the "byte of payload including overflow" field */ if( bNextPage || iOff>pCsr->nPage ){ bNextPage = 1; }else{ iOff += dbdataGetVarintU32(&pCsr->aPage[iOff], &nPayload); } /* If this is a leaf intkey cell, load the rowid */ if( bHasRowid && !bNextPage && iOff<pCsr->nPage ){ iOff += dbdataGetVarint(&pCsr->aPage[iOff], &pCsr->iIntkey); } |
︙ | ︙ | |||
16059 16060 16061 16062 16063 16064 16065 | pCsr->iField = (bHasRowid ? -1 : 0); } } }else{ pCsr->iField++; if( pCsr->iField>0 ){ sqlite3_int64 iType; | | < < | 14181 14182 14183 14184 14185 14186 14187 14188 14189 14190 14191 14192 14193 14194 14195 | pCsr->iField = (bHasRowid ? -1 : 0); } } }else{ pCsr->iField++; if( pCsr->iField>0 ){ sqlite3_int64 iType; if( pCsr->pHdrPtr>&pCsr->pRec[pCsr->nRec] ){ bNextPage = 1; }else{ int szField = 0; pCsr->pHdrPtr += dbdataGetVarintU32(pCsr->pHdrPtr, &iType); szField = dbdataValueBytes(iType); if( (pCsr->nRec - (pCsr->pPtr - pCsr->pRec))<szField ){ pCsr->pPtr = &pCsr->pRec[pCsr->nRec]; |
︙ | ︙ | |||
17549 17550 17551 17552 17553 17554 17555 | } rc = sqlite3_exec(p->dbOut, zSql, 0, 0, 0); if( rc==SQLITE_OK ){ recoverSqlCallback(p, zSql); if( bTable && !bVirtual ){ if( SQLITE_ROW==sqlite3_step(pTblname) ){ const char *zTbl = (const char*)sqlite3_column_text(pTblname, 0); | | | 15669 15670 15671 15672 15673 15674 15675 15676 15677 15678 15679 15680 15681 15682 15683 | } rc = sqlite3_exec(p->dbOut, zSql, 0, 0, 0); if( rc==SQLITE_OK ){ recoverSqlCallback(p, zSql); if( bTable && !bVirtual ){ if( SQLITE_ROW==sqlite3_step(pTblname) ){ const char *zTbl = (const char*)sqlite3_column_text(pTblname, 0); recoverAddTable(p, zTbl, iRoot); } recoverReset(p, pTblname); } }else if( rc!=SQLITE_ERROR ){ recoverDbError(p, p->dbOut); } sqlite3_free(zFree); |
︙ | ︙ | |||
19299 19300 19301 19302 19303 19304 19305 | u8 scanstatsOn; /* True to display scan stats before each finalize */ u8 openMode; /* SHELL_OPEN_NORMAL, _APPENDVFS, or _ZIPFILE */ u8 doXdgOpen; /* Invoke start/open/xdg-open in output_reset() */ u8 nEqpLevel; /* Depth of the EQP output graph */ u8 eTraceType; /* SHELL_TRACE_* value for type of trace */ u8 bSafeMode; /* True to prohibit unsafe operations */ u8 bSafeModePersist; /* The long-term value of bSafeMode */ | < | 17419 17420 17421 17422 17423 17424 17425 17426 17427 17428 17429 17430 17431 17432 | u8 scanstatsOn; /* True to display scan stats before each finalize */ u8 openMode; /* SHELL_OPEN_NORMAL, _APPENDVFS, or _ZIPFILE */ u8 doXdgOpen; /* Invoke start/open/xdg-open in output_reset() */ u8 nEqpLevel; /* Depth of the EQP output graph */ u8 eTraceType; /* SHELL_TRACE_* value for type of trace */ u8 bSafeMode; /* True to prohibit unsafe operations */ u8 bSafeModePersist; /* The long-term value of bSafeMode */ ColModeOpts cmOpts; /* Option values affecting columnar mode output */ unsigned statsOn; /* True to display memory stats before each finalize */ unsigned mEqpLines; /* Mask of vertical lines in the EQP output graph */ int inputNesting; /* Track nesting level of .read and other redirects */ int outCount; /* Revert to stdout when reaching zero */ int cnt; /* Number of records displayed so far */ int lineno; /* Line number of last line read from in */ |
︙ | ︙ | |||
19493 19494 19495 19496 19497 19498 19499 | /* ** A callback for the sqlite3_log() interface. */ static void shellLog(void *pArg, int iErrCode, const char *zMsg){ ShellState *p = (ShellState*)pArg; if( p->pLog==0 ) return; | | | | | > | 17612 17613 17614 17615 17616 17617 17618 17619 17620 17621 17622 17623 17624 17625 17626 17627 17628 17629 17630 17631 17632 17633 17634 17635 17636 17637 17638 17639 17640 17641 17642 17643 17644 17645 17646 17647 17648 17649 17650 17651 17652 17653 17654 17655 17656 17657 17658 17659 17660 17661 17662 17663 | /* ** A callback for the sqlite3_log() interface. */ static void shellLog(void *pArg, int iErrCode, const char *zMsg){ ShellState *p = (ShellState*)pArg; if( p->pLog==0 ) return; utf8_printf(p->pLog, "(%d) %s\n", iErrCode, zMsg); fflush(p->pLog); } /* ** SQL function: shell_putsnl(X) ** ** Write the text X to the screen (or whatever output is being directed) ** adding a newline at the end, and then return X. */ static void shellPutsFunc( sqlite3_context *pCtx, int nVal, sqlite3_value **apVal ){ ShellState *p = (ShellState*)sqlite3_user_data(pCtx); (void)nVal; utf8_printf(p->out, "%s\n", sqlite3_value_text(apVal[0])); sqlite3_result_value(pCtx, apVal[0]); } /* ** If in safe mode, print an error message described by the arguments ** and exit immediately. */ static void failIfSafeMode( ShellState *p, const char *zErrMsg, ... ){ if( p->bSafeMode ){ va_list ap; char *zMsg; va_start(ap, zErrMsg); zMsg = sqlite3_vmprintf(zErrMsg, ap); va_end(ap); raw_printf(stderr, "line %d: ", p->lineno); utf8_printf(stderr, "%s\n", zMsg); exit(1); } } /* ** SQL function: edit(VALUE) ** edit(VALUE,EDITOR) |
︙ | ︙ | |||
19697 19698 19699 19700 19701 19702 19703 | memcpy(p->colSeparator, p->colSepPrior, sizeof(p->colSeparator)); memcpy(p->rowSeparator, p->rowSepPrior, sizeof(p->rowSeparator)); } /* ** Output the given string as a hex-encoded blob (eg. X'1234' ) */ | | | | 17817 17818 17819 17820 17821 17822 17823 17824 17825 17826 17827 17828 17829 17830 17831 17832 17833 17834 17835 17836 17837 17838 17839 17840 17841 17842 17843 17844 17845 17846 17847 17848 | memcpy(p->colSeparator, p->colSepPrior, sizeof(p->colSeparator)); memcpy(p->rowSeparator, p->rowSepPrior, sizeof(p->rowSeparator)); } /* ** Output the given string as a hex-encoded blob (eg. X'1234' ) */ static void output_hex_blob(FILE *out, const void *pBlob, int nBlob){ int i; unsigned char *aBlob = (unsigned char*)pBlob; char *zStr = sqlite3_malloc(nBlob*2 + 1); shell_check_oom(zStr); for(i=0; i<nBlob; i++){ static const char aHex[] = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f' }; zStr[i*2] = aHex[ (aBlob[i] >> 4) ]; zStr[i*2+1] = aHex[ (aBlob[i] & 0x0F) ]; } zStr[i*2] = '\0'; raw_printf(out,"X'%s'", zStr); sqlite3_free(zStr); } /* ** Find a string that is not found anywhere in z[]. Return a pointer ** to that string. ** |
︙ | ︙ | |||
19744 19745 19746 19747 19748 19749 19750 | } /* ** Output the given string as a quoted string using SQL quoting conventions. ** ** See also: output_quoted_escaped_string() */ | | < < | < | | | | | < | < < < | < < | < | | | | | | | | | | | < | < < < < < < < < < < < < < < < < < < < < | | < < < < < | | < < < | > > | < > > | | < < | | < < < < > > | < > | | < | < | | < < < < < < < | | | | < < < < < < < < | | | > > | | | > | < < < < > | < | | > > | > < | | | | | | | | | | 17864 17865 17866 17867 17868 17869 17870 17871 17872 17873 17874 17875 17876 17877 17878 17879 17880 17881 17882 17883 17884 17885 17886 17887 17888 17889 17890 17891 17892 17893 17894 17895 17896 17897 17898 17899 17900 17901 17902 17903 17904 17905 17906 17907 17908 17909 17910 17911 17912 17913 17914 17915 17916 17917 17918 17919 17920 17921 17922 17923 17924 17925 17926 17927 17928 17929 17930 17931 17932 17933 17934 17935 17936 17937 17938 17939 17940 17941 17942 17943 17944 17945 17946 17947 17948 17949 17950 17951 17952 17953 17954 17955 17956 17957 17958 17959 17960 17961 17962 17963 17964 17965 17966 17967 17968 17969 17970 17971 17972 17973 17974 17975 17976 17977 17978 17979 17980 17981 17982 17983 17984 17985 17986 17987 17988 17989 17990 17991 17992 17993 17994 17995 17996 17997 17998 17999 18000 18001 18002 18003 18004 18005 18006 18007 18008 18009 18010 18011 18012 18013 18014 18015 18016 18017 18018 18019 18020 18021 18022 18023 18024 18025 18026 18027 18028 18029 18030 18031 18032 18033 18034 18035 18036 18037 18038 18039 18040 18041 18042 18043 18044 18045 18046 18047 18048 18049 18050 18051 18052 18053 18054 18055 18056 18057 18058 18059 18060 18061 18062 18063 18064 18065 18066 18067 18068 18069 | } /* ** Output the given string as a quoted string using SQL quoting conventions. 
** ** See also: output_quoted_escaped_string() */ static void output_quoted_string(FILE *out, const char *z){ int i; char c; setBinaryMode(out, 1); if( z==0 ) return; for(i=0; (c = z[i])!=0 && c!='\''; i++){} if( c==0 ){ utf8_printf(out,"'%s'",z); }else{ raw_printf(out, "'"); while( *z ){ for(i=0; (c = z[i])!=0 && c!='\''; i++){} if( c=='\'' ) i++; if( i ){ utf8_printf(out, "%.*s", i, z); z += i; } if( c=='\'' ){ raw_printf(out, "'"); continue; } if( c==0 ){ break; } z++; } raw_printf(out, "'"); } setTextMode(out, 1); } /* ** Output the given string as a quoted string using SQL quoting conventions. ** Additionallly , escape the "\n" and "\r" characters so that they do not ** get corrupted by end-of-line translation facilities in some operating ** systems. ** ** This is like output_quoted_string() but with the addition of the \r\n ** escape mechanism. */ static void output_quoted_escaped_string(FILE *out, const char *z){ int i; char c; setBinaryMode(out, 1); for(i=0; (c = z[i])!=0 && c!='\'' && c!='\n' && c!='\r'; i++){} if( c==0 ){ utf8_printf(out,"'%s'",z); }else{ const char *zNL = 0; const char *zCR = 0; int nNL = 0; int nCR = 0; char zBuf1[20], zBuf2[20]; for(i=0; z[i]; i++){ if( z[i]=='\n' ) nNL++; if( z[i]=='\r' ) nCR++; } if( nNL ){ raw_printf(out, "replace("); zNL = unused_string(z, "\\n", "\\012", zBuf1); } if( nCR ){ raw_printf(out, "replace("); zCR = unused_string(z, "\\r", "\\015", zBuf2); } raw_printf(out, "'"); while( *z ){ for(i=0; (c = z[i])!=0 && c!='\n' && c!='\r' && c!='\''; i++){} if( c=='\'' ) i++; if( i ){ utf8_printf(out, "%.*s", i, z); z += i; } if( c=='\'' ){ raw_printf(out, "'"); continue; } if( c==0 ){ break; } z++; if( c=='\n' ){ raw_printf(out, "%s", zNL); continue; } raw_printf(out, "%s", zCR); } raw_printf(out, "'"); if( nCR ){ raw_printf(out, ",'%s',char(13))", zCR); } if( nNL ){ raw_printf(out, ",'%s',char(10))", zNL); } } setTextMode(out, 1); } /* ** Output the given string as a quoted according to C or TCL quoting rules. 
*/ static void output_c_string(FILE *out, const char *z){ unsigned int c; fputc('"', out); while( (c = *(z++))!=0 ){ if( c=='\\' ){ fputc(c, out); fputc(c, out); }else if( c=='"' ){ fputc('\\', out); fputc('"', out); }else if( c=='\t' ){ fputc('\\', out); fputc('t', out); }else if( c=='\n' ){ fputc('\\', out); fputc('n', out); }else if( c=='\r' ){ fputc('\\', out); fputc('r', out); }else if( !isprint(c&0xff) ){ raw_printf(out, "\\%03o", c&0xff); }else{ fputc(c, out); } } fputc('"', out); } /* ** Output the given string as a quoted according to JSON quoting rules. */ static void output_json_string(FILE *out, const char *z, i64 n){ unsigned int c; if( z==0 ) z = ""; if( n<0 ) n = strlen(z); fputc('"', out); while( n-- ){ c = *(z++); if( c=='\\' || c=='"' ){ fputc('\\', out); fputc(c, out); }else if( c<=0x1f ){ fputc('\\', out); if( c=='\b' ){ fputc('b', out); }else if( c=='\f' ){ fputc('f', out); }else if( c=='\n' ){ fputc('n', out); }else if( c=='\r' ){ fputc('r', out); }else if( c=='\t' ){ fputc('t', out); }else{ raw_printf(out, "u%04x",c); } }else{ fputc(c, out); } } fputc('"', out); } /* ** Output the given string with characters that are special to ** HTML escaped. */ static void output_html_string(FILE *out, const char *z){ int i; if( z==0 ) z = ""; while( *z ){ for(i=0; z[i] && z[i]!='<' && z[i]!='&' && z[i]!='>' && z[i]!='\"' && z[i]!='\''; i++){} if( i>0 ){ utf8_printf(out,"%.*s",i,z); } if( z[i]=='<' ){ raw_printf(out,"<"); }else if( z[i]=='&' ){ raw_printf(out,"&"); }else if( z[i]=='>' ){ raw_printf(out,">"); }else if( z[i]=='\"' ){ raw_printf(out,"""); }else if( z[i]=='\'' ){ raw_printf(out,"'"); }else{ break; } z += i + 1; } } |
︙ | ︙ | |||
20029 20030 20031 20032 20033 20034 20035 20036 | /* ** Output a single term of CSV. Actually, p->colSeparator is used for ** the separator, which may or may not be a comma. p->nullValue is ** the null value. Strings are quoted if necessary. The separator ** is only issued if bSep is true. */ static void output_csv(ShellState *p, const char *z, int bSep){ if( z==0 ){ | > | | | | | 18093 18094 18095 18096 18097 18098 18099 18100 18101 18102 18103 18104 18105 18106 18107 18108 18109 18110 18111 18112 18113 18114 18115 18116 18117 18118 18119 18120 18121 18122 18123 18124 18125 18126 18127 18128 | /* ** Output a single term of CSV. Actually, p->colSeparator is used for ** the separator, which may or may not be a comma. p->nullValue is ** the null value. Strings are quoted if necessary. The separator ** is only issued if bSep is true. */ static void output_csv(ShellState *p, const char *z, int bSep){ FILE *out = p->out; if( z==0 ){ utf8_printf(out,"%s",p->nullValue); }else{ unsigned i; for(i=0; z[i]; i++){ if( needCsvQuote[((unsigned char*)z)[i]] ){ i = 0; break; } } if( i==0 || strstr(z, p->colSeparator)!=0 ){ char *zQuoted = sqlite3_mprintf("\"%w\"", z); shell_check_oom(zQuoted); utf8_printf(out, "%s", zQuoted); sqlite3_free(zQuoted); }else{ utf8_printf(out, "%s", z); } } if( bSep ){ utf8_printf(p->out, "%s", p->colSeparator); } } /* ** This routine runs when the user presses Ctrl-C */ static void interrupt_handler(int NotUsed){ |
︙ | ︙ | |||
20157 20158 20159 20160 20161 20162 20163 | }; int i; const char *az[4]; az[0] = zA1; az[1] = zA2; az[2] = zA3; az[3] = zA4; | | | | | | | | 18222 18223 18224 18225 18226 18227 18228 18229 18230 18231 18232 18233 18234 18235 18236 18237 18238 18239 18240 18241 18242 18243 18244 18245 18246 18247 18248 18249 18250 18251 18252 18253 18254 18255 18256 18257 18258 18259 18260 18261 | }; int i; const char *az[4]; az[0] = zA1; az[1] = zA2; az[2] = zA3; az[3] = zA4; utf8_printf(p->out, "authorizer: %s", azAction[op]); for(i=0; i<4; i++){ raw_printf(p->out, " "); if( az[i] ){ output_c_string(p->out, az[i]); }else{ raw_printf(p->out, "NULL"); } } raw_printf(p->out, "\n"); if( p->bSafeMode ) (void)safeModeAuth(pClientData, op, zA1, zA2, zA3, zA4); return SQLITE_OK; } #endif /* ** Print a schema statement. Part of MODE_Semi and MODE_Pretty output. ** ** This routine converts some CREATE TABLE statements for shadow tables ** in FTS3/4/5 into CREATE TABLE IF NOT EXISTS statements. ** ** If the schema statement in z[] contains a start-of-comment and if ** sqlite3_complete() returns false, try to terminate the comment before ** printing the result. https://sqlite.org/forum/forumpost/d7be961c5c */ static void printSchemaLine(FILE *out, const char *z, const char *zTail){ char *zToFree = 0; if( z==0 ) return; if( zTail==0 ) return; if( zTail[0]==';' && (strstr(z, "/*")!=0 || strstr(z,"--")!=0) ){ const char *zOrig = z; static const char *azTerm[] = { "", "*/", "\n" }; int i; |
︙ | ︙ | |||
20204 20205 20206 20207 20208 20209 20210 | z = zNew; break; } sqlite3_free(zNew); } } if( sqlite3_strglob("CREATE TABLE ['\"]*", z)==0 ){ | | | | | | 18269 18270 18271 18272 18273 18274 18275 18276 18277 18278 18279 18280 18281 18282 18283 18284 18285 18286 18287 18288 18289 18290 18291 18292 | z = zNew; break; } sqlite3_free(zNew); } } if( sqlite3_strglob("CREATE TABLE ['\"]*", z)==0 ){ utf8_printf(out, "CREATE TABLE IF NOT EXISTS %s%s", z+13, zTail); }else{ utf8_printf(out, "%s%s", z, zTail); } sqlite3_free(zToFree); } static void printSchemaLineN(FILE *out, char *z, int n, const char *zTail){ char c = z[n]; z[n] = 0; printSchemaLine(out, z, zTail); z[n] = c; } /* ** Return true if string z[] has nothing but whitespace and comments to the ** end of the first line. */ |
︙ | ︙ | |||
20241 20242 20243 20244 20245 20246 20247 | */ static void eqp_append(ShellState *p, int iEqpId, int p2, const char *zText){ EQPGraphRow *pNew; i64 nText; if( zText==0 ) return; nText = strlen(zText); if( p->autoEQPtest ){ | | | 18306 18307 18308 18309 18310 18311 18312 18313 18314 18315 18316 18317 18318 18319 18320 | */ static void eqp_append(ShellState *p, int iEqpId, int p2, const char *zText){ EQPGraphRow *pNew; i64 nText; if( zText==0 ) return; nText = strlen(zText); if( p->autoEQPtest ){ utf8_printf(p->out, "%d,%d,%s\n", iEqpId, p2, zText); } pNew = sqlite3_malloc64( sizeof(*pNew) + nText ); shell_check_oom(pNew); pNew->iEqpId = iEqpId; pNew->iParentId = p2; memcpy(pNew->zText, zText, nText+1); pNew->pNext = 0; |
︙ | ︙ | |||
20289 20290 20291 20292 20293 20294 20295 | static void eqp_render_level(ShellState *p, int iEqpId){ EQPGraphRow *pRow, *pNext; i64 n = strlen(p->sGraph.zPrefix); char *z; for(pRow = eqp_next_row(p, iEqpId, 0); pRow; pRow = pNext){ pNext = eqp_next_row(p, iEqpId, pRow); z = pRow->zText; | > | | | | | | | | | | | | | | | | 18354 18355 18356 18357 18358 18359 18360 18361 18362 18363 18364 18365 18366 18367 18368 18369 18370 18371 18372 18373 18374 18375 18376 18377 18378 18379 18380 18381 18382 18383 18384 18385 18386 18387 18388 18389 18390 18391 18392 18393 18394 18395 18396 18397 18398 18399 18400 18401 18402 18403 18404 18405 18406 18407 18408 18409 18410 18411 18412 18413 18414 18415 18416 18417 18418 18419 18420 18421 18422 18423 18424 18425 18426 18427 18428 18429 18430 18431 18432 18433 18434 18435 18436 18437 18438 18439 18440 18441 18442 18443 18444 18445 18446 18447 18448 18449 18450 18451 18452 18453 18454 | static void eqp_render_level(ShellState *p, int iEqpId){ EQPGraphRow *pRow, *pNext; i64 n = strlen(p->sGraph.zPrefix); char *z; for(pRow = eqp_next_row(p, iEqpId, 0); pRow; pRow = pNext){ pNext = eqp_next_row(p, iEqpId, pRow); z = pRow->zText; utf8_printf(p->out, "%s%s%s\n", p->sGraph.zPrefix, pNext ? "|--" : "`--", z); if( n<(i64)sizeof(p->sGraph.zPrefix)-7 ){ memcpy(&p->sGraph.zPrefix[n], pNext ? 
"| " : " ", 4); eqp_render_level(p, pRow->iEqpId); p->sGraph.zPrefix[n] = 0; } } } /* ** Display and reset the EXPLAIN QUERY PLAN data */ static void eqp_render(ShellState *p, i64 nCycle){ EQPGraphRow *pRow = p->sGraph.pRow; if( pRow ){ if( pRow->zText[0]=='-' ){ if( pRow->pNext==0 ){ eqp_reset(p); return; } utf8_printf(p->out, "%s\n", pRow->zText+3); p->sGraph.pRow = pRow->pNext; sqlite3_free(pRow); }else if( nCycle>0 ){ utf8_printf(p->out, "QUERY PLAN (cycles=%lld [100%%])\n", nCycle); }else{ utf8_printf(p->out, "QUERY PLAN\n"); } p->sGraph.zPrefix[0] = 0; eqp_render_level(p, 0); eqp_reset(p); } } #ifndef SQLITE_OMIT_PROGRESS_CALLBACK /* ** Progress handler callback. */ static int progress_handler(void *pClientData) { ShellState *p = (ShellState*)pClientData; p->nProgress++; if( p->nProgress>=p->mxProgress && p->mxProgress>0 ){ raw_printf(p->out, "Progress limit reached (%u)\n", p->nProgress); if( p->flgProgress & SHELL_PROGRESS_RESET ) p->nProgress = 0; if( p->flgProgress & SHELL_PROGRESS_ONCE ) p->mxProgress = 0; return 1; } if( (p->flgProgress & SHELL_PROGRESS_QUIET)==0 ){ raw_printf(p->out, "Progress %u\n", p->nProgress); } return 0; } #endif /* SQLITE_OMIT_PROGRESS_CALLBACK */ /* ** Print N dashes */ static void print_dashes(FILE *out, int N){ const char zDash[] = "--------------------------------------------------"; const int nDash = sizeof(zDash) - 1; while( N>nDash ){ fputs(zDash, out); N -= nDash; } raw_printf(out, "%.*s", N, zDash); } /* ** Print a markdown or table-style row separator using ascii-art */ static void print_row_separator( ShellState *p, int nArg, const char *zSep ){ int i; if( nArg>0 ){ fputs(zSep, p->out); print_dashes(p->out, p->actualWidth[0]+2); for(i=1; i<nArg; i++){ fputs(zSep, p->out); print_dashes(p->out, p->actualWidth[i]+2); } fputs(zSep, p->out); } fputs("\n", p->out); } /* ** This is the callback routine that the shell ** invokes for each row of a query result. */ static int shell_callback( |
︙ | ︙ | |||
20404 20405 20406 20407 20408 20409 20410 | case MODE_Line: { int w = 5; if( azArg==0 ) break; for(i=0; i<nArg; i++){ int len = strlen30(azCol[i] ? azCol[i] : ""); if( len>w ) w = len; } | | | | | 18470 18471 18472 18473 18474 18475 18476 18477 18478 18479 18480 18481 18482 18483 18484 18485 18486 18487 | case MODE_Line: { int w = 5; if( azArg==0 ) break; for(i=0; i<nArg; i++){ int len = strlen30(azCol[i] ? azCol[i] : ""); if( len>w ) w = len; } if( p->cnt++>0 ) utf8_printf(p->out, "%s", p->rowSeparator); for(i=0; i<nArg; i++){ utf8_printf(p->out,"%*s = %s%s", w, azCol[i], azArg[i] ? azArg[i] : p->nullValue, p->rowSeparator); } break; } case MODE_ScanExp: case MODE_Explain: { static const int aExplainWidth[] = {4, 13, 4, 4, 4, 13, 2, 13}; static const int aExplainMap[] = {0, 1, 2, 3, 4, 5, 6, 7 }; |
︙ | ︙ | |||
20434 20435 20436 20437 20438 20439 20440 | iIndent = 3; } if( nArg>nWidth ) nArg = nWidth; /* If this is the first row seen, print out the headers */ if( p->cnt++==0 ){ for(i=0; i<nArg; i++){ | | | | | | | | | | | 18500 18501 18502 18503 18504 18505 18506 18507 18508 18509 18510 18511 18512 18513 18514 18515 18516 18517 18518 18519 18520 18521 18522 18523 18524 18525 18526 18527 18528 18529 18530 18531 18532 18533 18534 18535 18536 18537 18538 18539 18540 18541 18542 18543 18544 18545 18546 18547 18548 18549 18550 18551 18552 18553 18554 18555 18556 18557 18558 18559 18560 18561 18562 | iIndent = 3; } if( nArg>nWidth ) nArg = nWidth; /* If this is the first row seen, print out the headers */ if( p->cnt++==0 ){ for(i=0; i<nArg; i++){ utf8_width_print(p->out, aWidth[i], azCol[ aMap[i] ]); fputs(i==nArg-1 ? "\n" : " ", p->out); } for(i=0; i<nArg; i++){ print_dashes(p->out, aWidth[i]); fputs(i==nArg-1 ? "\n" : " ", p->out); } } /* If there is no data, exit early. */ if( azArg==0 ) break; for(i=0; i<nArg; i++){ const char *zSep = " "; int w = aWidth[i]; const char *zVal = azArg[ aMap[i] ]; if( i==nArg-1 ) w = 0; if( zVal && strlenChar(zVal)>w ){ w = strlenChar(zVal); zSep = " "; } if( i==iIndent && p->aiIndent && p->pStmt ){ if( p->iIndent<p->nIndent ){ utf8_printf(p->out, "%*.s", p->aiIndent[p->iIndent], ""); } p->iIndent++; } utf8_width_print(p->out, w, zVal ? zVal : p->nullValue); fputs(i==nArg-1 ? 
"\n" : zSep, p->out); } break; } case MODE_Semi: { /* .schema and .fullschema output */ printSchemaLine(p->out, azArg[0], ";\n"); break; } case MODE_Pretty: { /* .schema and .fullschema with --indent */ char *z; int j; int nParen = 0; char cEnd = 0; char c; int nLine = 0; assert( nArg==1 ); if( azArg[0]==0 ) break; if( sqlite3_strlike("CREATE VIEW%", azArg[0], 0)==0 || sqlite3_strlike("CREATE TRIG%", azArg[0], 0)==0 ){ utf8_printf(p->out, "%s;\n", azArg[0]); break; } z = sqlite3_mprintf("%s", azArg[0]); shell_check_oom(z); j = 0; for(i=0; IsSpace(z[i]); i++){} for(; (c = z[i])!=0; i++){ |
︙ | ︙ | |||
20515 20516 20517 20518 20519 20520 20521 | }else if( c=='-' && z[i+1]=='-' ){ cEnd = '\n'; }else if( c=='(' ){ nParen++; }else if( c==')' ){ nParen--; if( nLine>0 && nParen==0 && j>0 ){ | | | | > | | > | > > > | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 18581 18582 18583 18584 18585 18586 18587 18588 18589 18590 18591 18592 18593 18594 18595 18596 18597 18598 18599 18600 18601 18602 18603 18604 18605 18606 18607 18608 18609 18610 18611 18612 18613 18614 18615 18616 18617 18618 18619 18620 18621 18622 18623 18624 18625 18626 18627 18628 18629 18630 18631 18632 18633 18634 18635 18636 18637 18638 18639 18640 18641 18642 18643 18644 18645 18646 18647 18648 18649 18650 18651 18652 18653 18654 18655 18656 18657 18658 18659 18660 18661 18662 18663 18664 18665 18666 18667 18668 18669 18670 18671 18672 18673 18674 18675 18676 18677 18678 18679 18680 18681 18682 18683 18684 18685 18686 18687 18688 18689 18690 18691 18692 18693 18694 18695 18696 18697 18698 18699 18700 18701 18702 18703 18704 18705 18706 18707 18708 18709 18710 18711 18712 18713 18714 18715 18716 18717 18718 18719 18720 18721 18722 18723 18724 18725 18726 18727 18728 18729 18730 18731 18732 18733 18734 18735 18736 18737 18738 18739 18740 18741 18742 18743 18744 18745 18746 18747 18748 18749 18750 18751 18752 18753 18754 18755 18756 18757 18758 18759 18760 18761 18762 18763 18764 18765 18766 18767 18768 18769 18770 18771 18772 18773 18774 18775 18776 18777 18778 18779 18780 18781 18782 18783 18784 18785 18786 18787 18788 18789 18790 18791 18792 18793 18794 18795 18796 18797 18798 18799 18800 18801 18802 18803 18804 18805 18806 18807 18808 18809 18810 18811 18812 18813 18814 18815 18816 18817 18818 18819 18820 18821 18822 18823 18824 18825 18826 18827 18828 18829 18830 18831 18832 18833 18834 18835 18836 18837 18838 18839 18840 18841 18842 18843 18844 | }else if( c=='-' && z[i+1]=='-' ){ cEnd = '\n'; }else if( c=='(' ){ 
nParen++; }else if( c==')' ){ nParen--; if( nLine>0 && nParen==0 && j>0 ){ printSchemaLineN(p->out, z, j, "\n"); j = 0; } } z[j++] = c; if( nParen==1 && cEnd==0 && (c=='(' || c=='\n' || (c==',' && !wsToEol(z+i+1))) ){ if( c=='\n' ) j--; printSchemaLineN(p->out, z, j, "\n "); j = 0; nLine++; while( IsSpace(z[i+1]) ){ i++; } } } z[j] = 0; } printSchemaLine(p->out, z, ";\n"); sqlite3_free(z); break; } case MODE_List: { if( p->cnt++==0 && p->showHeader ){ for(i=0; i<nArg; i++){ utf8_printf(p->out,"%s%s",azCol[i], i==nArg-1 ? p->rowSeparator : p->colSeparator); } } if( azArg==0 ) break; for(i=0; i<nArg; i++){ char *z = azArg[i]; if( z==0 ) z = p->nullValue; utf8_printf(p->out, "%s", z); if( i<nArg-1 ){ utf8_printf(p->out, "%s", p->colSeparator); }else{ utf8_printf(p->out, "%s", p->rowSeparator); } } break; } case MODE_Html: { if( p->cnt++==0 && p->showHeader ){ raw_printf(p->out,"<TR>"); for(i=0; i<nArg; i++){ raw_printf(p->out,"<TH>"); output_html_string(p->out, azCol[i]); raw_printf(p->out,"</TH>\n"); } raw_printf(p->out,"</TR>\n"); } if( azArg==0 ) break; raw_printf(p->out,"<TR>"); for(i=0; i<nArg; i++){ raw_printf(p->out,"<TD>"); output_html_string(p->out, azArg[i] ? azArg[i] : p->nullValue); raw_printf(p->out,"</TD>\n"); } raw_printf(p->out,"</TR>\n"); break; } case MODE_Tcl: { if( p->cnt++==0 && p->showHeader ){ for(i=0; i<nArg; i++){ output_c_string(p->out,azCol[i] ? azCol[i] : ""); if(i<nArg-1) utf8_printf(p->out, "%s", p->colSeparator); } utf8_printf(p->out, "%s", p->rowSeparator); } if( azArg==0 ) break; for(i=0; i<nArg; i++){ output_c_string(p->out, azArg[i] ? azArg[i] : p->nullValue); if(i<nArg-1) utf8_printf(p->out, "%s", p->colSeparator); } utf8_printf(p->out, "%s", p->rowSeparator); break; } case MODE_Csv: { setBinaryMode(p->out, 1); if( p->cnt++==0 && p->showHeader ){ for(i=0; i<nArg; i++){ output_csv(p, azCol[i] ? 
azCol[i] : "", i<nArg-1); } utf8_printf(p->out, "%s", p->rowSeparator); } if( nArg>0 ){ for(i=0; i<nArg; i++){ output_csv(p, azArg[i], i<nArg-1); } utf8_printf(p->out, "%s", p->rowSeparator); } setTextMode(p->out, 1); break; } case MODE_Insert: { if( azArg==0 ) break; utf8_printf(p->out,"INSERT INTO %s",p->zDestTable); if( p->showHeader ){ raw_printf(p->out,"("); for(i=0; i<nArg; i++){ if( i>0 ) raw_printf(p->out, ","); if( quoteChar(azCol[i]) ){ char *z = sqlite3_mprintf("\"%w\"", azCol[i]); shell_check_oom(z); utf8_printf(p->out, "%s", z); sqlite3_free(z); }else{ raw_printf(p->out, "%s", azCol[i]); } } raw_printf(p->out,")"); } p->cnt++; for(i=0; i<nArg; i++){ raw_printf(p->out, i>0 ? "," : " VALUES("); if( (azArg[i]==0) || (aiType && aiType[i]==SQLITE_NULL) ){ utf8_printf(p->out,"NULL"); }else if( aiType && aiType[i]==SQLITE_TEXT ){ if( ShellHasFlag(p, SHFLG_Newlines) ){ output_quoted_string(p->out, azArg[i]); }else{ output_quoted_escaped_string(p->out, azArg[i]); } }else if( aiType && aiType[i]==SQLITE_INTEGER ){ utf8_printf(p->out,"%s", azArg[i]); }else if( aiType && aiType[i]==SQLITE_FLOAT ){ char z[50]; double r = sqlite3_column_double(p->pStmt, i); sqlite3_uint64 ur; memcpy(&ur,&r,sizeof(r)); if( ur==0x7ff0000000000000LL ){ raw_printf(p->out, "9.0e+999"); }else if( ur==0xfff0000000000000LL ){ raw_printf(p->out, "-9.0e+999"); }else{ sqlite3_int64 ir = (sqlite3_int64)r; if( r==(double)ir ){ sqlite3_snprintf(50,z,"%lld.0", ir); }else{ sqlite3_snprintf(50,z,"%!.20g", r); } raw_printf(p->out, "%s", z); } }else if( aiType && aiType[i]==SQLITE_BLOB && p->pStmt ){ const void *pBlob = sqlite3_column_blob(p->pStmt, i); int nBlob = sqlite3_column_bytes(p->pStmt, i); output_hex_blob(p->out, pBlob, nBlob); }else if( isNumber(azArg[i], 0) ){ utf8_printf(p->out,"%s", azArg[i]); }else if( ShellHasFlag(p, SHFLG_Newlines) ){ output_quoted_string(p->out, azArg[i]); }else{ output_quoted_escaped_string(p->out, azArg[i]); } } raw_printf(p->out,");\n"); break; } case MODE_Json: { 
if( azArg==0 ) break; if( p->cnt==0 ){ fputs("[{", p->out); }else{ fputs(",\n{", p->out); } p->cnt++; for(i=0; i<nArg; i++){ output_json_string(p->out, azCol[i], -1); putc(':', p->out); if( (azArg[i]==0) || (aiType && aiType[i]==SQLITE_NULL) ){ fputs("null",p->out); }else if( aiType && aiType[i]==SQLITE_FLOAT ){ char z[50]; double r = sqlite3_column_double(p->pStmt, i); sqlite3_uint64 ur; memcpy(&ur,&r,sizeof(r)); if( ur==0x7ff0000000000000LL ){ raw_printf(p->out, "9.0e+999"); }else if( ur==0xfff0000000000000LL ){ raw_printf(p->out, "-9.0e+999"); }else{ sqlite3_snprintf(50,z,"%!.20g", r); raw_printf(p->out, "%s", z); } }else if( aiType && aiType[i]==SQLITE_BLOB && p->pStmt ){ const void *pBlob = sqlite3_column_blob(p->pStmt, i); int nBlob = sqlite3_column_bytes(p->pStmt, i); output_json_string(p->out, pBlob, nBlob); }else if( aiType && aiType[i]==SQLITE_TEXT ){ output_json_string(p->out, azArg[i], -1); }else{ utf8_printf(p->out,"%s", azArg[i]); } if( i<nArg-1 ){ putc(',', p->out); } } putc('}', p->out); break; } case MODE_Quote: { if( azArg==0 ) break; if( p->cnt==0 && p->showHeader ){ for(i=0; i<nArg; i++){ if( i>0 ) fputs(p->colSeparator, p->out); output_quoted_string(p->out, azCol[i]); } fputs(p->rowSeparator, p->out); } p->cnt++; for(i=0; i<nArg; i++){ if( i>0 ) fputs(p->colSeparator, p->out); if( (azArg[i]==0) || (aiType && aiType[i]==SQLITE_NULL) ){ utf8_printf(p->out,"NULL"); }else if( aiType && aiType[i]==SQLITE_TEXT ){ output_quoted_string(p->out, azArg[i]); }else if( aiType && aiType[i]==SQLITE_INTEGER ){ utf8_printf(p->out,"%s", azArg[i]); }else if( aiType && aiType[i]==SQLITE_FLOAT ){ char z[50]; double r = sqlite3_column_double(p->pStmt, i); sqlite3_snprintf(50,z,"%!.20g", r); raw_printf(p->out, "%s", z); }else if( aiType && aiType[i]==SQLITE_BLOB && p->pStmt ){ const void *pBlob = sqlite3_column_blob(p->pStmt, i); int nBlob = sqlite3_column_bytes(p->pStmt, i); output_hex_blob(p->out, pBlob, nBlob); }else if( isNumber(azArg[i], 0) ){ 
utf8_printf(p->out,"%s", azArg[i]); }else{ output_quoted_string(p->out, azArg[i]); } } fputs(p->rowSeparator, p->out); break; } case MODE_Ascii: { if( p->cnt++==0 && p->showHeader ){ for(i=0; i<nArg; i++){ if( i>0 ) utf8_printf(p->out, "%s", p->colSeparator); utf8_printf(p->out,"%s",azCol[i] ? azCol[i] : ""); } utf8_printf(p->out, "%s", p->rowSeparator); } if( azArg==0 ) break; for(i=0; i<nArg; i++){ if( i>0 ) utf8_printf(p->out, "%s", p->colSeparator); utf8_printf(p->out,"%s",azArg[i] ? azArg[i] : p->nullValue); } utf8_printf(p->out, "%s", p->rowSeparator); break; } case MODE_EQP: { eqp_append(p, atoi(azArg[0]), atoi(azArg[1]), azArg[3]); break; } } |
︙ | ︙ | |||
      "INSERT INTO [_shell$self]\n"
      "  VALUES('run','PRAGMA integrity_check','ok');\n"
      "INSERT INTO selftest(tno,op,cmd,ans)"
      " SELECT rowid*10,op,cmd,ans FROM [_shell$self];\n"
      "DROP TABLE [_shell$self];"
      ,0,0,&zErrMsg);
    if( zErrMsg ){
      utf8_printf(stderr, "SELFTEST initialization failure: %s\n", zErrMsg);
      sqlite3_free(zErrMsg);
    }
    sqlite3_exec(p->db, "RELEASE selftest_init",0,0,0);
  }

/*
︙
20941 20942 20943 20944 20945 20946 20947 | int rc; int nResult; int i; const char *z; rc = sqlite3_prepare_v2(p->db, zSelect, -1, &pSelect, 0); if( rc!=SQLITE_OK || !pSelect ){ char *zContext = shell_error_context(zSelect, p->db); | | | | | | | | > | 19012 19013 19014 19015 19016 19017 19018 19019 19020 19021 19022 19023 19024 19025 19026 19027 19028 19029 19030 19031 19032 19033 19034 19035 19036 19037 19038 19039 19040 19041 19042 19043 19044 19045 19046 19047 19048 19049 19050 19051 19052 | int rc; int nResult; int i; const char *z; rc = sqlite3_prepare_v2(p->db, zSelect, -1, &pSelect, 0); if( rc!=SQLITE_OK || !pSelect ){ char *zContext = shell_error_context(zSelect, p->db); utf8_printf(p->out, "/**** ERROR: (%d) %s *****/\n%s", rc, sqlite3_errmsg(p->db), zContext); sqlite3_free(zContext); if( (rc&0xff)!=SQLITE_CORRUPT ) p->nErr++; return rc; } rc = sqlite3_step(pSelect); nResult = sqlite3_column_count(pSelect); while( rc==SQLITE_ROW ){ z = (const char*)sqlite3_column_text(pSelect, 0); utf8_printf(p->out, "%s", z); for(i=1; i<nResult; i++){ utf8_printf(p->out, ",%s", sqlite3_column_text(pSelect, i)); } if( z==0 ) z = ""; while( z[0] && (z[0]!='-' || z[1]!='-') ) z++; if( z[0] ){ raw_printf(p->out, "\n;\n"); }else{ raw_printf(p->out, ";\n"); } rc = sqlite3_step(pSelect); } rc = sqlite3_finalize(pSelect); if( rc!=SQLITE_OK ){ utf8_printf(p->out, "/**** ERROR: (%d) %s *****/\n", rc, sqlite3_errmsg(p->db)); if( (rc&0xff)!=SQLITE_CORRUPT ) p->nErr++; } return rc; } /* ** Allocate space and save off string indicating current error. |
︙
  return zErr;
}

#ifdef __linux__
/*
** Attempt to display I/O stats on Linux using /proc/PID/io
*/
static void displayLinuxIoStats(FILE *out){
  FILE *in;
  char z[200];
  sqlite3_snprintf(sizeof(z), z, "/proc/%d/io", getpid());
  in = fopen(z, "rb");
  if( in==0 ) return;
  while( fgets(z, sizeof(z), in)!=0 ){
    static const struct {
︙
21025 21026 21027 21028 21029 21030 21031 | { "write_bytes: ", "Bytes written to storage:" }, { "cancelled_write_bytes: ", "Cancelled write bytes:" }, }; int i; for(i=0; i<ArraySize(aTrans); i++){ int n = strlen30(aTrans[i].zPattern); if( cli_strncmp(aTrans[i].zPattern, z, n)==0 ){ | | > | > > | | | | | | | | | | | | | | > | > | > | > | > | > | | | | | > | > | | | | > | | | | | | 19097 19098 19099 19100 19101 19102 19103 19104 19105 19106 19107 19108 19109 19110 19111 19112 19113 19114 19115 19116 19117 19118 19119 19120 19121 19122 19123 19124 19125 19126 19127 19128 19129 19130 19131 19132 19133 19134 19135 19136 19137 19138 19139 19140 19141 19142 19143 19144 19145 19146 19147 19148 19149 19150 19151 19152 19153 19154 19155 19156 19157 19158 19159 19160 19161 19162 19163 19164 19165 19166 19167 19168 19169 19170 19171 19172 19173 19174 19175 19176 19177 19178 19179 19180 19181 19182 19183 19184 19185 19186 19187 19188 19189 19190 19191 19192 19193 19194 19195 19196 19197 19198 19199 19200 19201 19202 19203 19204 19205 19206 19207 19208 19209 19210 19211 19212 19213 19214 19215 19216 19217 19218 19219 19220 19221 19222 19223 19224 19225 19226 19227 19228 19229 19230 19231 19232 19233 19234 19235 19236 19237 19238 19239 19240 19241 19242 19243 19244 19245 19246 19247 19248 19249 19250 19251 19252 19253 19254 19255 19256 19257 19258 19259 19260 19261 19262 19263 19264 19265 19266 19267 19268 19269 19270 19271 19272 19273 19274 19275 19276 19277 19278 19279 19280 19281 19282 19283 19284 19285 19286 | { "write_bytes: ", "Bytes written to storage:" }, { "cancelled_write_bytes: ", "Cancelled write bytes:" }, }; int i; for(i=0; i<ArraySize(aTrans); i++){ int n = strlen30(aTrans[i].zPattern); if( cli_strncmp(aTrans[i].zPattern, z, n)==0 ){ utf8_printf(out, "%-36s %s", aTrans[i].zDesc, &z[n]); break; } } } fclose(in); } #endif /* ** Display a single line of status using 64-bit values. 
*/ static void displayStatLine( ShellState *p, /* The shell context */ char *zLabel, /* Label for this one line */ char *zFormat, /* Format for the result */ int iStatusCtrl, /* Which status to display */ int bReset /* True to reset the stats */ ){ sqlite3_int64 iCur = -1; sqlite3_int64 iHiwtr = -1; int i, nPercent; char zLine[200]; sqlite3_status64(iStatusCtrl, &iCur, &iHiwtr, bReset); for(i=0, nPercent=0; zFormat[i]; i++){ if( zFormat[i]=='%' ) nPercent++; } if( nPercent>1 ){ sqlite3_snprintf(sizeof(zLine), zLine, zFormat, iCur, iHiwtr); }else{ sqlite3_snprintf(sizeof(zLine), zLine, zFormat, iHiwtr); } raw_printf(p->out, "%-36s %s\n", zLabel, zLine); } /* ** Display memory stats. */ static int display_stats( sqlite3 *db, /* Database to query */ ShellState *pArg, /* Pointer to ShellState */ int bReset /* True to reset the stats */ ){ int iCur; int iHiwtr; FILE *out; if( pArg==0 || pArg->out==0 ) return 0; out = pArg->out; if( pArg->pStmt && pArg->statsOn==2 ){ int nCol, i, x; sqlite3_stmt *pStmt = pArg->pStmt; char z[100]; nCol = sqlite3_column_count(pStmt); raw_printf(out, "%-36s %d\n", "Number of output columns:", nCol); for(i=0; i<nCol; i++){ sqlite3_snprintf(sizeof(z),z,"Column %d %nname:", i, &x); utf8_printf(out, "%-36s %s\n", z, sqlite3_column_name(pStmt,i)); #ifndef SQLITE_OMIT_DECLTYPE sqlite3_snprintf(30, z+x, "declared type:"); utf8_printf(out, "%-36s %s\n", z, sqlite3_column_decltype(pStmt, i)); #endif #ifdef SQLITE_ENABLE_COLUMN_METADATA sqlite3_snprintf(30, z+x, "database name:"); utf8_printf(out, "%-36s %s\n", z, sqlite3_column_database_name(pStmt,i)); sqlite3_snprintf(30, z+x, "table name:"); utf8_printf(out, "%-36s %s\n", z, sqlite3_column_table_name(pStmt,i)); sqlite3_snprintf(30, z+x, "origin name:"); utf8_printf(out, "%-36s %s\n", z, sqlite3_column_origin_name(pStmt,i)); #endif } } if( pArg->statsOn==3 ){ if( pArg->pStmt ){ iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_VM_STEP,bReset); raw_printf(pArg->out, "VM-steps: %d\n", iCur); 
} return 0; } displayStatLine(pArg, "Memory Used:", "%lld (max %lld) bytes", SQLITE_STATUS_MEMORY_USED, bReset); displayStatLine(pArg, "Number of Outstanding Allocations:", "%lld (max %lld)", SQLITE_STATUS_MALLOC_COUNT, bReset); if( pArg->shellFlgs & SHFLG_Pagecache ){ displayStatLine(pArg, "Number of Pcache Pages Used:", "%lld (max %lld) pages", SQLITE_STATUS_PAGECACHE_USED, bReset); } displayStatLine(pArg, "Number of Pcache Overflow Bytes:", "%lld (max %lld) bytes", SQLITE_STATUS_PAGECACHE_OVERFLOW, bReset); displayStatLine(pArg, "Largest Allocation:", "%lld bytes", SQLITE_STATUS_MALLOC_SIZE, bReset); displayStatLine(pArg, "Largest Pcache Allocation:", "%lld bytes", SQLITE_STATUS_PAGECACHE_SIZE, bReset); #ifdef YYTRACKMAXSTACKDEPTH displayStatLine(pArg, "Deepest Parser Stack:", "%lld (max %lld)", SQLITE_STATUS_PARSER_STACK, bReset); #endif if( db ){ if( pArg->shellFlgs & SHFLG_Lookaside ){ iHiwtr = iCur = -1; sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_USED, &iCur, &iHiwtr, bReset); raw_printf(pArg->out, "Lookaside Slots Used: %d (max %d)\n", iCur, iHiwtr); sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_HIT, &iCur, &iHiwtr, bReset); raw_printf(pArg->out, "Successful lookaside attempts: %d\n", iHiwtr); sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE, &iCur, &iHiwtr, bReset); raw_printf(pArg->out, "Lookaside failures due to size: %d\n", iHiwtr); sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL, &iCur, &iHiwtr, bReset); raw_printf(pArg->out, "Lookaside failures due to OOM: %d\n", iHiwtr); } iHiwtr = iCur = -1; sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_USED, &iCur, &iHiwtr, bReset); raw_printf(pArg->out, "Pager Heap Usage: %d bytes\n", iCur); iHiwtr = iCur = -1; sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_HIT, &iCur, &iHiwtr, 1); raw_printf(pArg->out, "Page cache hits: %d\n", iCur); iHiwtr = iCur = -1; sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_MISS, &iCur, &iHiwtr, 1); raw_printf(pArg->out, "Page cache misses: %d\n", iCur); iHiwtr = iCur = 
-1; sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_WRITE, &iCur, &iHiwtr, 1); raw_printf(pArg->out, "Page cache writes: %d\n", iCur); iHiwtr = iCur = -1; sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_SPILL, &iCur, &iHiwtr, 1); raw_printf(pArg->out, "Page cache spills: %d\n", iCur); iHiwtr = iCur = -1; sqlite3_db_status(db, SQLITE_DBSTATUS_SCHEMA_USED, &iCur, &iHiwtr, bReset); raw_printf(pArg->out, "Schema Heap Usage: %d bytes\n", iCur); iHiwtr = iCur = -1; sqlite3_db_status(db, SQLITE_DBSTATUS_STMT_USED, &iCur, &iHiwtr, bReset); raw_printf(pArg->out, "Statement Heap/Lookaside Usage: %d bytes\n", iCur); } if( pArg->pStmt ){ int iHit, iMiss; iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_FULLSCAN_STEP, bReset); raw_printf(pArg->out, "Fullscan Steps: %d\n", iCur); iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_SORT, bReset); raw_printf(pArg->out, "Sort Operations: %d\n", iCur); iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_AUTOINDEX,bReset); raw_printf(pArg->out, "Autoindex Inserts: %d\n", iCur); iHit = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_FILTER_HIT, bReset); iMiss = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_FILTER_MISS, bReset); if( iHit || iMiss ){ raw_printf(pArg->out, "Bloom filter bypass taken: %d/%d\n", iHit, iHit+iMiss); } iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_VM_STEP, bReset); raw_printf(pArg->out, "Virtual Machine Steps: %d\n", iCur); iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_REPREPARE,bReset); raw_printf(pArg->out, "Reprepare operations: %d\n", iCur); iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_RUN, bReset); raw_printf(pArg->out, "Number of times run: %d\n", iCur); iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_MEMUSED, bReset); raw_printf(pArg->out, "Memory used by prepared stmt: %d\n", iCur); } #ifdef __linux__ displayLinuxIoStats(pArg->out); #endif /* Do not remove this machine readable comment: extra-stats-output-here */ return 0; } |
︙
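The displayStatLine() routine in the hunk above picks between a one-value and a two-value format by counting the '%' conversions in the caller-supplied format string: "%lld (max %lld)" gets both the current value and the high-water mark, while "%lld bytes" gets only the high-water mark. A stdlib-only sketch; `stat_line` is a hypothetical buffer-writing variant:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Count '%' conversions in zFmt to decide how many values it consumes,
** then format into the caller's buffer. */
static void stat_line(char *zOut, size_t nOut, const char *zFmt,
                      long long iCur, long long iHiwtr){
  int i, nPercent = 0;
  for(i=0; zFmt[i]; i++){
    if( zFmt[i]=='%' ) nPercent++;
  }
  if( nPercent>1 ){
    snprintf(zOut, nOut, zFmt, iCur, iHiwtr);  /* e.g. "%lld (max %lld)" */
  }else{
    snprintf(zOut, nOut, zFmt, iHiwtr);        /* e.g. "%lld bytes" */
  }
}
```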
  ShellState *pArg                /* Pointer to ShellState */
){
#ifndef SQLITE_ENABLE_STMT_SCANSTATUS
  UNUSED_PARAMETER(db);
  UNUSED_PARAMETER(pArg);
#else
  if( pArg->scanstatsOn==3 ){
    const char *zSql =
      " SELECT addr, opcode, p1, p2, p3, p4, p5, comment, nexec,"
      "   round(ncycle*100.0 / (sum(ncycle) OVER ()), 2)||'%' AS cycles"
      "   FROM bytecode(?)";

    int rc = SQLITE_OK;
    sqlite3_stmt *pStmt = 0;
    rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0);
︙
21569 21570 21571 21572 21573 21574 21575 | #define BOX_234 "\342\224\254" /* U+252c -,- */ #define BOX_124 "\342\224\264" /* U+2534 -'- */ #define BOX_1234 "\342\224\274" /* U+253c -|- */ /* Draw horizontal line N characters long using unicode box ** characters */ | | | | | | | | | | | 19653 19654 19655 19656 19657 19658 19659 19660 19661 19662 19663 19664 19665 19666 19667 19668 19669 19670 19671 19672 19673 19674 19675 19676 19677 19678 19679 19680 19681 19682 19683 19684 19685 19686 19687 19688 19689 19690 19691 19692 19693 19694 19695 19696 19697 19698 19699 19700 | #define BOX_234 "\342\224\254" /* U+252c -,- */ #define BOX_124 "\342\224\264" /* U+2534 -'- */ #define BOX_1234 "\342\224\274" /* U+253c -|- */ /* Draw horizontal line N characters long using unicode box ** characters */ static void print_box_line(FILE *out, int N){ const char zDash[] = BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24; const int nDash = sizeof(zDash) - 1; N *= 3; while( N>nDash ){ utf8_printf(out, zDash); N -= nDash; } utf8_printf(out, "%.*s", N, zDash); } /* ** Draw a horizontal separator for a MODE_Box table. */ static void print_box_row_separator( ShellState *p, int nArg, const char *zSep1, const char *zSep2, const char *zSep3 ){ int i; if( nArg>0 ){ utf8_printf(p->out, "%s", zSep1); print_box_line(p->out, p->actualWidth[0]+2); for(i=1; i<nArg; i++){ utf8_printf(p->out, "%s", zSep2); print_box_line(p->out, p->actualWidth[i]+2); } utf8_printf(p->out, "%s", zSep3); } fputs("\n", p->out); } /* ** z[] is a line of text that is to be displayed the .mode box or table or ** similar tabular formats. z[] might contain control characters such ** as \n, \t, \f, or \r. ** |
︙
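print_box_line() in the hunk above draws a horizontal rule by draining a fixed buffer of U+2500 glyphs in chunks; each glyph is 3 UTF-8 bytes, which is why the routine multiplies N by 3. A stdlib-only sketch; `box_line` is a hypothetical variant that writes into a caller buffer (which must hold at least N*3+1 bytes):

```c
#include <assert.h>
#include <string.h>

#define BOX_24 "\342\224\200"   /* U+2500, horizontal box-drawing dash */

/* Emit a rule N cells wide: repeat a fixed dash buffer while more than a
** full chunk remains, then append the final partial chunk byte-exactly. */
static void box_line(char *zOut, int N){
  const char zDash[] =
      BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24 BOX_24;
  const int nDash = (int)sizeof(zDash) - 1;   /* bytes, excluding the NUL */
  int n = N*3;                                /* 3 UTF-8 bytes per glyph */
  zOut[0] = 0;
  while( n>nDash ){
    strcat(zOut, zDash);
    n -= nDash;
  }
  strncat(zOut, zDash, (size_t)n);
}
```

Cutting the tail at a multiple of 3 bytes keeps the output valid UTF-8, since every glyph in the buffer is exactly 3 bytes long.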
21771 21772 21773 21774 21775 21776 21777 | int bw = p->cmOpts.bWordWrap; const char *zEmpty = ""; const char *zShowNull = p->nullValue; rc = sqlite3_step(pStmt); if( rc!=SQLITE_ROW ) return; nColumn = sqlite3_column_count(pStmt); | < | 19855 19856 19857 19858 19859 19860 19861 19862 19863 19864 19865 19866 19867 19868 | int bw = p->cmOpts.bWordWrap; const char *zEmpty = ""; const char *zShowNull = p->nullValue; rc = sqlite3_step(pStmt); if( rc!=SQLITE_ROW ) return; nColumn = sqlite3_column_count(pStmt); nAlloc = nColumn*4; if( nAlloc<=0 ) nAlloc = 1; azData = sqlite3_malloc64( nAlloc*sizeof(char*) ); shell_check_oom(azData); azNextLine = sqlite3_malloc64( nColumn*sizeof(char*) ); shell_check_oom(azNextLine); memset((void*)azNextLine, 0, nColumn*sizeof(char*) ); |
︙
21857 21858 21859 21860 21861 21862 21863 21864 21865 21866 21867 21868 21869 21870 21871 | z = azData[i]; if( z==0 ) z = (char*)zEmpty; n = strlenChar(z); j = i%nColumn; if( n>p->actualWidth[j] ) p->actualWidth[j] = n; } if( seenInterrupt ) goto columnar_end; switch( p->cMode ){ case MODE_Column: { colSep = " "; rowSep = "\n"; if( p->showHeader ){ for(i=0; i<nColumn; i++){ w = p->actualWidth[i]; if( p->colWidth[i]<0 ) w = -w; | > | | | | | | | | | | | | | | | | | 19940 19941 19942 19943 19944 19945 19946 19947 19948 19949 19950 19951 19952 19953 19954 19955 19956 19957 19958 19959 19960 19961 19962 19963 19964 19965 19966 19967 19968 19969 19970 19971 19972 19973 19974 19975 19976 19977 19978 19979 19980 19981 19982 19983 19984 19985 19986 19987 19988 19989 19990 19991 19992 19993 19994 19995 19996 19997 19998 19999 20000 20001 20002 20003 20004 20005 20006 20007 20008 20009 20010 20011 20012 20013 20014 20015 20016 20017 20018 20019 20020 20021 20022 20023 20024 20025 20026 20027 20028 20029 20030 20031 20032 20033 20034 20035 20036 20037 20038 20039 20040 20041 20042 20043 20044 20045 20046 20047 20048 20049 | z = azData[i]; if( z==0 ) z = (char*)zEmpty; n = strlenChar(z); j = i%nColumn; if( n>p->actualWidth[j] ) p->actualWidth[j] = n; } if( seenInterrupt ) goto columnar_end; if( nColumn==0 ) goto columnar_end; switch( p->cMode ){ case MODE_Column: { colSep = " "; rowSep = "\n"; if( p->showHeader ){ for(i=0; i<nColumn; i++){ w = p->actualWidth[i]; if( p->colWidth[i]<0 ) w = -w; utf8_width_print(p->out, w, azData[i]); fputs(i==nColumn-1?"\n":" ", p->out); } for(i=0; i<nColumn; i++){ print_dashes(p->out, p->actualWidth[i]); fputs(i==nColumn-1?"\n":" ", p->out); } } break; } case MODE_Table: { colSep = " | "; rowSep = " |\n"; print_row_separator(p, nColumn, "+"); fputs("| ", p->out); for(i=0; i<nColumn; i++){ w = p->actualWidth[i]; n = strlenChar(azData[i]); utf8_printf(p->out, "%*s%s%*s", (w-n)/2, "", azData[i], (w-n+1)/2, ""); fputs(i==nColumn-1?" 
|\n":" | ", p->out); } print_row_separator(p, nColumn, "+"); break; } case MODE_Markdown: { colSep = " | "; rowSep = " |\n"; fputs("| ", p->out); for(i=0; i<nColumn; i++){ w = p->actualWidth[i]; n = strlenChar(azData[i]); utf8_printf(p->out, "%*s%s%*s", (w-n)/2, "", azData[i], (w-n+1)/2, ""); fputs(i==nColumn-1?" |\n":" | ", p->out); } print_row_separator(p, nColumn, "|"); break; } case MODE_Box: { colSep = " " BOX_13 " "; rowSep = " " BOX_13 "\n"; print_box_row_separator(p, nColumn, BOX_23, BOX_234, BOX_34); utf8_printf(p->out, BOX_13 " "); for(i=0; i<nColumn; i++){ w = p->actualWidth[i]; n = strlenChar(azData[i]); utf8_printf(p->out, "%*s%s%*s%s", (w-n)/2, "", azData[i], (w-n+1)/2, "", i==nColumn-1?" "BOX_13"\n":" "BOX_13" "); } print_box_row_separator(p, nColumn, BOX_123, BOX_1234, BOX_134); break; } } for(i=nColumn, j=0; i<nTotal; i++, j++){ if( j==0 && p->cMode!=MODE_Column ){ utf8_printf(p->out, "%s", p->cMode==MODE_Box?BOX_13" ":"| "); } z = azData[i]; if( z==0 ) z = p->nullValue; w = p->actualWidth[j]; if( p->colWidth[j]<0 ) w = -w; utf8_width_print(p->out, w, z); if( j==nColumn-1 ){ utf8_printf(p->out, "%s", rowSep); if( bMultiLineRowExists && abRowDiv[i/nColumn-1] && i+1<nTotal ){ if( p->cMode==MODE_Table ){ print_row_separator(p, nColumn, "+"); }else if( p->cMode==MODE_Box ){ print_box_row_separator(p, nColumn, BOX_123, BOX_1234, BOX_134); }else if( p->cMode==MODE_Column ){ raw_printf(p->out, "\n"); } } j = -1; if( seenInterrupt ) goto columnar_end; }else{ utf8_printf(p->out, "%s", colSep); } } if( p->cMode==MODE_Table ){ print_row_separator(p, nColumn, "+"); }else if( p->cMode==MODE_Box ){ print_box_row_separator(p, nColumn, BOX_12, BOX_124, BOX_14); } columnar_end: if( seenInterrupt ){ utf8_printf(p->out, "Interrupt\n"); } nData = (nRow+1)*nColumn; for(i=0; i<nData; i++){ z = azData[i]; if( z!=zEmpty && z!=zShowNull ) free(azData[i]); } sqlite3_free(azData); |
︙
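The Table, Markdown, and Box modes in the hunk above center each header cell with `"%*s%s%*s"`, putting (w-n)/2 spaces on the left and (w-n+1)/2 on the right so any odd leftover space goes to the right. A stdlib-only sketch; `center_cell` is a hypothetical helper, the width guard is added for this sketch, and plain strlen() stands in for the shell's strlenChar() (which counts display columns):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Center z in a field of w columns using the shell's split-padding idiom. */
static void center_cell(char *zOut, size_t nOut, const char *z, int w){
  int n = (int)strlen(z);
  if( w<n ) w = n;               /* guard added for this sketch */
  snprintf(zOut, nOut, "%*s%s%*s", (w-n)/2, "", z, (w-n+1)/2, "");
}
```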
22090 22091 22092 22093 22094 22095 22096 22097 22098 22099 22100 22101 22102 22103 22104 22105 | char **pzErr ){ int rc = SQLITE_OK; sqlite3expert *p = pState->expert.pExpert; assert( p ); assert( bCancel || pzErr==0 || *pzErr==0 ); if( bCancel==0 ){ int bVerbose = pState->expert.bVerbose; rc = sqlite3_expert_analyze(p, pzErr); if( rc==SQLITE_OK ){ int nQuery = sqlite3_expert_count(p); int i; if( bVerbose ){ const char *zCand = sqlite3_expert_report(p,0,EXPERT_REPORT_CANDIDATES); | > | | | | | | | 20174 20175 20176 20177 20178 20179 20180 20181 20182 20183 20184 20185 20186 20187 20188 20189 20190 20191 20192 20193 20194 20195 20196 20197 20198 20199 20200 20201 20202 20203 20204 20205 20206 20207 20208 20209 20210 20211 | char **pzErr ){ int rc = SQLITE_OK; sqlite3expert *p = pState->expert.pExpert; assert( p ); assert( bCancel || pzErr==0 || *pzErr==0 ); if( bCancel==0 ){ FILE *out = pState->out; int bVerbose = pState->expert.bVerbose; rc = sqlite3_expert_analyze(p, pzErr); if( rc==SQLITE_OK ){ int nQuery = sqlite3_expert_count(p); int i; if( bVerbose ){ const char *zCand = sqlite3_expert_report(p,0,EXPERT_REPORT_CANDIDATES); raw_printf(out, "-- Candidates -----------------------------\n"); raw_printf(out, "%s\n", zCand); } for(i=0; i<nQuery; i++){ const char *zSql = sqlite3_expert_report(p, i, EXPERT_REPORT_SQL); const char *zIdx = sqlite3_expert_report(p, i, EXPERT_REPORT_INDEXES); const char *zEQP = sqlite3_expert_report(p, i, EXPERT_REPORT_PLAN); if( zIdx==0 ) zIdx = "(no new indexes)\n"; if( bVerbose ){ raw_printf(out, "-- Query %d --------------------------------\n",i+1); raw_printf(out, "%s\n\n", zSql); } raw_printf(out, "%s\n", zIdx); raw_printf(out, "%s\n", zEQP); } } } sqlite3_expert_destroy(p); pState->expert.pExpert = 0; return rc; } |
︙
22147 22148 22149 22150 22151 22152 22153 | if( z[0]=='-' && z[1]=='-' ) z++; n = strlen30(z); if( n>=2 && 0==cli_strncmp(z, "-verbose", n) ){ pState->expert.bVerbose = 1; } else if( n>=2 && 0==cli_strncmp(z, "-sample", n) ){ if( i==(nArg-1) ){ | | | | > | | 20232 20233 20234 20235 20236 20237 20238 20239 20240 20241 20242 20243 20244 20245 20246 20247 20248 20249 20250 20251 20252 20253 20254 20255 20256 20257 20258 20259 20260 20261 20262 20263 20264 20265 20266 | if( z[0]=='-' && z[1]=='-' ) z++; n = strlen30(z); if( n>=2 && 0==cli_strncmp(z, "-verbose", n) ){ pState->expert.bVerbose = 1; } else if( n>=2 && 0==cli_strncmp(z, "-sample", n) ){ if( i==(nArg-1) ){ raw_printf(stderr, "option requires an argument: %s\n", z); rc = SQLITE_ERROR; }else{ iSample = (int)integerValue(azArg[++i]); if( iSample<0 || iSample>100 ){ raw_printf(stderr, "value out of range: %s\n", azArg[i]); rc = SQLITE_ERROR; } } } else{ raw_printf(stderr, "unknown option: %s\n", z); rc = SQLITE_ERROR; } } if( rc==SQLITE_OK ){ pState->expert.pExpert = sqlite3_expert_new(pState->db, &zErr); if( pState->expert.pExpert==0 ){ raw_printf(stderr, "sqlite3_expert_new: %s\n", zErr ? zErr : "out of memory"); rc = SQLITE_ERROR; }else{ sqlite3_expert_config( pState->expert.pExpert, EXPERT_CONFIG_SAMPLE, iSample ); } } |
︙
22493 22494 22495 22496 22497 22498 22499 | zSql = azArg[2]; if( zTable==0 ) return 0; if( zType==0 ) return 0; dataOnly = (p->shellFlgs & SHFLG_DumpDataOnly)!=0; noSys = (p->shellFlgs & SHFLG_DumpNoSys)!=0; if( cli_strcmp(zTable, "sqlite_sequence")==0 && !noSys ){ | | | | | | | 20579 20580 20581 20582 20583 20584 20585 20586 20587 20588 20589 20590 20591 20592 20593 20594 20595 20596 20597 20598 20599 20600 20601 20602 20603 20604 20605 20606 20607 20608 20609 20610 20611 20612 20613 20614 20615 | zSql = azArg[2]; if( zTable==0 ) return 0; if( zType==0 ) return 0; dataOnly = (p->shellFlgs & SHFLG_DumpDataOnly)!=0; noSys = (p->shellFlgs & SHFLG_DumpNoSys)!=0; if( cli_strcmp(zTable, "sqlite_sequence")==0 && !noSys ){ if( !dataOnly ) raw_printf(p->out, "DELETE FROM sqlite_sequence;\n"); }else if( sqlite3_strglob("sqlite_stat?", zTable)==0 && !noSys ){ if( !dataOnly ) raw_printf(p->out, "ANALYZE sqlite_schema;\n"); }else if( cli_strncmp(zTable, "sqlite_", 7)==0 ){ return 0; }else if( dataOnly ){ /* no-op */ }else if( cli_strncmp(zSql, "CREATE VIRTUAL TABLE", 20)==0 ){ char *zIns; if( !p->writableSchema ){ raw_printf(p->out, "PRAGMA writable_schema=ON;\n"); p->writableSchema = 1; } zIns = sqlite3_mprintf( "INSERT INTO sqlite_schema(type,name,tbl_name,rootpage,sql)" "VALUES('table','%q','%q',0,'%q');", zTable, zTable, zSql); shell_check_oom(zIns); utf8_printf(p->out, "%s\n", zIns); sqlite3_free(zIns); return 0; }else{ printSchemaLine(p->out, zSql, ";\n"); } if( cli_strcmp(zType, "table")==0 ){ ShellText sSelect; ShellText sTable; char **azCol; int i; |
︙
22573 22574 22575 22576 22577 22578 22579 | savedDestTable = p->zDestTable; savedMode = p->mode; p->zDestTable = sTable.z; p->mode = p->cMode = MODE_Insert; rc = shell_exec(p, sSelect.z, 0); if( (rc&0xff)==SQLITE_CORRUPT ){ | | | 20659 20660 20661 20662 20663 20664 20665 20666 20667 20668 20669 20670 20671 20672 20673 | savedDestTable = p->zDestTable; savedMode = p->mode; p->zDestTable = sTable.z; p->mode = p->cMode = MODE_Insert; rc = shell_exec(p, sSelect.z, 0); if( (rc&0xff)==SQLITE_CORRUPT ){ raw_printf(p->out, "/****** CORRUPTION ERROR *******/\n"); toggleSelectOrder(p->db); shell_exec(p, sSelect.z, 0); toggleSelectOrder(p->db); } p->zDestTable = savedDestTable; p->mode = savedMode; freeText(&sTable); |
︙
22604 22605 22606 22607 22608 22609 22610 | ){ int rc; char *zErr = 0; rc = sqlite3_exec(p->db, zQuery, dump_callback, p, &zErr); if( rc==SQLITE_CORRUPT ){ char *zQ2; int len = strlen30(zQuery); | | | | | 20690 20691 20692 20693 20694 20695 20696 20697 20698 20699 20700 20701 20702 20703 20704 20705 20706 20707 20708 20709 20710 20711 20712 20713 20714 20715 | ){ int rc; char *zErr = 0; rc = sqlite3_exec(p->db, zQuery, dump_callback, p, &zErr); if( rc==SQLITE_CORRUPT ){ char *zQ2; int len = strlen30(zQuery); raw_printf(p->out, "/****** CORRUPTION ERROR *******/\n"); if( zErr ){ utf8_printf(p->out, "/****** %s ******/\n", zErr); sqlite3_free(zErr); zErr = 0; } zQ2 = malloc( len+100 ); if( zQ2==0 ) return rc; sqlite3_snprintf(len+100, zQ2, "%s ORDER BY rowid DESC", zQuery); rc = sqlite3_exec(p->db, zQ2, dump_callback, p, &zErr); if( rc ){ utf8_printf(p->out, "/****** ERROR: %s ******/\n", zErr); }else{ rc = SQLITE_CORRUPT; } sqlite3_free(zErr); free(zQ2); } return rc; |
︙
22739 22740 22741 22742 22743 22744 22745 | #endif #ifndef SQLITE_OMIT_TEST_CONTROL ",imposter INDEX TABLE Create imposter table TABLE on index INDEX", #endif ".indexes ?TABLE? Show names of indexes", " If TABLE is specified, only show indexes for", " tables matching TABLE using the LIKE operator.", | < | 20825 20826 20827 20828 20829 20830 20831 20832 20833 20834 20835 20836 20837 20838 | #endif #ifndef SQLITE_OMIT_TEST_CONTROL ",imposter INDEX TABLE Create imposter table TABLE on index INDEX", #endif ".indexes ?TABLE? Show names of indexes", " If TABLE is specified, only show indexes for", " tables matching TABLE using the LIKE operator.", #ifdef SQLITE_ENABLE_IOTRACE ",iotrace FILE Enable I/O diagnostic logging to FILE", #endif ".limit ?LIMIT? ?VAL? Display or change the value of an SQLITE_LIMIT", ".lint OPTIONS Report potential schema issues.", " Options:", " fkey-indexes Find missing foreign key indexes", |
︙
22972 22973 22974 22975 22976 22977 22978 | break; default: hh &= ~HH_Summary; break; } if( ((hw^hh)&HH_Undoc)==0 ){ if( (hh&HH_Summary)!=0 ){ | | | | | | | | 21057 21058 21059 21060 21061 21062 21063 21064 21065 21066 21067 21068 21069 21070 21071 21072 21073 21074 21075 21076 21077 21078 21079 21080 21081 21082 21083 21084 21085 21086 21087 21088 21089 21090 21091 21092 21093 21094 21095 21096 21097 21098 21099 21100 21101 21102 21103 21104 21105 21106 21107 21108 21109 21110 21111 21112 21113 21114 21115 | break; default: hh &= ~HH_Summary; break; } if( ((hw^hh)&HH_Undoc)==0 ){ if( (hh&HH_Summary)!=0 ){ utf8_printf(out, ".%s\n", azHelp[i]+1); ++n; }else if( (hw&HW_SummaryOnly)==0 ){ utf8_printf(out, "%s\n", azHelp[i]); } } } }else{ /* Seek documented commands for which zPattern is an exact prefix */ zPat = sqlite3_mprintf(".%s*", zPattern); shell_check_oom(zPat); for(i=0; i<ArraySize(azHelp); i++){ if( sqlite3_strglob(zPat, azHelp[i])==0 ){ utf8_printf(out, "%s\n", azHelp[i]); j = i+1; n++; } } sqlite3_free(zPat); if( n ){ if( n==1 ){ /* when zPattern is a prefix of exactly one command, then include ** the details of that command, which should begin at offset j */ while( j<ArraySize(azHelp)-1 && azHelp[j][0]==' ' ){ utf8_printf(out, "%s\n", azHelp[j]); j++; } } return n; } /* Look for documented commands that contain zPattern anywhere. ** Show complete text of all documented commands that match. */ zPat = sqlite3_mprintf("%%%s%%", zPattern); shell_check_oom(zPat); for(i=0; i<ArraySize(azHelp); i++){ if( azHelp[i][0]==',' ){ while( i<ArraySize(azHelp)-1 && azHelp[i+1][0]==' ' ) ++i; continue; } if( azHelp[i][0]=='.' ) j = i; if( sqlite3_strlike(zPat, azHelp[i], 0)==0 ){ utf8_printf(out, "%s\n", azHelp[j]); while( j<ArraySize(azHelp)-1 && azHelp[j+1][0]==' ' ){ j++; utf8_printf(out, "%s\n", azHelp[j]); } i = j; n++; } } sqlite3_free(zPat); } |
︙
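showHelp() in the hunk above finds the documented commands for which the user's text is an exact prefix by matching every help line against the glob ".<pattern>*". A sketch of that lookup; POSIX fnmatch() stands in for sqlite3_strglob() here, and `match_help` is a hypothetical name:

```c
#include <assert.h>
#include <fnmatch.h>
#include <stdio.h>
#include <string.h>

/* Count the help entries matching ".<zPattern>*" and report the first. */
static int match_help(const char *zPattern, const char **azHelp, int nHelp,
                      const char **pzFirst){
  char zPat[64];
  int i, n = 0;
  snprintf(zPat, sizeof zPat, ".%s*", zPattern);
  *pzFirst = 0;
  for(i=0; i<nHelp; i++){
    if( fnmatch(zPat, azHelp[i], 0)==0 ){
      if( *pzFirst==0 ) *pzFirst = azHelp[i];
      n++;
    }
  }
  return n;
}
```

When exactly one entry matches, the shell goes on to print that command's indented detail lines (those beginning with a space) as well.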
23054 23055 23056 23057 23058 23059 23060 | long nIn; size_t nRead; char *pBuf; int rc; if( in==0 ) return 0; rc = fseek(in, 0, SEEK_END); if( rc!=0 ){ | | | | | 21139 21140 21141 21142 21143 21144 21145 21146 21147 21148 21149 21150 21151 21152 21153 21154 21155 21156 21157 21158 21159 21160 21161 21162 21163 21164 21165 21166 21167 21168 21169 | long nIn; size_t nRead; char *pBuf; int rc; if( in==0 ) return 0; rc = fseek(in, 0, SEEK_END); if( rc!=0 ){ raw_printf(stderr, "Error: '%s' not seekable\n", zName); fclose(in); return 0; } nIn = ftell(in); rewind(in); pBuf = sqlite3_malloc64( nIn+1 ); if( pBuf==0 ){ raw_printf(stderr, "Error: out of memory\n"); fclose(in); return 0; } nRead = fread(pBuf, nIn, 1, in); fclose(in); if( nRead!=1 ){ sqlite3_free(pBuf); raw_printf(stderr, "Error: cannot read '%s'\n", zName); return 0; } pBuf[nIn] = 0; if( pnByte ) *pnByte = nIn; return pBuf; } |
︙
23191 23192 23193 23194 23195 23196 23197 | FILE *in; const char *zDbFilename = p->pAuxDb->zDbFilename; unsigned int x[16]; char zLine[1000]; if( zDbFilename ){ in = fopen(zDbFilename, "r"); if( in==0 ){ | | | | 21276 21277 21278 21279 21280 21281 21282 21283 21284 21285 21286 21287 21288 21289 21290 21291 21292 21293 21294 21295 21296 21297 21298 21299 21300 21301 21302 21303 21304 21305 21306 21307 21308 21309 21310 21311 | FILE *in; const char *zDbFilename = p->pAuxDb->zDbFilename; unsigned int x[16]; char zLine[1000]; if( zDbFilename ){ in = fopen(zDbFilename, "r"); if( in==0 ){ utf8_printf(stderr, "cannot open \"%s\" for reading\n", zDbFilename); return 0; } nLine = 0; }else{ in = p->in; nLine = p->lineno; if( in==0 ) in = stdin; } *pnData = 0; nLine++; if( fgets(zLine, sizeof(zLine), in)==0 ) goto readHexDb_error; rc = sscanf(zLine, "| size %d pagesize %d", &n, &pgsz); if( rc!=2 ) goto readHexDb_error; if( n<0 ) goto readHexDb_error; if( pgsz<512 || pgsz>65536 || (pgsz&(pgsz-1))!=0 ) goto readHexDb_error; n = (n+pgsz-1)&~(pgsz-1); /* Round n up to the next multiple of pgsz */ a = sqlite3_malloc( n ? n : 1 ); shell_check_oom(a); memset(a, 0, n); if( pgsz<512 || pgsz>65536 || (pgsz & (pgsz-1))!=0 ){ utf8_printf(stderr, "invalid pagesize\n"); goto readHexDb_error; } for(nLine++; fgets(zLine, sizeof(zLine), in)!=0; nLine++){ rc = sscanf(zLine, "| page %d offset %d", &j, &k); if( rc==2 ){ iOffset = k; continue; |
︙
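The --hexdb reader in the hunk above validates the page size as a power of two in [512, 65536] and rounds the declared database size up to a whole number of pages. Both checks are classic bit tricks that work because pgsz is a power of two, sketched here with stdlib-only helpers:

```c
#include <assert.h>

/* A power of two has exactly one set bit, so x & (x-1) is zero. */
static int valid_pgsz(int pgsz){
  return pgsz>=512 && pgsz<=65536 && (pgsz & (pgsz-1))==0;
}

/* Round n up to the next multiple of pgsz: add pgsz-1, then clear the
** low bits with the mask ~(pgsz-1). */
static int round_to_page(int n, int pgsz){
  return (n + pgsz - 1) & ~(pgsz - 1);
}
```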
23254 23255 23256 23257 23258 23259 23260 | while( fgets(zLine, sizeof(zLine), p->in)!=0 ){ nLine++; if(cli_strncmp(zLine, "| end ", 6)==0 ) break; } p->lineno = nLine; } sqlite3_free(a); | | | 21339 21340 21341 21342 21343 21344 21345 21346 21347 21348 21349 21350 21351 21352 21353 | while( fgets(zLine, sizeof(zLine), p->in)!=0 ){ nLine++; if(cli_strncmp(zLine, "| end ", 6)==0 ) break; } p->lineno = nLine; } sqlite3_free(a); utf8_printf(stderr,"Error on line %d of --hexdb input\n", nLine); return 0; } #endif /* SQLITE_OMIT_DESERIALIZE */ /* ** Scalar function "usleep(X)" invokes sqlite3_sleep(X) and returns X. */ |
︙
23328 23329 23330 23331 23332 23333 23334 23335 | case SHELL_OPEN_UNSPEC: case SHELL_OPEN_NORMAL: { sqlite3_open_v2(zDbFilename, &p->db, SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE|p->openFlags, 0); break; } } if( p->db==0 || SQLITE_OK!=sqlite3_errcode(p->db) ){ | > | | > | > > | | < | 21413 21414 21415 21416 21417 21418 21419 21420 21421 21422 21423 21424 21425 21426 21427 21428 21429 21430 21431 21432 21433 21434 21435 21436 21437 21438 21439 21440 21441 21442 21443 21444 21445 21446 | case SHELL_OPEN_UNSPEC: case SHELL_OPEN_NORMAL: { sqlite3_open_v2(zDbFilename, &p->db, SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE|p->openFlags, 0); break; } } globalDb = p->db; if( p->db==0 || SQLITE_OK!=sqlite3_errcode(p->db) ){ utf8_printf(stderr,"Error: unable to open database \"%s\": %s\n", zDbFilename, sqlite3_errmsg(p->db)); if( (openFlags & OPEN_DB_KEEPALIVE)==0 ){ exit(1); } sqlite3_close(p->db); sqlite3_open(":memory:", &p->db); if( p->db==0 || SQLITE_OK!=sqlite3_errcode(p->db) ){ utf8_printf(stderr, "Also: unable to open substitute in-memory database.\n" ); exit(1); }else{ utf8_printf(stderr, "Notice: using substitute in-memory database instead of \"%s\"\n", zDbFilename); } } sqlite3_db_config(p->db, SQLITE_DBCONFIG_STMT_SCANSTATUS, (int)0, (int*)0); /* Reflect the use or absence of --unsafe-testing invocation. */ { int testmode_on = ShellHasFlag(p,SHFLG_TestingMode); sqlite3_db_config(p->db, SQLITE_DBCONFIG_TRUSTED_SCHEMA, testmode_on,0); sqlite3_db_config(p->db, SQLITE_DBCONFIG_DEFENSIVE, !testmode_on,0); |
︙
23449 23450 23451 23452 23453 23454 23455 | if( aData==0 ){ return; } rc = sqlite3_deserialize(p->db, "main", aData, nData, nData, SQLITE_DESERIALIZE_RESIZEABLE | SQLITE_DESERIALIZE_FREEONCLOSE); if( rc ){ | | | 21537 21538 21539 21540 21541 21542 21543 21544 21545 21546 21547 21548 21549 21550 21551 | if( aData==0 ){ return; } rc = sqlite3_deserialize(p->db, "main", aData, nData, nData, SQLITE_DESERIALIZE_RESIZEABLE | SQLITE_DESERIALIZE_FREEONCLOSE); if( rc ){ utf8_printf(stderr, "Error: sqlite3_deserialize() returns %d\n", rc); } if( p->szMax>0 ){ sqlite3_file_control(p->db, "main", SQLITE_FCNTL_SIZE_LIMIT, &p->szMax); } } #endif } |
︙ | ︙ | |||
23473 23474 23475 23476 23477 23478 23479 | /* ** Attempt to close the database connection. Report errors. */ void close_db(sqlite3 *db){ int rc = sqlite3_close(db); if( rc ){ | | > | 21561 21562 21563 21564 21565 21566 21567 21568 21569 21570 21571 21572 21573 21574 21575 21576 | /* ** Attempt to close the database connection. Report errors. */ void close_db(sqlite3 *db){ int rc = sqlite3_close(db); if( rc ){ utf8_printf(stderr, "Error: sqlite3_close() returns %d: %s\n", rc, sqlite3_errmsg(db)); } } #if HAVE_READLINE || HAVE_EDITLINE /* ** Readline completion callbacks */ |
︙ | ︙ | |||
23634 23635 23636 23637 23638 23639 23640 | if( i>0 && zArg[i]==0 ) return (int)(integerValue(zArg) & 0xffffffff); if( sqlite3_stricmp(zArg, "on")==0 || sqlite3_stricmp(zArg,"yes")==0 ){ return 1; } if( sqlite3_stricmp(zArg, "off")==0 || sqlite3_stricmp(zArg,"no")==0 ){ return 0; } | | > | 21723 21724 21725 21726 21727 21728 21729 21730 21731 21732 21733 21734 21735 21736 21737 21738 | if( i>0 && zArg[i]==0 ) return (int)(integerValue(zArg) & 0xffffffff); if( sqlite3_stricmp(zArg, "on")==0 || sqlite3_stricmp(zArg,"yes")==0 ){ return 1; } if( sqlite3_stricmp(zArg, "off")==0 || sqlite3_stricmp(zArg,"no")==0 ){ return 0; } utf8_printf(stderr, "ERROR: Not a boolean value: \"%s\". Assuming \"no\".\n", zArg); return 0; } /* ** Set or clear a shell flag according to a boolean value. */ static void setOrClearFlag(ShellState *p, unsigned mFlag, const char *zArg){ |
︙ | ︙ | |||
23672 23673 23674 23675 23676 23677 23678 | }else if( cli_strcmp(zFile, "stderr")==0 ){ f = stderr; }else if( cli_strcmp(zFile, "off")==0 ){ f = 0; }else{ f = fopen(zFile, bTextMode ? "w" : "wb"); if( f==0 ){ | | | 21762 21763 21764 21765 21766 21767 21768 21769 21770 21771 21772 21773 21774 21775 21776 | }else if( cli_strcmp(zFile, "stderr")==0 ){ f = stderr; }else if( cli_strcmp(zFile, "off")==0 ){ f = 0; }else{ f = fopen(zFile, bTextMode ? "w" : "wb"); if( f==0 ){ utf8_printf(stderr, "Error: cannot open \"%s\"\n", zFile); } } return f; } #ifndef SQLITE_OMIT_TRACE /* |
︙ | ︙ | |||
23694 23695 23696 23697 23698 23699 23700 | ){ ShellState *p = (ShellState*)pArg; sqlite3_stmt *pStmt; const char *zSql; i64 nSql; if( p->traceOut==0 ) return 0; if( mType==SQLITE_TRACE_CLOSE ){ | | | 21784 21785 21786 21787 21788 21789 21790 21791 21792 21793 21794 21795 21796 21797 21798 | ){ ShellState *p = (ShellState*)pArg; sqlite3_stmt *pStmt; const char *zSql; i64 nSql; if( p->traceOut==0 ) return 0; if( mType==SQLITE_TRACE_CLOSE ){ utf8_printf(p->traceOut, "-- closing database connection\n"); return 0; } if( mType!=SQLITE_TRACE_ROW && pX!=0 && ((const char*)pX)[0]=='-' ){ zSql = (const char*)pX; }else{ pStmt = (sqlite3_stmt*)pP; switch( p->eTraceType ){ |
︙ | ︙ | |||
23725 23726 23727 23728 23729 23730 23731 | if( zSql==0 ) return 0; nSql = strlen(zSql); if( nSql>1000000000 ) nSql = 1000000000; while( nSql>0 && zSql[nSql-1]==';' ){ nSql--; } switch( mType ){ case SQLITE_TRACE_ROW: case SQLITE_TRACE_STMT: { | | | | 21815 21816 21817 21818 21819 21820 21821 21822 21823 21824 21825 21826 21827 21828 21829 21830 21831 21832 21833 21834 | if( zSql==0 ) return 0; nSql = strlen(zSql); if( nSql>1000000000 ) nSql = 1000000000; while( nSql>0 && zSql[nSql-1]==';' ){ nSql--; } switch( mType ){ case SQLITE_TRACE_ROW: case SQLITE_TRACE_STMT: { utf8_printf(p->traceOut, "%.*s;\n", (int)nSql, zSql); break; } case SQLITE_TRACE_PROFILE: { sqlite3_int64 nNanosec = pX ? *(sqlite3_int64*)pX : 0; utf8_printf(p->traceOut, "%.*s; -- %lld ns\n", (int)nSql, zSql, nNanosec); break; } } return 0; } #endif |
︙ | ︙ | |||
23837 23838 23839 23840 23841 23842 23843 | || (c==EOF && pc==cQuote) ){ do{ p->n--; }while( p->z[p->n]!=cQuote ); p->cTerm = c; break; } if( pc==cQuote && c!='\r' ){ | > | | | | 21927 21928 21929 21930 21931 21932 21933 21934 21935 21936 21937 21938 21939 21940 21941 21942 21943 21944 21945 21946 | || (c==EOF && pc==cQuote) ){ do{ p->n--; }while( p->z[p->n]!=cQuote ); p->cTerm = c; break; } if( pc==cQuote && c!='\r' ){ utf8_printf(stderr, "%s:%d: unescaped %c character\n", p->zFile, p->nLine, cQuote); } if( c==EOF ){ utf8_printf(stderr, "%s:%d: unterminated %c-quoted field\n", p->zFile, startLine, cQuote); p->cTerm = c; break; } import_append_char(p, c); ppc = pc; pc = c; } |
︙ | ︙ | |||
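The .import field reader above tracks the previous character to detect unescaped quotes and unterminated quoted fields. A minimal standalone reader for one quoted CSV field, where a doubled quote stands for one literal quote — a simplified sketch of the convention, not the shell's streaming implementation:

```c
#include <string.h>

/* Read one "-quoted field from z into zOut. A doubled quote inside
** the field yields one literal quote. Returns the number of bytes
** consumed (through the closing quote), or -1 if unterminated. */
static int read_quoted_field(const char *z, char *zOut, size_t nOut){
  size_t i = 1, j = 0;                 /* z[0] is the opening quote */
  if( z[0]!='"' ) return -1;
  while( z[i] ){
    if( z[i]=='"' ){
      if( z[i+1]=='"' ){               /* escaped quote */
        if( j+1<nOut ) zOut[j++] = '"';
        i += 2;
        continue;
      }
      zOut[j] = 0;
      return (int)(i+1);               /* closing quote found */
    }
    if( j+1<nOut ) zOut[j++] = z[i];
    i++;
  }
  return -1;                           /* unterminated quoted field */
}
```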
23939 23940 23941 23942 23943 23944 23945 | int cnt = 0; const int spinRate = 10000; zQuery = sqlite3_mprintf("SELECT * FROM \"%w\"", zTable); shell_check_oom(zQuery); rc = sqlite3_prepare_v2(p->db, zQuery, -1, &pQuery, 0); if( rc ){ | | | > | | > | 22030 22031 22032 22033 22034 22035 22036 22037 22038 22039 22040 22041 22042 22043 22044 22045 22046 22047 22048 22049 22050 22051 22052 22053 22054 22055 22056 22057 22058 22059 22060 22061 22062 22063 22064 | int cnt = 0; const int spinRate = 10000; zQuery = sqlite3_mprintf("SELECT * FROM \"%w\"", zTable); shell_check_oom(zQuery); rc = sqlite3_prepare_v2(p->db, zQuery, -1, &pQuery, 0); if( rc ){ utf8_printf(stderr, "Error %d: %s on [%s]\n", sqlite3_extended_errcode(p->db), sqlite3_errmsg(p->db), zQuery); goto end_data_xfer; } n = sqlite3_column_count(pQuery); zInsert = sqlite3_malloc64(200 + nTable + n*3); shell_check_oom(zInsert); sqlite3_snprintf(200+nTable,zInsert, "INSERT OR IGNORE INTO \"%s\" VALUES(?", zTable); i = strlen30(zInsert); for(j=1; j<n; j++){ memcpy(zInsert+i, ",?", 2); i += 2; } memcpy(zInsert+i, ");", 3); rc = sqlite3_prepare_v2(newDb, zInsert, -1, &pInsert, 0); if( rc ){ utf8_printf(stderr, "Error %d: %s on [%s]\n", sqlite3_extended_errcode(newDb), sqlite3_errmsg(newDb), zInsert); goto end_data_xfer; } for(k=0; k<2; k++){ while( (rc = sqlite3_step(pQuery))==SQLITE_ROW ){ for(i=0; i<n; i++){ switch( sqlite3_column_type(pQuery, i) ){ case SQLITE_NULL: { |
︙ | ︙ | |||
23992 23993 23994 23995 23996 23997 23998 | SQLITE_STATIC); break; } } } /* End for */ rc = sqlite3_step(pInsert); if( rc!=SQLITE_OK && rc!=SQLITE_ROW && rc!=SQLITE_DONE ){ | | | | | 22085 22086 22087 22088 22089 22090 22091 22092 22093 22094 22095 22096 22097 22098 22099 22100 22101 22102 22103 22104 22105 22106 22107 22108 22109 22110 22111 22112 22113 22114 22115 22116 22117 | SQLITE_STATIC); break; } } } /* End for */ rc = sqlite3_step(pInsert); if( rc!=SQLITE_OK && rc!=SQLITE_ROW && rc!=SQLITE_DONE ){ utf8_printf(stderr, "Error %d: %s\n", sqlite3_extended_errcode(newDb), sqlite3_errmsg(newDb)); } sqlite3_reset(pInsert); cnt++; if( (cnt%spinRate)==0 ){ printf("%c\b", "|/-\\"[(cnt/spinRate)%4]); fflush(stdout); } } /* End while */ if( rc==SQLITE_DONE ) break; sqlite3_finalize(pQuery); sqlite3_free(zQuery); zQuery = sqlite3_mprintf("SELECT * FROM \"%w\" ORDER BY rowid DESC;", zTable); shell_check_oom(zQuery); rc = sqlite3_prepare_v2(p->db, zQuery, -1, &pQuery, 0); if( rc ){ utf8_printf(stderr, "Warning: cannot step \"%s\" backwards", zTable); break; } } /* End for(k=0...) */ end_data_xfer: sqlite3_finalize(pQuery); sqlite3_finalize(pInsert); |
︙ | ︙ | |||
24047 24048 24049 24050 24051 24052 24053 | char *zErrMsg = 0; zQuery = sqlite3_mprintf("SELECT name, sql FROM sqlite_schema" " WHERE %s ORDER BY rowid ASC", zWhere); shell_check_oom(zQuery); rc = sqlite3_prepare_v2(p->db, zQuery, -1, &pQuery, 0); if( rc ){ | | | > | | | | | > | | | | | > < < < < < < < < < < < < | 22140 22141 22142 22143 22144 22145 22146 22147 22148 22149 22150 22151 22152 22153 22154 22155 22156 22157 22158 22159 22160 22161 22162 22163 22164 22165 22166 22167 22168 22169 22170 22171 22172 22173 22174 22175 22176 22177 22178 22179 22180 22181 22182 22183 22184 22185 22186 22187 22188 22189 22190 22191 22192 22193 22194 22195 22196 22197 22198 22199 22200 22201 22202 22203 22204 22205 22206 22207 22208 22209 22210 22211 22212 22213 22214 22215 22216 22217 22218 22219 22220 22221 22222 22223 22224 22225 22226 22227 22228 22229 22230 22231 22232 22233 22234 22235 22236 22237 22238 22239 | char *zErrMsg = 0; zQuery = sqlite3_mprintf("SELECT name, sql FROM sqlite_schema" " WHERE %s ORDER BY rowid ASC", zWhere); shell_check_oom(zQuery); rc = sqlite3_prepare_v2(p->db, zQuery, -1, &pQuery, 0); if( rc ){ utf8_printf(stderr, "Error: (%d) %s on [%s]\n", sqlite3_extended_errcode(p->db), sqlite3_errmsg(p->db), zQuery); goto end_schema_xfer; } while( (rc = sqlite3_step(pQuery))==SQLITE_ROW ){ zName = sqlite3_column_text(pQuery, 0); zSql = sqlite3_column_text(pQuery, 1); if( zName==0 || zSql==0 ) continue; if( sqlite3_stricmp((char*)zName, "sqlite_sequence")!=0 ){ printf("%s... 
", zName); fflush(stdout); sqlite3_exec(newDb, (const char*)zSql, 0, 0, &zErrMsg); if( zErrMsg ){ utf8_printf(stderr, "Error: %s\nSQL: [%s]\n", zErrMsg, zSql); sqlite3_free(zErrMsg); zErrMsg = 0; } } if( xForEach ){ xForEach(p, newDb, (const char*)zName); } printf("done\n"); } if( rc!=SQLITE_DONE ){ sqlite3_finalize(pQuery); sqlite3_free(zQuery); zQuery = sqlite3_mprintf("SELECT name, sql FROM sqlite_schema" " WHERE %s ORDER BY rowid DESC", zWhere); shell_check_oom(zQuery); rc = sqlite3_prepare_v2(p->db, zQuery, -1, &pQuery, 0); if( rc ){ utf8_printf(stderr, "Error: (%d) %s on [%s]\n", sqlite3_extended_errcode(p->db), sqlite3_errmsg(p->db), zQuery); goto end_schema_xfer; } while( sqlite3_step(pQuery)==SQLITE_ROW ){ zName = sqlite3_column_text(pQuery, 0); zSql = sqlite3_column_text(pQuery, 1); if( zName==0 || zSql==0 ) continue; if( sqlite3_stricmp((char*)zName, "sqlite_sequence")==0 ) continue; printf("%s... ", zName); fflush(stdout); sqlite3_exec(newDb, (const char*)zSql, 0, 0, &zErrMsg); if( zErrMsg ){ utf8_printf(stderr, "Error: %s\nSQL: [%s]\n", zErrMsg, zSql); sqlite3_free(zErrMsg); zErrMsg = 0; } if( xForEach ){ xForEach(p, newDb, (const char*)zName); } printf("done\n"); } } end_schema_xfer: sqlite3_finalize(pQuery); sqlite3_free(zQuery); } /* ** Open a new database file named "zNewDb". Try to recover as much information ** as possible out of the main database (which might be corrupt) and write it ** into zNewDb. 
*/ static void tryToClone(ShellState *p, const char *zNewDb){ int rc; sqlite3 *newDb = 0; if( access(zNewDb,0)==0 ){ utf8_printf(stderr, "File \"%s\" already exists.\n", zNewDb); return; } rc = sqlite3_open(zNewDb, &newDb); if( rc ){ utf8_printf(stderr, "Cannot create output database: %s\n", sqlite3_errmsg(newDb)); }else{ sqlite3_exec(p->db, "PRAGMA writable_schema=ON;", 0, 0, 0); sqlite3_exec(newDb, "BEGIN EXCLUSIVE;", 0, 0, 0); tryToCloneSchema(p, newDb, "type='table'", tryToCloneData); tryToCloneSchema(p, newDb, "type!='table'", 0); sqlite3_exec(newDb, "COMMIT;", 0, 0, 0); sqlite3_exec(p->db, "PRAGMA writable_schema=OFF;", 0, 0, 0); } close_db(newDb); } /* ** Change the output file back to stdout. ** ** If the p->doXdgOpen flag is set, that means the output was being ** redirected to a temporary file named by p->zTempFile. In that case, ** launch start/open/xdg-open on that temporary file. */ |
︙ | ︙ | |||
24169 24170 24171 24172 24173 24174 24175 | "open"; #else "xdg-open"; #endif char *zCmd; zCmd = sqlite3_mprintf("%s %s", zXdgOpenCmd, p->zTempFile); if( system(zCmd) ){ | | < < < < < | 22253 22254 22255 22256 22257 22258 22259 22260 22261 22262 22263 22264 22265 22266 22267 22268 22269 22270 22271 22272 22273 22274 22275 22276 22277 22278 22279 22280 22281 22282 | "open"; #else "xdg-open"; #endif char *zCmd; zCmd = sqlite3_mprintf("%s %s", zXdgOpenCmd, p->zTempFile); if( system(zCmd) ){ utf8_printf(stderr, "Failed: [%s]\n", zCmd); }else{ /* Give the start/open/xdg-open command some time to get ** going before we continue, and potential delete the ** p->zTempFile data file out from under it */ sqlite3_sleep(2000); } sqlite3_free(zCmd); outputModePop(p); p->doXdgOpen = 0; } #endif /* !defined(SQLITE_NOHAVE_SYSTEM) */ } p->outfile[0] = 0; p->out = stdout; } /* ** Run an SQL command and return the single integer result. */ static int db_int(sqlite3 *db, const char *zSql){ sqlite3_stmt *pStmt; int res = 0; |
︙ | ︙ | |||
24260 24261 24262 24263 24264 24265 24266 | unsigned char aHdr[100]; open_db(p, 0); if( p->db==0 ) return 1; rc = sqlite3_prepare_v2(p->db, "SELECT data FROM sqlite_dbpage(?1) WHERE pgno=1", -1, &pStmt, 0); if( rc ){ | | | | | | | | | | | | | | | | 22339 22340 22341 22342 22343 22344 22345 22346 22347 22348 22349 22350 22351 22352 22353 22354 22355 22356 22357 22358 22359 22360 22361 22362 22363 22364 22365 22366 22367 22368 22369 22370 22371 22372 22373 22374 22375 22376 22377 22378 22379 22380 22381 22382 22383 22384 22385 22386 22387 22388 22389 22390 22391 22392 22393 22394 22395 22396 22397 22398 22399 22400 22401 22402 22403 22404 22405 22406 22407 22408 22409 22410 22411 22412 22413 22414 | unsigned char aHdr[100]; open_db(p, 0); if( p->db==0 ) return 1; rc = sqlite3_prepare_v2(p->db, "SELECT data FROM sqlite_dbpage(?1) WHERE pgno=1", -1, &pStmt, 0); if( rc ){ utf8_printf(stderr, "error: %s\n", sqlite3_errmsg(p->db)); sqlite3_finalize(pStmt); return 1; } sqlite3_bind_text(pStmt, 1, zDb, -1, SQLITE_STATIC); if( sqlite3_step(pStmt)==SQLITE_ROW && sqlite3_column_bytes(pStmt,0)>100 ){ const u8 *pb = sqlite3_column_blob(pStmt,0); shell_check_oom(pb); memcpy(aHdr, pb, 100); sqlite3_finalize(pStmt); }else{ raw_printf(stderr, "unable to read database header\n"); sqlite3_finalize(pStmt); return 1; } i = get2byteInt(aHdr+16); if( i==1 ) i = 65536; utf8_printf(p->out, "%-20s %d\n", "database page size:", i); utf8_printf(p->out, "%-20s %d\n", "write format:", aHdr[18]); utf8_printf(p->out, "%-20s %d\n", "read format:", aHdr[19]); utf8_printf(p->out, "%-20s %d\n", "reserved bytes:", aHdr[20]); for(i=0; i<ArraySize(aField); i++){ int ofst = aField[i].ofst; unsigned int val = get4byteInt(aHdr + ofst); utf8_printf(p->out, "%-20s %u", aField[i].zName, val); switch( ofst ){ case 56: { if( val==1 ) raw_printf(p->out, " (utf8)"); if( val==2 ) raw_printf(p->out, " (utf16le)"); if( val==3 ) raw_printf(p->out, " (utf16be)"); } } raw_printf(p->out, "\n"); } if( zDb==0 ){ zSchemaTab 
= sqlite3_mprintf("main.sqlite_schema"); }else if( cli_strcmp(zDb,"temp")==0 ){ zSchemaTab = sqlite3_mprintf("%s", "sqlite_temp_schema"); }else{ zSchemaTab = sqlite3_mprintf("\"%w\".sqlite_schema", zDb); } for(i=0; i<ArraySize(aQuery); i++){ char *zSql = sqlite3_mprintf(aQuery[i].zSql, zSchemaTab); int val = db_int(p->db, zSql); sqlite3_free(zSql); utf8_printf(p->out, "%-20s %d\n", aQuery[i].zName, val); } sqlite3_free(zSchemaTab); sqlite3_file_control(p->db, zDb, SQLITE_FCNTL_DATA_VERSION, &iDataVersion); utf8_printf(p->out, "%-20s %u\n", "data version", iDataVersion); return 0; } #endif /* SQLITE_SHELL_HAVE_RECOVER */ /* ** Print the current sqlite3_errmsg() value to stderr and return 1. */ static int shellDatabaseError(sqlite3 *db){ const char *zErr = sqlite3_errmsg(db); utf8_printf(stderr, "Error: %s\n", zErr); return 1; } /* ** Compare the pattern in zGlob[] against the text in z[]. Return TRUE ** if they match and FALSE (0) if they do not match. ** |
︙ | ︙ | |||
24556 24557 24558 24559 24560 24561 24562 24563 24564 24565 24566 24567 24568 24569 | */ static int lintFkeyIndexes( ShellState *pState, /* Current shell tool state */ char **azArg, /* Array of arguments passed to dot command */ int nArg /* Number of entries in azArg[] */ ){ sqlite3 *db = pState->db; /* Database handle to query "main" db of */ int bVerbose = 0; /* If -verbose is present */ int bGroupByParent = 0; /* If -groupbyparent is present */ int i; /* To iterate through azArg[] */ const char *zIndent = ""; /* How much to indent CREATE INDEX by */ int rc; /* Return code */ sqlite3_stmt *pSql = 0; /* Compiled version of SQL statement below */ | > | 22635 22636 22637 22638 22639 22640 22641 22642 22643 22644 22645 22646 22647 22648 22649 | */ static int lintFkeyIndexes( ShellState *pState, /* Current shell tool state */ char **azArg, /* Array of arguments passed to dot command */ int nArg /* Number of entries in azArg[] */ ){ sqlite3 *db = pState->db; /* Database handle to query "main" db of */ FILE *out = pState->out; /* Stream to write non-error output to */ int bVerbose = 0; /* If -verbose is present */ int bGroupByParent = 0; /* If -groupbyparent is present */ int i; /* To iterate through azArg[] */ const char *zIndent = ""; /* How much to indent CREATE INDEX by */ int rc; /* Return code */ sqlite3_stmt *pSql = 0; /* Compiled version of SQL statement below */ |
︙ | ︙ | |||
24637 24638 24639 24640 24641 24642 24643 | bVerbose = 1; } else if( n>1 && sqlite3_strnicmp("-groupbyparent", azArg[i], n)==0 ){ bGroupByParent = 1; zIndent = " "; } else{ | | > > | 22717 22718 22719 22720 22721 22722 22723 22724 22725 22726 22727 22728 22729 22730 22731 22732 22733 | bVerbose = 1; } else if( n>1 && sqlite3_strnicmp("-groupbyparent", azArg[i], n)==0 ){ bGroupByParent = 1; zIndent = " "; } else{ raw_printf(stderr, "Usage: %s %s ?-verbose? ?-groupbyparent?\n", azArg[0], azArg[1] ); return SQLITE_ERROR; } } /* Register the fkey_collate_clause() SQL function */ rc = sqlite3_create_function(db, "fkey_collate_clause", 4, SQLITE_UTF8, 0, shellFkeyCollateClause, 0, 0 |
︙ | ︙ | |||
24681 24682 24683 24684 24685 24686 24687 | res = zPlan!=0 && ( 0==sqlite3_strglob(zGlob, zPlan) || 0==sqlite3_strglob(zGlobIPK, zPlan)); } rc = sqlite3_finalize(pExplain); if( rc!=SQLITE_OK ) break; if( res<0 ){ | | | | | | | | | | | | > > | > > > > > | | 22763 22764 22765 22766 22767 22768 22769 22770 22771 22772 22773 22774 22775 22776 22777 22778 22779 22780 22781 22782 22783 22784 22785 22786 22787 22788 22789 22790 22791 22792 22793 22794 22795 22796 22797 22798 22799 22800 22801 22802 22803 22804 22805 22806 22807 22808 22809 22810 22811 22812 22813 22814 22815 22816 22817 22818 22819 22820 22821 22822 22823 22824 22825 22826 22827 22828 22829 22830 22831 22832 22833 22834 22835 22836 22837 22838 22839 22840 22841 22842 22843 22844 22845 22846 22847 22848 22849 22850 22851 22852 22853 22854 22855 22856 22857 22858 22859 22860 22861 22862 | res = zPlan!=0 && ( 0==sqlite3_strglob(zGlob, zPlan) || 0==sqlite3_strglob(zGlobIPK, zPlan)); } rc = sqlite3_finalize(pExplain); if( rc!=SQLITE_OK ) break; if( res<0 ){ raw_printf(stderr, "Error: internal error"); break; }else{ if( bGroupByParent && (bVerbose || res==0) && (zPrev==0 || sqlite3_stricmp(zParent, zPrev)) ){ raw_printf(out, "-- Parent table %s\n", zParent); sqlite3_free(zPrev); zPrev = sqlite3_mprintf("%s", zParent); } if( res==0 ){ raw_printf(out, "%s%s --> %s\n", zIndent, zCI, zTarget); }else if( bVerbose ){ raw_printf(out, "%s/* no extra indexes required for %s -> %s */\n", zIndent, zFrom, zTarget ); } } } sqlite3_free(zPrev); if( rc!=SQLITE_OK ){ raw_printf(stderr, "%s\n", sqlite3_errmsg(db)); } rc2 = sqlite3_finalize(pSql); if( rc==SQLITE_OK && rc2!=SQLITE_OK ){ rc = rc2; raw_printf(stderr, "%s\n", sqlite3_errmsg(db)); } }else{ raw_printf(stderr, "%s\n", sqlite3_errmsg(db)); } return rc; } /* ** Implementation of ".lint" dot command. 
*/ static int lintDotCommand( ShellState *pState, /* Current shell tool state */ char **azArg, /* Array of arguments passed to dot command */ int nArg /* Number of entries in azArg[] */ ){ int n; n = (nArg>=2 ? strlen30(azArg[1]) : 0); if( n<1 || sqlite3_strnicmp(azArg[1], "fkey-indexes", n) ) goto usage; return lintFkeyIndexes(pState, azArg, nArg); usage: raw_printf(stderr, "Usage %s sub-command ?switches...?\n", azArg[0]); raw_printf(stderr, "Where sub-commands are:\n"); raw_printf(stderr, " fkey-indexes\n"); return SQLITE_ERROR; } #if !defined SQLITE_OMIT_VIRTUALTABLE static void shellPrepare( sqlite3 *db, int *pRc, const char *zSql, sqlite3_stmt **ppStmt ){ *ppStmt = 0; if( *pRc==SQLITE_OK ){ int rc = sqlite3_prepare_v2(db, zSql, -1, ppStmt, 0); if( rc!=SQLITE_OK ){ raw_printf(stderr, "sql error: %s (%d)\n", sqlite3_errmsg(db), sqlite3_errcode(db) ); *pRc = rc; } } } /* ** Create a prepared statement using printf-style arguments for the SQL. ** ** This routine is could be marked "static". But it is not always used, ** depending on compile-time options. By omitting the "static", we avoid ** nuisance compiler warnings about "defined but not used". */ void shellPreparePrintf( sqlite3 *db, int *pRc, sqlite3_stmt **ppStmt, const char *zFmt, ... ){ *ppStmt = 0; |
︙ | ︙ | |||
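The .lint fkey-indexes code above decides whether an index exists by matching EXPLAIN QUERY PLAN output against glob patterns with sqlite3_strglob(). A simplified standalone matcher supporting only `*` and `?` (sqlite3_strglob() also handles `[...]` classes and returns 0 on a match; this sketch returns 1 on a match instead):

```c
/* Return 1 if z matches the glob pattern zPat ('*' matches any run
** of characters, '?' any single character); 0 otherwise. */
static int glob_match(const char *zPat, const char *z){
  while( *zPat ){
    if( *zPat=='*' ){
      while( zPat[1]=='*' ) zPat++;      /* collapse runs of '*' */
      if( zPat[1]==0 ) return 1;         /* trailing '*' matches rest */
      for(; *z; z++){
        if( glob_match(zPat+1, z) ) return 1;
      }
      return 0;
    }
    if( *z==0 ) return 0;
    if( *zPat!='?' && *zPat!=*z ) return 0;
    zPat++;
    z++;
  }
  return *z==0;
}
```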
24782 24783 24784 24785 24786 24787 24788 | }else{ shellPrepare(db, pRc, z, ppStmt); sqlite3_free(z); } } } | < | > > > > | | < | | 22871 22872 22873 22874 22875 22876 22877 22878 22879 22880 22881 22882 22883 22884 22885 22886 22887 22888 22889 22890 22891 22892 22893 22894 22895 22896 22897 22898 22899 22900 22901 22902 22903 22904 22905 22906 22907 22908 22909 22910 22911 22912 22913 22914 22915 22916 22917 22918 22919 22920 22921 | }else{ shellPrepare(db, pRc, z, ppStmt); sqlite3_free(z); } } } /* Finalize the prepared statement created using shellPreparePrintf(). ** ** This routine is could be marked "static". But it is not always used, ** depending on compile-time options. By omitting the "static", we avoid ** nuisance compiler warnings about "defined but not used". */ void shellFinalize( int *pRc, sqlite3_stmt *pStmt ){ if( pStmt ){ sqlite3 *db = sqlite3_db_handle(pStmt); int rc = sqlite3_finalize(pStmt); if( *pRc==SQLITE_OK ){ if( rc!=SQLITE_OK ){ raw_printf(stderr, "SQL error: %s\n", sqlite3_errmsg(db)); } *pRc = rc; } } } /* Reset the prepared statement created using shellPreparePrintf(). ** ** This routine is could be marked "static". But it is not always used, ** depending on compile-time options. By omitting the "static", we avoid ** nuisance compiler warnings about "defined but not used". */ void shellReset( int *pRc, sqlite3_stmt *pStmt ){ int rc = sqlite3_reset(pStmt); if( *pRc==SQLITE_OK ){ if( rc!=SQLITE_OK ){ sqlite3 *db = sqlite3_db_handle(pStmt); raw_printf(stderr, "SQL error: %s\n", sqlite3_errmsg(db)); } *pRc = rc; } } #endif /* !defined SQLITE_OMIT_VIRTUALTABLE */ #if !defined(SQLITE_OMIT_VIRTUALTABLE) && defined(SQLITE_HAVE_ZLIB) |
︙ | ︙ | |||
24866 24867 24868 24869 24870 24871 24872 | */ static int arErrorMsg(ArCommand *pAr, const char *zFmt, ...){ va_list ap; char *z; va_start(ap, zFmt); z = sqlite3_vmprintf(zFmt, ap); va_end(ap); | | | | | 22957 22958 22959 22960 22961 22962 22963 22964 22965 22966 22967 22968 22969 22970 22971 22972 22973 22974 22975 | */ static int arErrorMsg(ArCommand *pAr, const char *zFmt, ...){ va_list ap; char *z; va_start(ap, zFmt); z = sqlite3_vmprintf(zFmt, ap); va_end(ap); utf8_printf(stderr, "Error: %s\n", z); if( pAr->fromCmdLine ){ utf8_printf(stderr, "Use \"-A\" for more help\n"); }else{ utf8_printf(stderr, "Use \".archive --help\" for more help\n"); } sqlite3_free(z); return SQLITE_ERROR; } /* ** Values for ArCommand.eCmd. |
︙ | ︙ | |||
24970 24971 24972 24973 24974 24975 24976 | { "dryrun", 'n', AR_SWITCH_DRYRUN, 0 }, { "glob", 'g', AR_SWITCH_GLOB, 0 }, }; int nSwitch = sizeof(aSwitch) / sizeof(struct ArSwitch); struct ArSwitch *pEnd = &aSwitch[nSwitch]; if( nArg<=1 ){ | | | 23061 23062 23063 23064 23065 23066 23067 23068 23069 23070 23071 23072 23073 23074 23075 | { "dryrun", 'n', AR_SWITCH_DRYRUN, 0 }, { "glob", 'g', AR_SWITCH_GLOB, 0 }, }; int nSwitch = sizeof(aSwitch) / sizeof(struct ArSwitch); struct ArSwitch *pEnd = &aSwitch[nSwitch]; if( nArg<=1 ){ utf8_printf(stderr, "Wrong number of arguments. Usage:\n"); return arUsage(stderr); }else{ char *z = azArg[1]; if( z[0]!='-' ){ /* Traditional style [tar] invocation */ int i; int iArg = 2; |
︙ | ︙ | |||
25076 25077 25078 25079 25080 25081 25082 | } if( arProcessSwitch(pAr, pMatch->eSwitch, zArg) ) return SQLITE_ERROR; } } } } if( pAr->eCmd==0 ){ | | | 23167 23168 23169 23170 23171 23172 23173 23174 23175 23176 23177 23178 23179 23180 23181 | } if( arProcessSwitch(pAr, pMatch->eSwitch, zArg) ) return SQLITE_ERROR; } } } } if( pAr->eCmd==0 ){ utf8_printf(stderr, "Required argument missing. Usage:\n"); return arUsage(stderr); } return SQLITE_OK; } /* ** This function assumes that all arguments within the ArCommand.azArg[] |
︙ | ︙ | |||
25119 25120 25121 25122 25123 25124 25125 | z[n] = '\0'; sqlite3_bind_text(pTest, j, z, -1, SQLITE_STATIC); if( SQLITE_ROW==sqlite3_step(pTest) ){ bOk = 1; } shellReset(&rc, pTest); if( rc==SQLITE_OK && bOk==0 ){ | | | 23210 23211 23212 23213 23214 23215 23216 23217 23218 23219 23220 23221 23222 23223 23224 | z[n] = '\0'; sqlite3_bind_text(pTest, j, z, -1, SQLITE_STATIC); if( SQLITE_ROW==sqlite3_step(pTest) ){ bOk = 1; } shellReset(&rc, pTest); if( rc==SQLITE_OK && bOk==0 ){ utf8_printf(stderr, "not found in archive: %s\n", z); rc = SQLITE_ERROR; } } shellFinalize(&rc, pTest); } return rc; } |
︙ | ︙ | |||
25186 25187 25188 25189 25190 25191 25192 | rc = arCheckEntries(pAr); arWhereClause(&rc, pAr, &zWhere); shellPreparePrintf(pAr->db, &rc, &pSql, zSql, azCols[pAr->bVerbose], pAr->zSrcTable, zWhere); if( pAr->bDryRun ){ | | | | > | > > | > | | | 23277 23278 23279 23280 23281 23282 23283 23284 23285 23286 23287 23288 23289 23290 23291 23292 23293 23294 23295 23296 23297 23298 23299 23300 23301 23302 23303 23304 23305 23306 23307 23308 23309 23310 23311 23312 23313 23314 23315 23316 23317 23318 23319 23320 23321 23322 23323 23324 23325 23326 23327 23328 23329 23330 23331 23332 23333 23334 23335 23336 23337 23338 23339 23340 23341 23342 23343 | rc = arCheckEntries(pAr); arWhereClause(&rc, pAr, &zWhere); shellPreparePrintf(pAr->db, &rc, &pSql, zSql, azCols[pAr->bVerbose], pAr->zSrcTable, zWhere); if( pAr->bDryRun ){ utf8_printf(pAr->p->out, "%s\n", sqlite3_sql(pSql)); }else{ while( rc==SQLITE_OK && SQLITE_ROW==sqlite3_step(pSql) ){ if( pAr->bVerbose ){ utf8_printf(pAr->p->out, "%s % 10d %s %s\n", sqlite3_column_text(pSql, 0), sqlite3_column_int(pSql, 1), sqlite3_column_text(pSql, 2), sqlite3_column_text(pSql, 3) ); }else{ utf8_printf(pAr->p->out, "%s\n", sqlite3_column_text(pSql, 0)); } } } shellFinalize(&rc, pSql); sqlite3_free(zWhere); return rc; } /* ** Implementation of .ar "Remove" command. */ static int arRemoveCommand(ArCommand *pAr){ int rc = 0; char *zSql = 0; char *zWhere = 0; if( pAr->nArg ){ /* Verify that args actually exist within the archive before proceeding. ** And formulate a WHERE clause to match them. 
*/ rc = arCheckEntries(pAr); arWhereClause(&rc, pAr, &zWhere); } if( rc==SQLITE_OK ){ zSql = sqlite3_mprintf("DELETE FROM %s WHERE %s;", pAr->zSrcTable, zWhere); if( pAr->bDryRun ){ utf8_printf(pAr->p->out, "%s\n", zSql); }else{ char *zErr = 0; rc = sqlite3_exec(pAr->db, "SAVEPOINT ar;", 0, 0, 0); if( rc==SQLITE_OK ){ rc = sqlite3_exec(pAr->db, zSql, 0, 0, &zErr); if( rc!=SQLITE_OK ){ sqlite3_exec(pAr->db, "ROLLBACK TO ar; RELEASE ar;", 0, 0, 0); }else{ rc = sqlite3_exec(pAr->db, "RELEASE ar;", 0, 0, 0); } } if( zErr ){ utf8_printf(stdout, "ERROR: %s\n", zErr); sqlite3_free(zErr); } } } sqlite3_free(zWhere); sqlite3_free(zSql); return rc; |
︙ | ︙ | |||
25298 25299 25300 25301 25302 25303 25304 | ** only for the directories. This is because the timestamps for ** extracted directories must be reset after they are populated (as ** populating them changes the timestamp). */ for(i=0; i<2; i++){ j = sqlite3_bind_parameter_index(pSql, "$dirOnly"); sqlite3_bind_int(pSql, j, i); if( pAr->bDryRun ){ | | | | | | 23393 23394 23395 23396 23397 23398 23399 23400 23401 23402 23403 23404 23405 23406 23407 23408 23409 23410 23411 23412 23413 23414 23415 23416 23417 23418 23419 23420 23421 23422 23423 23424 23425 23426 23427 23428 23429 23430 23431 23432 23433 23434 23435 23436 23437 | ** only for the directories. This is because the timestamps for ** extracted directories must be reset after they are populated (as ** populating them changes the timestamp). */ for(i=0; i<2; i++){ j = sqlite3_bind_parameter_index(pSql, "$dirOnly"); sqlite3_bind_int(pSql, j, i); if( pAr->bDryRun ){ utf8_printf(pAr->p->out, "%s\n", sqlite3_sql(pSql)); }else{ while( rc==SQLITE_OK && SQLITE_ROW==sqlite3_step(pSql) ){ if( i==0 && pAr->bVerbose ){ utf8_printf(pAr->p->out, "%s\n", sqlite3_column_text(pSql, 0)); } } } shellReset(&rc, pSql); } shellFinalize(&rc, pSql); } sqlite3_free(zDir); sqlite3_free(zWhere); return rc; } /* ** Run the SQL statement in zSql. Or if doing a --dryrun, merely print it out. */ static int arExecSql(ArCommand *pAr, const char *zSql){ int rc; if( pAr->bDryRun ){ utf8_printf(pAr->p->out, "%s\n", zSql); rc = SQLITE_OK; }else{ char *zErr = 0; rc = sqlite3_exec(pAr->db, zSql, 0, 0, &zErr); if( zErr ){ utf8_printf(stdout, "ERROR: %s\n", zErr); sqlite3_free(zErr); } } return rc; } |
︙ | ︙ | |||
25503 25504 25505 25506 25507 25508 25509 | || cmd.eCmd==AR_CMD_REMOVE || cmd.eCmd==AR_CMD_UPDATE ){ flags = SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE; }else{ flags = SQLITE_OPEN_READONLY; } cmd.db = 0; if( cmd.bDryRun ){ | | | > | > | | 23598 23599 23600 23601 23602 23603 23604 23605 23606 23607 23608 23609 23610 23611 23612 23613 23614 23615 23616 23617 23618 23619 23620 23621 23622 23623 23624 23625 23626 23627 23628 23629 23630 23631 23632 23633 | || cmd.eCmd==AR_CMD_REMOVE || cmd.eCmd==AR_CMD_UPDATE ){ flags = SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE; }else{ flags = SQLITE_OPEN_READONLY; } cmd.db = 0; if( cmd.bDryRun ){ utf8_printf(pState->out, "-- open database '%s'%s\n", cmd.zFile, eDbType==SHELL_OPEN_APPENDVFS ? " using 'apndvfs'" : ""); } rc = sqlite3_open_v2(cmd.zFile, &cmd.db, flags, eDbType==SHELL_OPEN_APPENDVFS ? "apndvfs" : 0); if( rc!=SQLITE_OK ){ utf8_printf(stderr, "cannot open file: %s (%s)\n", cmd.zFile, sqlite3_errmsg(cmd.db) ); goto end_ar_command; } sqlite3_fileio_init(cmd.db, 0, 0); sqlite3_sqlar_init(cmd.db, 0, 0); sqlite3_create_function(cmd.db, "shell_putsnl", 1, SQLITE_UTF8, cmd.p, shellPutsFunc, 0, 0); } if( cmd.zSrcTable==0 && cmd.bZip==0 && cmd.eCmd!=AR_CMD_HELP ){ if( cmd.eCmd!=AR_CMD_CREATE && sqlite3_table_column_metadata(cmd.db,0,"sqlar","name",0,0,0,0,0) ){ utf8_printf(stderr, "database does not contain an 'sqlar' table\n"); rc = SQLITE_ERROR; goto end_ar_command; } cmd.zSrcTable = sqlite3_mprintf("sqlar"); } switch( cmd.eCmd ){ |
︙ | ︙ | |||
25580 25581 25582 25583 25584 25585 25586 | /* ** This function is used as a callback by the recover extension. Simply ** print the supplied SQL statement to stdout. */ static int recoverSqlCb(void *pCtx, const char *zSql){ ShellState *pState = (ShellState*)pCtx; | | | 23677 23678 23679 23680 23681 23682 23683 23684 23685 23686 23687 23688 23689 23690 23691 | /* ** This function is used as a callback by the recover extension. Simply ** print the supplied SQL statement to stdout. */ static int recoverSqlCb(void *pCtx, const char *zSql){ ShellState *pState = (ShellState*)pCtx; utf8_printf(pState->out, "%s;\n", zSql); return SQLITE_OK; } /* ** This function is called to recover data from the database. A script ** to construct a new database containing all recovered data is output ** on stream pState->out. |
︙ | ︙ | |||
25623-25629 | 23720-23778

      i++;
      zLAF = azArg[i];
    }else if( n<=10 && memcmp("-no-rowids", z, n)==0 ){
      bRowids = 0;
    }
    else{
      utf8_printf(stderr, "unexpected option: %s\n", azArg[i]);
      showHelp(pState->out, azArg[0]);
      return 1;
    }
  }

  p = sqlite3_recover_init_sql(
      pState->db, "main", recoverSqlCb, (void*)pState
  );

  sqlite3_recover_config(p, 789, (void*)zRecoveryDb);  /* Debug use only */
  sqlite3_recover_config(p, SQLITE_RECOVER_LOST_AND_FOUND, (void*)zLAF);
  sqlite3_recover_config(p, SQLITE_RECOVER_ROWIDS, (void*)&bRowids);
  sqlite3_recover_config(p, SQLITE_RECOVER_FREELIST_CORRUPT,(void*)&bFreelist);

  sqlite3_recover_run(p);
  if( sqlite3_recover_errcode(p)!=SQLITE_OK ){
    const char *zErr = sqlite3_recover_errmsg(p);
    int errCode = sqlite3_recover_errcode(p);
    raw_printf(stderr, "sql error: %s (%d)\n", zErr, errCode);
  }
  rc = sqlite3_recover_finish(p);
  return rc;
}
#endif /* SQLITE_SHELL_HAVE_RECOVER */

/*
 * zAutoColumn(zCol, &db, ?) => Maybe init db, add column zCol to it.
 * zAutoColumn(0, &db, ?) => (db!=0) Form columns spec for CREATE TABLE,
 *   close db and set it to 0, and return the columns spec, to later
 *   be sqlite3_free()'ed by the caller.
 * The return is 0 when either:
 *   (a) The db was not initialized and zCol==0 (There are no columns.)
 *   (b) zCol!=0  (Column was added, db initialized as needed.)
 * The 3rd argument, pRenamed, references an out parameter. If the
 * pointer is non-zero, its referent will be set to a summary of renames
 * done if renaming was necessary, or set to 0 if none was done. The out
 * string (if any) must be sqlite3_free()'ed by the caller.
 */
#ifdef SHELL_DEBUG
#define rc_err_oom_die(rc) \
  if( rc==SQLITE_NOMEM ) shell_check_oom(0); \
  else if(!(rc==SQLITE_OK||rc==SQLITE_DONE)) \
    fprintf(stderr,"E:%d\n",rc), assert(0)
#else
static void rc_err_oom_die(int rc){
  if( rc==SQLITE_NOMEM ) shell_check_oom(0);
  assert(rc==SQLITE_OK||rc==SQLITE_DONE);
}
#endif
︙
25841-25847 | 23904-23917

  if( *pDb==0 ){
    if( SQLITE_OK!=sqlite3_open(zCOL_DB, pDb) ) return 0;
#ifdef SHELL_COLFIX_DB
    if(*zCOL_DB!=':')
      sqlite3_exec(*pDb,"drop table if exists ColNames;"
                   "drop view if exists RepeatedNames;",0,0,0);
#endif
    rc = sqlite3_exec(*pDb, zTabMake, 0, 0, 0);
    rc_err_oom_die(rc);
  }
  assert(*pDb!=0);
  rc = sqlite3_prepare_v2(*pDb, zTabFill, -1, &pStmt, 0);
  rc_err_oom_die(rc);
  rc = sqlite3_bind_text(pStmt, 1, zColNew, -1, 0);
︙
25902-25908 | 23964-23977

    sqlite3_finalize(pStmt);
    sqlite3_close(*pDb);
    *pDb = 0;
    return zColsSpec;
  }
}

/*
** If an input line begins with "." then invoke this routine to
** process that line.
**
** Return 1 on error, 2 to exit, and 0 otherwise.
*/
static int do_meta_command(char *zLine, ShellState *p){
︙
26011-26017 | 24017-24031

  n = strlen30(azArg[0]);
  c = azArg[0][0];
  clearTempFile(p);

#ifndef SQLITE_OMIT_AUTHORIZATION
  if( c=='a' && cli_strncmp(azArg[0], "auth", n)==0 ){
    if( nArg!=2 ){
      raw_printf(stderr, "Usage: .auth ON|OFF\n");
      rc = 1;
      goto meta_command_exit;
    }
    open_db(p, 0);
    if( booleanValue(azArg[1]) ){
      sqlite3_set_authorizer(p->db, shellAuth, p);
    }else if( p->bSafeModePersist ){
︙
26058-26064 | 24064-24146

        if( cli_strcmp(z, "-append")==0 ){
          zVfs = "apndvfs";
        }else
        if( cli_strcmp(z, "-async")==0 ){
          bAsync = 1;
        }else
        {
          utf8_printf(stderr, "unknown option: %s\n", azArg[j]);
          return 1;
        }
      }else if( zDestFile==0 ){
        zDestFile = azArg[j];
      }else if( zDb==0 ){
        zDb = zDestFile;
        zDestFile = azArg[j];
      }else{
        raw_printf(stderr, "Usage: .backup ?DB? ?OPTIONS? FILENAME\n");
        return 1;
      }
    }
    if( zDestFile==0 ){
      raw_printf(stderr, "missing FILENAME argument on .backup\n");
      return 1;
    }
    if( zDb==0 ) zDb = "main";
    rc = sqlite3_open_v2(zDestFile, &pDest,
                         SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE, zVfs);
    if( rc!=SQLITE_OK ){
      utf8_printf(stderr, "Error: cannot open \"%s\"\n", zDestFile);
      close_db(pDest);
      return 1;
    }
    if( bAsync ){
      sqlite3_exec(pDest, "PRAGMA synchronous=OFF; PRAGMA journal_mode=OFF;",
                   0, 0, 0);
    }
    open_db(p, 0);
    pBackup = sqlite3_backup_init(pDest, "main", p->db, zDb);
    if( pBackup==0 ){
      utf8_printf(stderr, "Error: %s\n", sqlite3_errmsg(pDest));
      close_db(pDest);
      return 1;
    }
    while( (rc = sqlite3_backup_step(pBackup,100))==SQLITE_OK ){}
    sqlite3_backup_finish(pBackup);
    if( rc==SQLITE_DONE ){
      rc = 0;
    }else{
      utf8_printf(stderr, "Error: %s\n", sqlite3_errmsg(pDest));
      rc = 1;
    }
    close_db(pDest);
  }else
#endif /* !defined(SQLITE_SHELL_FIDDLE) */

  if( c=='b' && n>=3 && cli_strncmp(azArg[0], "bail", n)==0 ){
    if( nArg==2 ){
      bail_on_error = booleanValue(azArg[1]);
    }else{
      raw_printf(stderr, "Usage: .bail on|off\n");
      rc = 1;
    }
  }else

  /* Undocumented. Legacy only. See "crnl" below */
  if( c=='b' && n>=3 && cli_strncmp(azArg[0], "binary", n)==0 ){
    if( nArg==2 ){
      if( booleanValue(azArg[1]) ){
        setBinaryMode(p->out, 1);
      }else{
        setTextMode(p->out, 1);
      }
    }else{
      raw_printf(stderr, "The \".binary\" command is deprecated."
                 " Use \".crnl\" instead.\n");
      raw_printf(stderr, "Usage: .binary on|off\n");
      rc = 1;
    }
  }else

  /* The undocumented ".breakpoint" command causes a call to the no-op
  ** routine named test_breakpoint().
  */
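The ".backup" hunk above drives sqlite3_backup_step() in 100-page chunks until it returns SQLITE_DONE. The same online-backup machinery is exposed by Python's stdlib sqlite3 module, which wraps this C API; a minimal sketch (the table name and inline data are illustrative, not from the diff):

```python
import sqlite3

# Source database with some content to copy.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t(x)")
src.execute("INSERT INTO t VALUES (1), (2)")
src.commit()

# Destination; Connection.backup() wraps sqlite3_backup_init/step/finish
# and, like ".backup", copies pages rather than rows.
dst = sqlite3.connect(":memory:")
src.backup(dst, pages=100)  # 100 pages per step, mirroring the shell's loop

print(dst.execute("SELECT count(*) FROM t").fetchone()[0])
```

Because the copy proceeds page-range by page-range, the source stays readable while the backup runs, which is why the shell uses this API instead of a row-by-row copy.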
︙
26149-26155 | 24156-24320

      wchar_t *z = sqlite3_win32_utf8_to_unicode(azArg[1]);
      rc = !SetCurrentDirectoryW(z);
      sqlite3_free(z);
#else
      rc = chdir(azArg[1]);
#endif
      if( rc ){
        utf8_printf(stderr, "Cannot change to directory \"%s\"\n", azArg[1]);
        rc = 1;
      }
    }else{
      raw_printf(stderr, "Usage: .cd DIRECTORY\n");
      rc = 1;
    }
  }else
#endif /* !defined(SQLITE_SHELL_FIDDLE) */

  if( c=='c' && n>=3 && cli_strncmp(azArg[0], "changes", n)==0 ){
    if( nArg==2 ){
      setOrClearFlag(p, SHFLG_CountChanges, azArg[1]);
    }else{
      raw_printf(stderr, "Usage: .changes on|off\n");
      rc = 1;
    }
  }else

#ifndef SQLITE_SHELL_FIDDLE
  /* Cancel output redirection, if it is currently set (by .testcase)
  ** Then read the content of the testcase-out.txt file and compare against
  ** azArg[1]. If there are differences, report an error and exit.
  */
  if( c=='c' && n>=3 && cli_strncmp(azArg[0], "check", n)==0 ){
    char *zRes = 0;
    output_reset(p);
    if( nArg!=2 ){
      raw_printf(stderr, "Usage: .check GLOB-PATTERN\n");
      rc = 2;
    }else if( (zRes = readFile("testcase-out.txt", 0))==0 ){
      rc = 2;
    }else if( testcase_glob(azArg[1],zRes)==0 ){
      utf8_printf(stderr,
                 "testcase-%s FAILED\n Expected: [%s]\n      Got: [%s]\n",
                 p->zTestcase, azArg[1], zRes);
      rc = 1;
    }else{
      utf8_printf(stdout, "testcase-%s ok\n", p->zTestcase);
      p->nCheck++;
    }
    sqlite3_free(zRes);
  }else
#endif /* !defined(SQLITE_SHELL_FIDDLE) */

#ifndef SQLITE_SHELL_FIDDLE
  if( c=='c' && cli_strncmp(azArg[0], "clone", n)==0 ){
    failIfSafeMode(p, "cannot run .clone in safe mode");
    if( nArg==2 ){
      tryToClone(p, azArg[1]);
    }else{
      raw_printf(stderr, "Usage: .clone FILENAME\n");
      rc = 1;
    }
  }else
#endif /* !defined(SQLITE_SHELL_FIDDLE) */

  if( c=='c' && cli_strncmp(azArg[0], "connection", n)==0 ){
    if( nArg==1 ){
      /* List available connections */
      int i;
      for(i=0; i<ArraySize(p->aAuxDb); i++){
        const char *zFile = p->aAuxDb[i].zDbFilename;
        if( p->aAuxDb[i].db==0 && p->pAuxDb!=&p->aAuxDb[i] ){
          zFile = "(not open)";
        }else if( zFile==0 ){
          zFile = "(memory)";
        }else if( zFile[0]==0 ){
          zFile = "(temporary-file)";
        }
        if( p->pAuxDb == &p->aAuxDb[i] ){
          utf8_printf(stdout, "ACTIVE %d: %s\n", i, zFile);
        }else if( p->aAuxDb[i].db!=0 ){
          utf8_printf(stdout, "       %d: %s\n", i, zFile);
        }
      }
    }else if( nArg==2 && IsDigit(azArg[1][0]) && azArg[1][1]==0 ){
      int i = azArg[1][0] - '0';
      if( p->pAuxDb != &p->aAuxDb[i] && i>=0 && i<ArraySize(p->aAuxDb) ){
        p->pAuxDb->db = p->db;
        p->pAuxDb = &p->aAuxDb[i];
        globalDb = p->db = p->pAuxDb->db;
        p->pAuxDb->db = 0;
      }
    }else if( nArg==3 && cli_strcmp(azArg[1], "close")==0
           && IsDigit(azArg[2][0]) && azArg[2][1]==0 ){
      int i = azArg[2][0] - '0';
      if( i<0 || i>=ArraySize(p->aAuxDb) ){
        /* No-op */
      }else if( p->pAuxDb == &p->aAuxDb[i] ){
        raw_printf(stderr, "cannot close the active database connection\n");
        rc = 1;
      }else if( p->aAuxDb[i].db ){
        session_close_all(p, i);
        close_db(p->aAuxDb[i].db);
        p->aAuxDb[i].db = 0;
      }
    }else{
      raw_printf(stderr, "Usage: .connection [close] [CONNECTION-NUMBER]\n");
      rc = 1;
    }
  }else

  if( c=='c' && n==4 && cli_strncmp(azArg[0], "crnl", n)==0 ){
    if( nArg==2 ){
      if( booleanValue(azArg[1]) ){
        setTextMode(p->out, 1);
      }else{
        setBinaryMode(p->out, 1);
      }
    }else{
#if !defined(_WIN32) && !defined(WIN32)
      raw_printf(stderr, "The \".crnl\" is a no-op on non-Windows machines.\n");
#endif
      raw_printf(stderr, "Usage: .crnl on|off\n");
      rc = 1;
    }
  }else

  if( c=='d' && n>1 && cli_strncmp(azArg[0], "databases", n)==0 ){
    char **azName = 0;
    int nName = 0;
    sqlite3_stmt *pStmt;
    int i;
    open_db(p, 0);
    rc = sqlite3_prepare_v2(p->db, "PRAGMA database_list", -1, &pStmt, 0);
    if( rc ){
      utf8_printf(stderr, "Error: %s\n", sqlite3_errmsg(p->db));
      rc = 1;
    }else{
      while( sqlite3_step(pStmt)==SQLITE_ROW ){
        const char *zSchema = (const char *)sqlite3_column_text(pStmt,1);
        const char *zFile = (const char*)sqlite3_column_text(pStmt,2);
        if( zSchema==0 || zFile==0 ) continue;
        azName = sqlite3_realloc(azName, (nName+1)*2*sizeof(char*));
        shell_check_oom(azName);
        azName[nName*2] = strdup(zSchema);
        azName[nName*2+1] = strdup(zFile);
        nName++;
      }
    }
    sqlite3_finalize(pStmt);
    for(i=0; i<nName; i++){
      int eTxn = sqlite3_txn_state(p->db, azName[i*2]);
      int bRdonly = sqlite3_db_readonly(p->db, azName[i*2]);
      const char *z = azName[i*2+1];
      utf8_printf(p->out, "%s: %s %s%s\n",
         azName[i*2],
         z && z[0] ? z : "\"\"",
         bRdonly ? "r/o" : "r/w",
         eTxn==SQLITE_TXN_NONE ? "" :
         eTxn==SQLITE_TXN_READ ? " read-txn" : " write-txn");
      free(azName[i*2]);
      free(azName[i*2+1]);
    }
    sqlite3_free(azName);
  }else
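The ".databases" hunk builds its listing from PRAGMA database_list (one row per attached schema: seq, name, file) before decorating each entry with read-only and transaction state. The pragma itself is easy to exercise from Python's stdlib sqlite3 module; a minimal sketch (the aux attachment is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS aux")

# Same query the shell prepares: one row per attached schema.
rows = conn.execute("PRAGMA database_list").fetchall()
for seq, name, file in rows:
    # In-memory databases report an empty file path, which the
    # shell renders as "" in its listing.
    print(name, file or '""')
```
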
︙
26336-26342 | 24346-24365

    open_db(p, 0);
    for(ii=0; ii<ArraySize(aDbConfig); ii++){
      if( nArg>1 && cli_strcmp(azArg[1], aDbConfig[ii].zName)!=0 ) continue;
      if( nArg>=3 ){
        sqlite3_db_config(p->db, aDbConfig[ii].op, booleanValue(azArg[2]), 0);
      }
      sqlite3_db_config(p->db, aDbConfig[ii].op, -1, &v);
      utf8_printf(p->out, "%19s %s\n", aDbConfig[ii].zName, v ? "on" : "off");
      if( nArg>1 ) break;
    }
    if( nArg>1 && ii==ArraySize(aDbConfig) ){
      utf8_printf(stderr, "Error: unknown dbconfig \"%s\"\n", azArg[1]);
      utf8_printf(stderr, "Enter \".dbconfig\" with no arguments for a list\n");
    }
  }else

#if SQLITE_SHELL_HAVE_RECOVER
  if( c=='d' && n>=3 && cli_strncmp(azArg[0], "dbinfo", n)==0 ){
    rc = shell_dbinfo_command(p, nArg, azArg);
  }else
︙
26371-26377 | 24381-24414

                 |SHFLG_DumpDataOnly|SHFLG_DumpNoSys);
    for(i=1; i<nArg; i++){
      if( azArg[i][0]=='-' ){
        const char *z = azArg[i]+1;
        if( z[0]=='-' ) z++;
        if( cli_strcmp(z,"preserve-rowids")==0 ){
#ifdef SQLITE_OMIT_VIRTUALTABLE
          raw_printf(stderr, "The --preserve-rowids option is not compatible"
                             " with SQLITE_OMIT_VIRTUALTABLE\n");
          rc = 1;
          sqlite3_free(zLike);
          goto meta_command_exit;
#else
          ShellSetFlag(p, SHFLG_PreserveRowid);
#endif
        }else
        if( cli_strcmp(z,"newlines")==0 ){
          ShellSetFlag(p, SHFLG_Newlines);
        }else
        if( cli_strcmp(z,"data-only")==0 ){
          ShellSetFlag(p, SHFLG_DumpDataOnly);
        }else
        if( cli_strcmp(z,"nosys")==0 ){
          ShellSetFlag(p, SHFLG_DumpNoSys);
        }else
        {
          raw_printf(stderr, "Unknown option \"%s\" on \".dump\"\n", azArg[i]);
          rc = 1;
          sqlite3_free(zLike);
          goto meta_command_exit;
        }
      }else{
        /* azArg[i] contains a LIKE pattern. This ".dump" request should
        ** only dump data for tables for which either the table name matches
︙
26420-26426 | 24430-24449

        zLike = zExpr;
      }
    }
  }

  open_db(p, 0);

  if( (p->shellFlgs & SHFLG_DumpDataOnly)==0 ){
    /* When playing back a "dump", the content might appear in an order
    ** which causes immediate foreign key constraints to be violated.
    ** So disable foreign-key constraint enforcement to prevent problems. */
    raw_printf(p->out, "PRAGMA foreign_keys=OFF;\n");
    raw_printf(p->out, "BEGIN TRANSACTION;\n");
  }
  p->writableSchema = 0;
  p->showHeader = 0;
  /* Set writable_schema=ON since doing so forces SQLite to initialize
  ** as much of the schema as it can even if the sqlite_schema table is
  ** corrupt. */
  sqlite3_exec(p->db, "SAVEPOINT dump; PRAGMA writable_schema=ON", 0, 0, 0);
︙
26449-26455 | 24458-24496

    );
    run_schema_dump_query(p,zSql);
    sqlite3_free(zSql);
    if( (p->shellFlgs & SHFLG_DumpDataOnly)==0 ){
      zSql = sqlite3_mprintf(
        "SELECT sql FROM sqlite_schema AS o "
        "WHERE (%s) AND sql NOT NULL"
        "  AND type IN ('index','trigger','view')",
        zLike
      );
      run_table_dump_query(p, zSql);
      sqlite3_free(zSql);
    }
    sqlite3_free(zLike);
    if( p->writableSchema ){
      raw_printf(p->out, "PRAGMA writable_schema=OFF;\n");
      p->writableSchema = 0;
    }
    sqlite3_exec(p->db, "PRAGMA writable_schema=OFF;", 0, 0, 0);
    sqlite3_exec(p->db, "RELEASE dump;", 0, 0, 0);
    if( (p->shellFlgs & SHFLG_DumpDataOnly)==0 ){
      raw_printf(p->out, p->nErr?"ROLLBACK; -- due to errors\n":"COMMIT;\n");
    }
    p->showHeader = savedShowHeader;
    p->shellFlgs = savedShellFlags;
  }else

  if( c=='e' && cli_strncmp(azArg[0], "echo", n)==0 ){
    if( nArg==2 ){
      setOrClearFlag(p, SHFLG_Echo, azArg[1]);
    }else{
      raw_printf(stderr, "Usage: .echo on|off\n");
      rc = 1;
    }
  }else

  if( c=='e' && cli_strncmp(azArg[0], "eqp", n)==0 ){
    if( nArg==2 ){
      p->autoEQPtest = 0;
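The ".dump" code in these hunks brackets its output with PRAGMA foreign_keys=OFF; BEGIN TRANSACTION; ... COMMIT; so the script replays atomically even when row order would violate immediate foreign-key constraints. Python's stdlib sqlite3 exposes an analogous dump via Connection.iterdump(); a hedged sketch (the tiny schema is illustrative, and iterdump() does not emit the foreign_keys pragma itself, so the replay sets it explicitly):

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.executescript("CREATE TABLE t(a); INSERT INTO t VALUES (42);")

# iterdump() yields a script bracketed by BEGIN TRANSACTION ... COMMIT,
# analogous in spirit to the shell's ".dump" output.
script = "\n".join(src.iterdump())

# Replaying reconstructs schema and data; disable foreign-key
# enforcement first, for the same reason the shell's dump prologue does.
dst = sqlite3.connect(":memory:")
dst.execute("PRAGMA foreign_keys=OFF")
dst.executescript(script)
print(dst.execute("SELECT a FROM t").fetchone()[0])
```
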
︙
26505-26511 | 24513-24527

        sqlite3_exec(p->db, "SELECT name FROM sqlite_schema LIMIT 1", 0, 0, 0);
        sqlite3_exec(p->db, "PRAGMA vdbe_trace=ON;", 0, 0, 0);
#endif
      }else{
        p->autoEQP = (u8)booleanValue(azArg[1]);
      }
    }else{
      raw_printf(stderr, "Usage: .eqp off|on|trace|trigger|full\n");
      rc = 1;
    }
  }else

#ifndef SQLITE_SHELL_FIDDLE
  if( c=='e' && cli_strncmp(azArg[0], "exit", n)==0 ){
    if( nArg>1 && (rc = (int)integerValue(azArg[1]))!=0 ) exit(rc);
︙
26544-26550 | 24552-24568

      p->autoExplain = 1;
    }
  }else

#ifndef SQLITE_OMIT_VIRTUALTABLE
  if( c=='e' && cli_strncmp(azArg[0], "expert", n)==0 ){
    if( p->bSafeMode ){
      raw_printf(stderr,
        "Cannot run experimental commands such as \"%s\" in safe mode\n",
        azArg[0]);
      rc = 1;
    }else{
      open_db(p, 0);
      expertDotCommand(p, azArg, nArg);
    }
  }else
#endif
︙
26601-26607 | 24610-24651

    if( zCmd[0]=='-' && zCmd[1] ){
      zCmd++;
      if( zCmd[0]=='-' && zCmd[1] ) zCmd++;
    }

    /* --help lists all file-controls */
    if( cli_strcmp(zCmd,"help")==0 ){
      utf8_printf(p->out, "Available file-controls:\n");
      for(i=0; i<ArraySize(aCtrl); i++){
        utf8_printf(p->out, "  .filectrl %s %s\n",
                    aCtrl[i].zCtrlName, aCtrl[i].zUsage);
      }
      rc = 1;
      goto meta_command_exit;
    }

    /* convert filectrl text option to value. allow any unique prefix
    ** of the option name, or a numerical value. */
    n2 = strlen30(zCmd);
    for(i=0; i<ArraySize(aCtrl); i++){
      if( cli_strncmp(zCmd, aCtrl[i].zCtrlName, n2)==0 ){
        if( filectrl<0 ){
          filectrl = aCtrl[i].ctrlCode;
          iCtrl = i;
        }else{
          utf8_printf(stderr, "Error: ambiguous file-control: \"%s\"\n"
                              "Use \".filectrl --help\" for help\n", zCmd);
          rc = 1;
          goto meta_command_exit;
        }
      }
    }
    if( filectrl<0 ){
      utf8_printf(stderr,"Error: unknown file-control: %s\n"
                         "Use \".filectrl --help\" for help\n", zCmd);
    }else{
      switch(filectrl){
        case SQLITE_FCNTL_SIZE_LIMIT: {
          if( nArg!=2 && nArg!=3 ) break;
          iRes = nArg==3 ? integerValue(azArg[2]) : -1;
          sqlite3_file_control(p->db, zSchema, SQLITE_FCNTL_SIZE_LIMIT, &iRes);
          isOk = 1;
︙
26670-26676 | 24680-24735

          break;
        }
        case SQLITE_FCNTL_TEMPFILENAME: {
          char *z = 0;
          if( nArg!=2 ) break;
          sqlite3_file_control(p->db, zSchema, filectrl, &z);
          if( z ){
            utf8_printf(p->out, "%s\n", z);
            sqlite3_free(z);
          }
          isOk = 2;
          break;
        }
        case SQLITE_FCNTL_RESERVE_BYTES: {
          int x;
          if( nArg>=3 ){
            x = atoi(azArg[2]);
            sqlite3_file_control(p->db, zSchema, filectrl, &x);
          }
          x = -1;
          sqlite3_file_control(p->db, zSchema, filectrl, &x);
          utf8_printf(p->out,"%d\n", x);
          isOk = 2;
          break;
        }
      }
    }
    if( isOk==0 && iCtrl>=0 ){
      utf8_printf(p->out, "Usage: .filectrl %s %s\n", zCmd,aCtrl[iCtrl].zUsage);
      rc = 1;
    }else if( isOk==1 ){
      char zBuf[100];
      sqlite3_snprintf(sizeof(zBuf), zBuf, "%lld", iRes);
      raw_printf(p->out, "%s\n", zBuf);
    }
  }else

  if( c=='f' && cli_strncmp(azArg[0], "fullschema", n)==0 ){
    ShellState data;
    int doStats = 0;
    memcpy(&data, p, sizeof(data));
    data.showHeader = 0;
    data.cMode = data.mode = MODE_Semi;
    if( nArg==2 && optionMatch(azArg[1], "indent") ){
      data.cMode = data.mode = MODE_Pretty;
      nArg = 1;
    }
    if( nArg!=1 ){
      raw_printf(stderr, "Usage: .fullschema ?--indent?\n");
      rc = 1;
      goto meta_command_exit;
    }
    open_db(p, 0);
    rc = sqlite3_exec(p->db,
       "SELECT sql FROM"
       "  (SELECT sql sql, type type, tbl_name tbl_name, name name, rowid x"
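The ".fullschema" hunk above prints every CREATE statement from the schema table, ordering tables ahead of the indexes, triggers, and views that depend on them. The same ordering idea can be reproduced with a plain schema-table query; a hedged Python sketch (this is a simplification of the shell's query, using the legacy sqlite_master alias for portability to older SQLite builds):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("CREATE TABLE t(a); CREATE INDEX i ON t(a);")

# Emit CREATE statements with tables first, as ".fullschema" does;
# sqlite_master is the pre-3.33 name for sqlite_schema.
rows = conn.execute(
    "SELECT sql FROM sqlite_master"
    " WHERE sql NOT NULL"
    " ORDER BY type='table' DESC, rowid"
).fetchall()
for (sql,) in rows:
    print(sql + ";")
```
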
︙
26737-26743 | 24747-24806

        -1, &pStmt, 0);
      if( rc==SQLITE_OK ){
        doStats = sqlite3_step(pStmt)==SQLITE_ROW;
        sqlite3_finalize(pStmt);
      }
    }
    if( doStats==0 ){
      raw_printf(p->out, "/* No STAT tables available */\n");
    }else{
      raw_printf(p->out, "ANALYZE sqlite_schema;\n");
      data.cMode = data.mode = MODE_Insert;
      data.zDestTable = "sqlite_stat1";
      shell_exec(&data, "SELECT * FROM sqlite_stat1", 0);
      data.zDestTable = "sqlite_stat4";
      shell_exec(&data, "SELECT * FROM sqlite_stat4", 0);
      raw_printf(p->out, "ANALYZE sqlite_schema;\n");
    }
  }else

  if( c=='h' && cli_strncmp(azArg[0], "headers", n)==0 ){
    if( nArg==2 ){
      p->showHeader = booleanValue(azArg[1]);
      p->shellFlgs |= SHFLG_HeaderSet;
    }else{
      raw_printf(stderr, "Usage: .headers on|off\n");
      rc = 1;
    }
  }else

  if( c=='h' && cli_strncmp(azArg[0], "help", n)==0 ){
    if( nArg>=2 ){
      n = showHelp(p->out, azArg[1]);
      if( n==0 ){
        utf8_printf(p->out, "Nothing matches '%s'\n", azArg[1]);
      }
    }else{
      showHelp(p->out, 0);
    }
  }else

#ifndef SQLITE_SHELL_FIDDLE
  if( c=='i' && cli_strncmp(azArg[0], "import", n)==0 ){
    char *zTable = 0;           /* Insert data into this table */
    char *zSchema = 0;          /* within this schema (may default to "main") */
    char *zFile = 0;            /* Name of file to extract content from */
    sqlite3_stmt *pStmt = NULL; /* A statement */
    int nCol;                   /* Number of columns in the table */
    int nByte;                  /* Number of bytes in an SQL string */
    int i, j;                   /* Loop counters */
    int needCommit;             /* True to COMMIT or ROLLBACK at end */
    int nSep;                   /* Number of bytes in p->colSeparator[] */
    char *zSql;                 /* An SQL statement */
    char *zFullTabName;         /* Table name with schema if applicable */
    ImportCtx sCtx;             /* Reader context */
    char *(SQLITE_CDECL *xRead)(ImportCtx*); /* Func to read one value */
    int eVerbose = 0;           /* Larger for more console output */
    int nSkip = 0;              /* Initial lines to skip */
    int useOutputMode = 1;      /* Use output mode to determine separators */
    char *zCreate = 0;          /* CREATE TABLE statement text */
︙
26806-26812 | 24817-25029

      if( z[0]=='-' && z[1]=='-' ) z++;
      if( z[0]!='-' ){
        if( zFile==0 ){
          zFile = z;
        }else if( zTable==0 ){
          zTable = z;
        }else{
          utf8_printf(p->out, "ERROR: extra argument: \"%s\".  Usage:\n", z);
          showHelp(p->out, "import");
          goto meta_command_exit;
        }
      }else if( cli_strcmp(z,"-v")==0 ){
        eVerbose++;
      }else if( cli_strcmp(z,"-schema")==0 && i<nArg-1 ){
        zSchema = azArg[++i];
      }else if( cli_strcmp(z,"-skip")==0 && i<nArg-1 ){
        nSkip = integerValue(azArg[++i]);
      }else if( cli_strcmp(z,"-ascii")==0 ){
        sCtx.cColSep = SEP_Unit[0];
        sCtx.cRowSep = SEP_Record[0];
        xRead = ascii_read_one_field;
        useOutputMode = 0;
      }else if( cli_strcmp(z,"-csv")==0 ){
        sCtx.cColSep = ',';
        sCtx.cRowSep = '\n';
        xRead = csv_read_one_field;
        useOutputMode = 0;
      }else{
        utf8_printf(p->out, "ERROR: unknown option: \"%s\".  Usage:\n", z);
        showHelp(p->out, "import");
        goto meta_command_exit;
      }
    }
    if( zTable==0 ){
      utf8_printf(p->out, "ERROR: missing %s argument. Usage:\n",
                  zFile==0 ? "FILE" : "TABLE");
      showHelp(p->out, "import");
      goto meta_command_exit;
    }
    seenInterrupt = 0;
    open_db(p, 0);
    if( useOutputMode ){
      /* If neither the --csv or --ascii options are specified, then set
      ** the column and row separator characters from the output mode. */
      nSep = strlen30(p->colSeparator);
      if( nSep==0 ){
        raw_printf(stderr,
                   "Error: non-null column separator required for import\n");
        goto meta_command_exit;
      }
      if( nSep>1 ){
        raw_printf(stderr,
              "Error: multi-character column separators not allowed"
              " for import\n");
        goto meta_command_exit;
      }
      nSep = strlen30(p->rowSeparator);
      if( nSep==0 ){
        raw_printf(stderr,
            "Error: non-null row separator required for import\n");
        goto meta_command_exit;
      }
      if( nSep==2 && p->mode==MODE_Csv
       && cli_strcmp(p->rowSeparator,SEP_CrLf)==0
      ){
        /* When importing CSV (only), if the row separator is set to the
        ** default output row separator, change it to the default input
        ** row separator.  This avoids having to maintain different input
        ** and output row separators. */
        sqlite3_snprintf(sizeof(p->rowSeparator), p->rowSeparator, SEP_Row);
        nSep = strlen30(p->rowSeparator);
      }
      if( nSep>1 ){
        raw_printf(stderr, "Error: multi-character row separators not allowed"
                           " for import\n");
        goto meta_command_exit;
      }
      sCtx.cColSep = (u8)p->colSeparator[0];
      sCtx.cRowSep = (u8)p->rowSeparator[0];
    }
    sCtx.zFile = zFile;
    sCtx.nLine = 1;
    if( sCtx.zFile[0]=='|' ){
#ifdef SQLITE_OMIT_POPEN
      raw_printf(stderr, "Error: pipes are not supported in this OS\n");
      goto meta_command_exit;
#else
      sCtx.in = popen(sCtx.zFile+1, "r");
      sCtx.zFile = "<pipe>";
      sCtx.xCloser = pclose;
#endif
    }else{
      sCtx.in = fopen(sCtx.zFile, "rb");
      sCtx.xCloser = fclose;
    }
    if( sCtx.in==0 ){
      utf8_printf(stderr, "Error: cannot open \"%s\"\n", zFile);
      goto meta_command_exit;
    }
    if( eVerbose>=2 || (eVerbose>=1 && useOutputMode) ){
      char zSep[2];
      zSep[1] = 0;
      zSep[0] = sCtx.cColSep;
      utf8_printf(p->out, "Column separator ");
      output_c_string(p->out, zSep);
      utf8_printf(p->out, ", row separator ");
      zSep[0] = sCtx.cRowSep;
      output_c_string(p->out, zSep);
      utf8_printf(p->out, "\n");
    }
    sCtx.z = sqlite3_malloc64(120);
    if( sCtx.z==0 ){
      import_cleanup(&sCtx);
      shell_out_of_memory();
    }
    /* Below, resources must be freed before exit. */
    while( (nSkip--)>0 ){
      while( xRead(&sCtx) && sCtx.cTerm==sCtx.cColSep ){}
    }
    if( zSchema!=0 ){
      zFullTabName = sqlite3_mprintf("\"%w\".\"%w\"", zSchema, zTable);
    }else{
      zFullTabName = sqlite3_mprintf("\"%w\"", zTable);
    }
    zSql = sqlite3_mprintf("SELECT * FROM %s", zFullTabName);
    if( zSql==0 || zFullTabName==0 ){
      import_cleanup(&sCtx);
      shell_out_of_memory();
    }
    nByte = strlen30(zSql);
    rc = sqlite3_prepare_v2(p->db, zSql, -1, &pStmt, 0);
    import_append_char(&sCtx, 0);    /* To ensure sCtx.z is allocated */
    if( rc && sqlite3_strglob("no such table: *", sqlite3_errmsg(p->db))==0 ){
      sqlite3 *dbCols = 0;
      char *zRenames = 0;
      char *zColDefs;
      zCreate = sqlite3_mprintf("CREATE TABLE %s", zFullTabName);
      while( xRead(&sCtx) ){
        zAutoColumn(sCtx.z, &dbCols, 0);
        if( sCtx.cTerm!=sCtx.cColSep ) break;
      }
      zColDefs = zAutoColumn(0, &dbCols, &zRenames);
      if( zRenames!=0 ){
        utf8_printf((stdin_is_interactive && p->in==stdin)? p->out : stderr,
                    "Columns renamed during .import %s due to duplicates:\n"
                    "%s\n", sCtx.zFile, zRenames);
        sqlite3_free(zRenames);
      }
      assert(dbCols==0);
      if( zColDefs==0 ){
        utf8_printf(stderr,"%s: empty file\n", sCtx.zFile);
      import_fail:
        sqlite3_free(zCreate);
        sqlite3_free(zSql);
        sqlite3_free(zFullTabName);
        import_cleanup(&sCtx);
        rc = 1;
        goto meta_command_exit;
      }
      zCreate = sqlite3_mprintf("%z%z\n", zCreate, zColDefs);
      if( eVerbose>=1 ){
        utf8_printf(p->out, "%s\n", zCreate);
      }
      rc = sqlite3_exec(p->db, zCreate, 0, 0, 0);
      if( rc ){
        utf8_printf(stderr, "%s failed:\n%s\n", zCreate, sqlite3_errmsg(p->db));
        goto import_fail;
      }
      sqlite3_free(zCreate);
      zCreate = 0;
      rc = sqlite3_prepare_v2(p->db, zSql, -1, &pStmt, 0);
    }
    if( rc ){
      if (pStmt) sqlite3_finalize(pStmt);
      utf8_printf(stderr,"Error: %s\n", sqlite3_errmsg(p->db));
      goto import_fail;
    }
    sqlite3_free(zSql);
    nCol = sqlite3_column_count(pStmt);
    sqlite3_finalize(pStmt);
    pStmt = 0;
    if( nCol==0 ) return 0; /* no columns, no error */
    zSql = sqlite3_malloc64( nByte*2 + 20 + nCol*2 );
    if( zSql==0 ){
      import_cleanup(&sCtx);
      shell_out_of_memory();
    }
    sqlite3_snprintf(nByte+20, zSql, "INSERT INTO %s VALUES(?", zFullTabName);
    j = strlen30(zSql);
    for(i=1; i<nCol; i++){
      zSql[j++] = ',';
      zSql[j++] = '?';
    }
    zSql[j++] = ')';
    zSql[j] = 0;
    if( eVerbose>=2 ){
      utf8_printf(p->out, "Insert using: %s\n", zSql);
    }
    rc = sqlite3_prepare_v2(p->db, zSql, -1, &pStmt, 0);
    if( rc ){
      utf8_printf(stderr, "Error: %s\n", sqlite3_errmsg(p->db));
      if (pStmt) sqlite3_finalize(pStmt);
      goto import_fail;
    }
    sqlite3_free(zSql);
    sqlite3_free(zFullTabName);
    needCommit = sqlite3_get_autocommit(p->db);
    if( needCommit ) sqlite3_exec(p->db, "BEGIN", 0, 0, 0);
    do{
      int startLine = sCtx.nLine;
      for(i=0; i<nCol; i++){
        char *z = xRead(&sCtx);
        /*
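The ".import" hunk above reads delimited fields, creates the target table from the first row when the table does not yet exist, and then binds each subsequent row to a generated INSERT INTO ... VALUES(?,...,?) statement inside one transaction. A hedged Python sketch of the same flow using the stdlib csv and sqlite3 modules (the table name, column names, and inlined file contents are illustrative, and this skips the shell's renaming and error-recovery logic):

```python
import csv
import io
import sqlite3

# Stand-in for the input file; ".import" would read this from disk or a pipe.
data = io.StringIO("name,qty\napple,3\npear,5\n")
rows = list(csv.reader(data))
header, body = rows[0], rows[1:]

conn = sqlite3.connect(":memory:")
# First row supplies the column names, as ".import" does when the
# target table is missing.
conn.execute("CREATE TABLE items(%s)" % ",".join('"%s"' % c for c in header))

# One "?" placeholder per column, mirroring the INSERT the shell builds.
placeholders = ",".join("?" * len(header))
conn.executemany("INSERT INTO items VALUES(%s)" % placeholders, body)
conn.commit()

print(conn.execute("SELECT count(*) FROM items").fetchone()[0])
```

As in the shell, values arrive as text; any typing beyond that is left to column affinity.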
︙
27042 27043 27044 27045 27046 27047 27048 | ** (If there are too few fields, it's not valid CSV anyway.) */ if( z==0 && (xRead==csv_read_one_field) && i==nCol-1 && i>0 ){ z = ""; } sqlite3_bind_text(pStmt, i+1, z, -1, SQLITE_TRANSIENT); if( i<nCol-1 && sCtx.cTerm!=sCtx.cColSep ){ | | | | | > | | | > | | | | | | | 25043 25044 25045 25046 25047 25048 25049 25050 25051 25052 25053 25054 25055 25056 25057 25058 25059 25060 25061 25062 25063 25064 25065 25066 25067 25068 25069 25070 25071 25072 25073 25074 25075 25076 25077 25078 25079 25080 25081 25082 25083 25084 25085 25086 25087 25088 25089 25090 25091 25092 25093 25094 25095 25096 25097 25098 25099 25100 25101 25102 25103 25104 25105 25106 25107 25108 25109 25110 25111 25112 25113 25114 | ** (If there are too few fields, it's not valid CSV anyway.) */ if( z==0 && (xRead==csv_read_one_field) && i==nCol-1 && i>0 ){ z = ""; } sqlite3_bind_text(pStmt, i+1, z, -1, SQLITE_TRANSIENT); if( i<nCol-1 && sCtx.cTerm!=sCtx.cColSep ){ utf8_printf(stderr, "%s:%d: expected %d columns but found %d - " "filling the rest with NULL\n", sCtx.zFile, startLine, nCol, i+1); i += 2; while( i<=nCol ){ sqlite3_bind_null(pStmt, i); i++; } } } if( sCtx.cTerm==sCtx.cColSep ){ do{ xRead(&sCtx); i++; }while( sCtx.cTerm==sCtx.cColSep ); utf8_printf(stderr, "%s:%d: expected %d columns but found %d - " "extras ignored\n", sCtx.zFile, startLine, nCol, i); } if( i>=nCol ){ sqlite3_step(pStmt); rc = sqlite3_reset(pStmt); if( rc!=SQLITE_OK ){ utf8_printf(stderr, "%s:%d: INSERT failed: %s\n", sCtx.zFile, startLine, sqlite3_errmsg(p->db)); sCtx.nErr++; }else{ sCtx.nRow++; } } }while( sCtx.cTerm!=EOF ); import_cleanup(&sCtx); sqlite3_finalize(pStmt); if( needCommit ) sqlite3_exec(p->db, "COMMIT", 0, 0, 0); if( eVerbose>0 ){ utf8_printf(p->out, "Added %d rows with %d errors using %d lines of input\n", sCtx.nRow, sCtx.nErr, sCtx.nLine-1); } }else #endif /* !defined(SQLITE_SHELL_FIDDLE) */ #ifndef SQLITE_UNTESTABLE if( c=='i' && cli_strncmp(azArg[0], 
"imposter", n)==0 ){ char *zSql; char *zCollist = 0; sqlite3_stmt *pStmt; int tnum = 0; int isWO = 0; /* True if making an imposter of a WITHOUT ROWID table */ int lenPK = 0; /* Length of the PRIMARY KEY string for isWO tables */ int i; if( !ShellHasFlag(p,SHFLG_TestingMode) ){ utf8_printf(stderr, ".%s unavailable without --unsafe-testing\n", "imposter"); rc = 1; goto meta_command_exit; } if( !(nArg==3 || (nArg==2 && sqlite3_stricmp(azArg[1],"off")==0)) ){ utf8_printf(stderr, "Usage: .imposter INDEX IMPOSTER\n" " .imposter off\n"); /* Also allowed, but not documented: ** ** .imposter TABLE IMPOSTER ** ** where TABLE is a WITHOUT ROWID table. In that case, the ** imposter is another WITHOUT ROWID table with the columns in ** storage order. */ |
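The `.import` hunk above pads rows that are shorter than the target table with NULL and ignores extra trailing fields, warning in both cases. A minimal Python sketch of that same row-handling policy, using the standard-library `csv` and `sqlite3` modules (the helper name `import_csv` is illustrative, not shell API):

```python
import csv
import io
import sqlite3

def import_csv(con, table, text, ncol):
    # Mirrors the shell's .import behavior: short rows are filled with
    # NULL, rows with too many fields have the extras ignored.
    cur = con.cursor()
    ins = "INSERT INTO %s VALUES(%s)" % (table, ",".join("?" * ncol))
    for row in csv.reader(io.StringIO(text)):
        if len(row) < ncol:
            row = row + [None] * (ncol - len(row))  # fill the rest with NULL
        elif len(row) > ncol:
            row = row[:ncol]                        # extras ignored
        cur.execute(ins, row)
    con.commit()

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(a,b,c)")
import_csv(con, "t", "1,2,3\n4,5\n6,7,8,9\n", 3)
print(con.execute("SELECT * FROM t").fetchall())
```

Note that, as in the shell, all imported values arrive as text; only the column-count handling is being modeled here, not the warnings the shell prints to stderr.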
︙

      zCollist = sqlite3_mprintf("\"%w\"", zCol);
    }else{
      zCollist = sqlite3_mprintf("%z,\"%w\"", zCollist, zCol);
    }
  }
  sqlite3_finalize(pStmt);
  if( i==0 || tnum==0 ){
    utf8_printf(stderr, "no such index: \"%s\"\n", azArg[1]);
    rc = 1;
    sqlite3_free(zCollist);
    goto meta_command_exit;
  }
  if( lenPK==0 ) lenPK = 100000;
  zSql = sqlite3_mprintf(
        "CREATE TABLE \"%w\"(%s,PRIMARY KEY(%.*s))WITHOUT ROWID",
        azArg[2], zCollist, lenPK, zCollist);
  sqlite3_free(zCollist);
  rc = sqlite3_test_control(SQLITE_TESTCTRL_IMPOSTER, p->db, "main", 1, tnum);
  if( rc==SQLITE_OK ){
    rc = sqlite3_exec(p->db, zSql, 0, 0, 0);
    sqlite3_test_control(SQLITE_TESTCTRL_IMPOSTER, p->db, "main", 0, 0);
    if( rc ){
      utf8_printf(stderr, "Error in [%s]: %s\n", zSql, sqlite3_errmsg(p->db));
    }else{
      utf8_printf(stdout, "%s;\n", zSql);
      raw_printf(stdout,
        "WARNING: writing to an imposter table will corrupt the \"%s\" %s!\n",
        azArg[1], isWO ? "table" : "index"
      );
    }
  }else{
    raw_printf(stderr, "SQLITE_TESTCTRL_IMPOSTER returns %d\n", rc);
    rc = 1;
  }
  sqlite3_free(zSql);
}else
#endif /* !defined(SQLITE_OMIT_TEST_CONTROL) */

#ifdef SQLITE_ENABLE_IOTRACE
  if( c=='i' && cli_strncmp(azArg[0], "iotrace", n)==0 ){
    SQLITE_API extern void (SQLITE_CDECL *sqlite3IoTrace)(const char*, ...);
    if( iotrace && iotrace!=stdout ) fclose(iotrace);
    iotrace = 0;
    if( nArg<2 ){
      sqlite3IoTrace = 0;
    }else if( cli_strcmp(azArg[1], "-")==0 ){
      sqlite3IoTrace = iotracePrintf;
      iotrace = stdout;
    }else{
      iotrace = fopen(azArg[1], "w");
      if( iotrace==0 ){
        utf8_printf(stderr, "Error: cannot open \"%s\"\n", azArg[1]);
        sqlite3IoTrace = 0;
        rc = 1;
      }else{
        sqlite3IoTrace = iotracePrintf;
      }
    }
  }else
︙

      { "trigger_depth",        SQLITE_LIMIT_TRIGGER_DEPTH        },
      { "worker_threads",       SQLITE_LIMIT_WORKER_THREADS       },
    };
    int i, n2;
    open_db(p, 0);
    if( nArg==1 ){
      for(i=0; i<ArraySize(aLimit); i++){
        printf("%20s %d\n", aLimit[i].zLimitName,
               sqlite3_limit(p->db, aLimit[i].limitCode, -1));
      }
    }else if( nArg>3 ){
      raw_printf(stderr, "Usage: .limit NAME ?NEW-VALUE?\n");
      rc = 1;
      goto meta_command_exit;
    }else{
      int iLimit = -1;
      n2 = strlen30(azArg[1]);
      for(i=0; i<ArraySize(aLimit); i++){
        if( sqlite3_strnicmp(aLimit[i].zLimitName, azArg[1], n2)==0 ){
          if( iLimit<0 ){
            iLimit = i;
          }else{
            utf8_printf(stderr, "ambiguous limit: \"%s\"\n", azArg[1]);
            rc = 1;
            goto meta_command_exit;
          }
        }
      }
      if( iLimit<0 ){
        utf8_printf(stderr, "unknown limit: \"%s\"\n"
                        "enter \".limits\" with no arguments for a list.\n",
                        azArg[1]);
        rc = 1;
        goto meta_command_exit;
      }
      if( nArg==3 ){
        sqlite3_limit(p->db, aLimit[iLimit].limitCode,
                      (int)integerValue(azArg[2]));
      }
      printf("%20s %d\n", aLimit[iLimit].zLimitName,
             sqlite3_limit(p->db, aLimit[iLimit].limitCode, -1));
    }
  }else

  if( c=='l' && n>2 && cli_strncmp(azArg[0], "lint", n)==0 ){
    open_db(p, 0);
    lintDotCommand(p, azArg, nArg);
  }else

#if !defined(SQLITE_OMIT_LOAD_EXTENSION) && !defined(SQLITE_SHELL_FIDDLE)
  if( c=='l' && cli_strncmp(azArg[0], "load", n)==0 ){
    const char *zFile, *zProc;
    char *zErrMsg = 0;
    failIfSafeMode(p, "cannot run .load in safe mode");
    if( nArg<2 || azArg[1][0]==0 ){
      /* Must have a non-empty FILE. (Will not load self.) */
      raw_printf(stderr, "Usage: .load FILE ?ENTRYPOINT?\n");
      rc = 1;
      goto meta_command_exit;
    }
    zFile = azArg[1];
    zProc = nArg>=3 ? azArg[2] : 0;
    open_db(p, 0);
    rc = sqlite3_load_extension(p->db, zFile, zProc, &zErrMsg);
    if( rc!=SQLITE_OK ){
      utf8_printf(stderr, "Error: %s\n", zErrMsg);
      sqlite3_free(zErrMsg);
      rc = 1;
    }
  }else
#endif

  if( c=='l' && cli_strncmp(azArg[0], "log", n)==0 ){
    if( nArg!=2 ){
      raw_printf(stderr, "Usage: .log FILENAME\n");
      rc = 1;
    }else{
      const char *zFile = azArg[1];
      if( p->bSafeMode
       && cli_strcmp(zFile,"on")!=0
       && cli_strcmp(zFile,"off")!=0
      ){
        raw_printf(stdout, "cannot set .log to anything other "
                   "than \"on\" or \"off\"\n");
        zFile = "off";
      }
      output_file_close(p->pLog);
      if( cli_strcmp(zFile,"on")==0 ) zFile = "stdout";
      p->pLog = output_file_open(zFile, 0);
    }
  }else
︙

        ColModeOpts cmo = ColModeOpts_default_qbox;
        zMode = "box";
        cmOpts = cmo;
      }
    }else if( zTabname==0 ){
      zTabname = z;
    }else if( z[0]=='-' ){
      utf8_printf(stderr, "unknown option: %s\n", z);
      utf8_printf(stderr, "options:\n"
                          "  --noquote\n"
                          "  --quote\n"
                          "  --wordwrap on/off\n"
                          "  --wrap N\n"
                          "  --ww\n");
      rc = 1;
      goto meta_command_exit;
    }else{
      utf8_printf(stderr, "extra argument: \"%s\"\n", z);
      rc = 1;
      goto meta_command_exit;
    }
  }
  if( zMode==0 ){
    if( p->mode==MODE_Column
     || (p->mode>=MODE_Markdown && p->mode<=MODE_Box)
    ){
      raw_printf(p->out,
          "current output mode: %s --wrap %d --wordwrap %s --%squote\n",
          modeDescr[p->mode], p->cmOpts.iWrap,
          p->cmOpts.bWordWrap ? "on" : "off",
          p->cmOpts.bQuote ? "" : "no");
    }else{
      raw_printf(p->out, "current output mode: %s\n", modeDescr[p->mode]);
    }
    zMode = modeDescr[p->mode];
  }
  n2 = strlen30(zMode);
  if( cli_strncmp(zMode,"lines",n2)==0 ){
    p->mode = MODE_Line;
    sqlite3_snprintf(sizeof(p->rowSeparator), p->rowSeparator, SEP_Row);
︙

  }else if( cli_strncmp(zMode,"count",n2)==0 ){
    p->mode = MODE_Count;
  }else if( cli_strncmp(zMode,"off",n2)==0 ){
    p->mode = MODE_Off;
  }else if( cli_strncmp(zMode,"json",n2)==0 ){
    p->mode = MODE_Json;
  }else{
    raw_printf(stderr, "Error: mode should be one of: "
       "ascii box column csv html insert json line list markdown "
       "qbox quote table tabs tcl\n");
    rc = 1;
  }
  p->cMode = p->mode;
}else

#ifndef SQLITE_SHELL_FIDDLE
if( c=='n' && cli_strcmp(azArg[0], "nonce")==0 ){
  if( nArg!=2 ){
    raw_printf(stderr, "Usage: .nonce NONCE\n");
    rc = 1;
  }else if( p->zNonce==0 || cli_strcmp(azArg[1],p->zNonce)!=0 ){
    raw_printf(stderr, "line %d: incorrect nonce: \"%s\"\n",
               p->lineno, azArg[1]);
    exit(1);
  }else{
    p->bSafeMode = 0;
    return 0;  /* Return immediately to bypass the safe mode reset
               ** at the end of this procedure */
  }
}else
#endif /* !defined(SQLITE_SHELL_FIDDLE) */

if( c=='n' && cli_strncmp(azArg[0], "nullvalue", n)==0 ){
  if( nArg==2 ){
    sqlite3_snprintf(sizeof(p->nullValue), p->nullValue,
                     "%.*s", (int)ArraySize(p->nullValue)-1, azArg[1]);
  }else{
    raw_printf(stderr, "Usage: .nullvalue STRING\n");
    rc = 1;
  }
}else

if( c=='o' && cli_strncmp(azArg[0], "open", n)==0 && n>=2 ){
  const char *zFN = 0;     /* Pointer to constant filename */
  char *zNewFilename = 0;  /* Name of the database file to open */
︙

      openMode = SHELL_OPEN_HEXDB;
    }else if( optionMatch(z, "maxsize") && iName+1<nArg ){
      p->szMax = integerValue(azArg[++iName]);
#endif /* SQLITE_OMIT_DESERIALIZE */
    }else
#endif /* !SQLITE_SHELL_FIDDLE */
    if( z[0]=='-' ){
      utf8_printf(stderr, "unknown option: %s\n", z);
      rc = 1;
      goto meta_command_exit;
    }else if( zFN ){
      utf8_printf(stderr, "extra argument: \"%s\"\n", z);
      rc = 1;
      goto meta_command_exit;
    }else{
      zFN = z;
    }
  }
︙

      shell_check_oom(zNewFilename);
    }else{
      zNewFilename = 0;
    }
    p->pAuxDb->zDbFilename = zNewFilename;
    open_db(p, OPEN_DB_KEEPALIVE);
    if( p->db==0 ){
      utf8_printf(stderr, "Error: cannot open '%s'\n", zNewFilename);
      sqlite3_free(zNewFilename);
    }else{
      p->pAuxDb->zFreeOnClose = zNewFilename;
    }
  }
  if( p->db==0 ){
    /* As a fall-back open a TEMP database */
︙

   || (c=='e' && n==5 && cli_strcmp(azArg[0],"excel")==0)
  ){
    char *zFile = 0;
    int bTxtMode = 0;
    int i;
    int eMode = 0;
    int bOnce = 0;            /* 0: .output, 1: .once, 2: .excel */
    unsigned char zBOM[4];    /* Byte-order mark to use if --bom is present */
    zBOM[0] = 0;
    failIfSafeMode(p, "cannot run .%s in safe mode", azArg[0]);
    if( c=='e' ){
      eMode = 'x';
      bOnce = 2;
    }else if( cli_strncmp(azArg[0],"once",n)==0 ){
      bOnce = 1;
    }
    for(i=1; i<nArg; i++){
      char *z = azArg[i];
      if( z[0]=='-' ){
        if( z[1]=='-' ) z++;
        if( cli_strcmp(z,"-bom")==0 ){
          zBOM[0] = 0xef;
          zBOM[1] = 0xbb;
          zBOM[2] = 0xbf;
          zBOM[3] = 0;
        }else if( c!='e' && cli_strcmp(z,"-x")==0 ){
          eMode = 'x';  /* spreadsheet */
        }else if( c!='e' && cli_strcmp(z,"-e")==0 ){
          eMode = 'e';  /* text editor */
        }else{
          utf8_printf(p->out, "ERROR: unknown option: \"%s\".  Usage:\n",
                      azArg[i]);
          showHelp(p->out, azArg[0]);
          rc = 1;
          goto meta_command_exit;
        }
      }else if( zFile==0 && eMode!='e' && eMode!='x' ){
        zFile = sqlite3_mprintf("%s", z);
        if( zFile && zFile[0]=='|' ){
          while( i+1<nArg ) zFile = sqlite3_mprintf("%z %s", zFile, azArg[++i]);
          break;
        }
      }else{
        utf8_printf(p->out, "ERROR: extra parameter: \"%s\".  Usage:\n",
                    azArg[i]);
        showHelp(p->out, azArg[0]);
        rc = 1;
        sqlite3_free(zFile);
        goto meta_command_exit;
      }
    }
    if( zFile==0 ){
︙

    sqlite3_free(zFile);
    zFile = sqlite3_mprintf("%s", p->zTempFile);
  }
#endif /* SQLITE_NOHAVE_SYSTEM */
  shell_check_oom(zFile);
  if( zFile[0]=='|' ){
#ifdef SQLITE_OMIT_POPEN
    raw_printf(stderr, "Error: pipes are not supported in this OS\n");
    rc = 1;
    p->out = stdout;
#else
    p->out = popen(zFile + 1, "w");
    if( p->out==0 ){
      utf8_printf(stderr, "Error: cannot open pipe \"%s\"\n", zFile + 1);
      p->out = stdout;
      rc = 1;
    }else{
      if( zBOM[0] ) fwrite(zBOM, 1, 3, p->out);
      sqlite3_snprintf(sizeof(p->outfile), p->outfile, "%s", zFile);
    }
#endif
  }else{
    p->out = output_file_open(zFile, bTxtMode);
    if( p->out==0 ){
      if( cli_strcmp(zFile,"off")!=0 ){
        utf8_printf(stderr, "Error: cannot write to \"%s\"\n", zFile);
      }
      p->out = stdout;
      rc = 1;
    }else{
      if( zBOM[0] ) fwrite(zBOM, 1, 3, p->out);
      sqlite3_snprintf(sizeof(p->outfile), p->outfile, "%s", zFile);
    }
  }
  sqlite3_free(zFile);
}else
#endif /* !defined(SQLITE_SHELL_FIDDLE) */
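The `--bom` handling above writes the three-byte UTF-8 byte-order mark (0xEF 0xBB 0xBF) to the output file before any query results, which is what lets spreadsheet applications detect the encoding when `.excel` is used. A minimal Python sketch of the same arrangement (file layout and helper name are illustrative, not shell API):

```python
import csv
import os
import sqlite3
import tempfile

def write_csv_with_bom(con, sql, path):
    # Write a UTF-8 BOM first, then the query results as CSV,
    # mirroring what ".output --bom" / ".excel" set up in the shell.
    with open(path, "w", newline="", encoding="utf-8") as f:
        f.write("\ufeff")  # encodes to the 0xEF 0xBB 0xBF byte-order mark
        w = csv.writer(f)
        cur = con.execute(sql)
        w.writerow([d[0] for d in cur.description])
        w.writerows(cur)

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x)")
con.execute("INSERT INTO t VALUES('héllo')")
path = os.path.join(tempfile.mkdtemp(), "out.csv")
write_csv_with_bom(con, "SELECT x FROM t", path)
with open(path, "rb") as f:
    data = f.read()
print(data[:3])  # the BOM bytes b'\xef\xbb\xbf'
```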
︙

    sqlite3_finalize(pStmt);
    pStmt = 0;
    if( len ){
      rx = sqlite3_prepare_v2(p->db,
             "SELECT key, quote(value) "
             "FROM temp.sqlite_parameters;", -1, &pStmt, 0);
      while( rx==SQLITE_OK && sqlite3_step(pStmt)==SQLITE_ROW ){
        utf8_printf(p->out, "%-*s %s\n", len, sqlite3_column_text(pStmt,0),
                    sqlite3_column_text(pStmt,1));
      }
      sqlite3_finalize(pStmt);
    }
  }else

  /* .parameter init
  ** Make sure the TEMP table used to hold bind parameters exists.
︙

      zSql = sqlite3_mprintf(
                "REPLACE INTO temp.sqlite_parameters(key,value)"
                "VALUES(%Q,%Q);", zKey, zValue);
      shell_check_oom(zSql);
      rx = sqlite3_prepare_v2(p->db, zSql, -1, &pStmt, 0);
      sqlite3_free(zSql);
      if( rx!=SQLITE_OK ){
        utf8_printf(p->out, "Error: %s\n", sqlite3_errmsg(p->db));
        sqlite3_finalize(pStmt);
        pStmt = 0;
        rc = 1;
      }
    }
    sqlite3_step(pStmt);
    sqlite3_finalize(pStmt);
︙

  parameter_syntax_error:
    showHelp(p->out, "parameter");
  }else

  if( c=='p' && n>=3 && cli_strncmp(azArg[0], "print", n)==0 ){
    int i;
    for(i=1; i<nArg; i++){
      if( i>1 ) raw_printf(p->out, " ");
      utf8_printf(p->out, "%s", azArg[i]);
    }
    raw_printf(p->out, "\n");
  }else

#ifndef SQLITE_OMIT_PROGRESS_CALLBACK
  if( c=='p' && n>=3 && cli_strncmp(azArg[0], "progress", n)==0 ){
    int i;
    int nn = 0;
    p->flgProgress = 0;
︙

        }
        if( cli_strcmp(z,"once")==0 ){
          p->flgProgress |= SHELL_PROGRESS_ONCE;
          continue;
        }
        if( cli_strcmp(z,"limit")==0 ){
          if( i+1>=nArg ){
            utf8_printf(stderr, "Error: missing argument on --limit\n");
            rc = 1;
            goto meta_command_exit;
          }else{
            p->mxProgress = (int)integerValue(azArg[++i]);
          }
          continue;
        }
        utf8_printf(stderr, "Error: unknown option: \"%s\"\n", azArg[i]);
        rc = 1;
        goto meta_command_exit;
      }else{
        nn = (int)integerValue(z);
      }
    }
    open_db(p, 0);
︙

#ifndef SQLITE_SHELL_FIDDLE
  if( c=='r' && n>=3 && cli_strncmp(azArg[0], "read", n)==0 ){
    FILE *inSaved = p->in;
    int savedLineno = p->lineno;
    failIfSafeMode(p, "cannot run .read in safe mode");
    if( nArg!=2 ){
      raw_printf(stderr, "Usage: .read FILE\n");
      rc = 1;
      goto meta_command_exit;
    }
    if( azArg[1][0]=='|' ){
#ifdef SQLITE_OMIT_POPEN
      raw_printf(stderr, "Error: pipes are not supported in this OS\n");
      rc = 1;
      p->out = stdout;
#else
      p->in = popen(azArg[1]+1, "r");
      if( p->in==0 ){
        utf8_printf(stderr, "Error: cannot open \"%s\"\n", azArg[1]);
        rc = 1;
      }else{
        rc = process_input(p);
        pclose(p->in);
      }
#endif
    }else if( (p->in = openChrSource(azArg[1]))==0 ){
      utf8_printf(stderr, "Error: cannot open \"%s\"\n", azArg[1]);
      rc = 1;
    }else{
      rc = process_input(p);
      fclose(p->in);
    }
    p->in = inSaved;
    p->lineno = savedLineno;
︙

    if( nArg==2 ){
      zSrcFile = azArg[1];
      zDb = "main";
    }else if( nArg==3 ){
      zSrcFile = azArg[2];
      zDb = azArg[1];
    }else{
      raw_printf(stderr, "Usage: .restore ?DB? FILE\n");
      rc = 1;
      goto meta_command_exit;
    }
    rc = sqlite3_open(zSrcFile, &pSrc);
    if( rc!=SQLITE_OK ){
      utf8_printf(stderr, "Error: cannot open \"%s\"\n", zSrcFile);
      close_db(pSrc);
      return 1;
    }
    open_db(p, 0);
    pBackup = sqlite3_backup_init(p->db, zDb, pSrc, "main");
    if( pBackup==0 ){
      utf8_printf(stderr, "Error: %s\n", sqlite3_errmsg(p->db));
      close_db(pSrc);
      return 1;
    }
    while( (rc = sqlite3_backup_step(pBackup,100))==SQLITE_OK
        || rc==SQLITE_BUSY ){
      if( rc==SQLITE_BUSY ){
        if( nTimeout++ >= 3 ) break;
        sqlite3_sleep(100);
      }
    }
    sqlite3_backup_finish(pBackup);
    if( rc==SQLITE_DONE ){
      rc = 0;
    }else if( rc==SQLITE_BUSY || rc==SQLITE_LOCKED ){
      raw_printf(stderr, "Error: source database is busy\n");
      rc = 1;
    }else{
      utf8_printf(stderr, "Error: %s\n", sqlite3_errmsg(p->db));
      rc = 1;
    }
    close_db(pSrc);
  }else
#endif /* !defined(SQLITE_SHELL_FIDDLE) */

  if( c=='s' && cli_strncmp(azArg[0], "scanstats", n)==0 ){
    if( nArg==2 ){
      if( cli_strcmp(azArg[1], "vm")==0 ){
        p->scanstatsOn = 3;
      }else if( cli_strcmp(azArg[1], "est")==0 ){
        p->scanstatsOn = 2;
      }else{
        p->scanstatsOn = (u8)booleanValue(azArg[1]);
      }
      open_db(p, 0);
      sqlite3_db_config(
          p->db, SQLITE_DBCONFIG_STMT_SCANSTATUS, p->scanstatsOn, (int*)0
      );
#ifndef SQLITE_ENABLE_STMT_SCANSTATUS
      raw_printf(stderr, "Warning: .scanstats not available in this build.\n");
#endif
    }else{
      raw_printf(stderr, "Usage: .scanstats on|off|est\n");
      rc = 1;
    }
  }else

  if( c=='s' && cli_strncmp(azArg[0], "schema", n)==0 ){
    ShellText sSelect;
    ShellState data;
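The `.restore` hunk above drives the SQLite online backup API in chunks of 100 pages (`sqlite3_backup_step(pBackup,100)`), retrying a few times on SQLITE_BUSY. Python's standard `sqlite3` module exposes the same backup API as `Connection.backup()`, so the restore direction can be sketched without any C scaffolding (the `src`/`dst` names are illustrative):

```python
import sqlite3

# Build a "source" database, playing the role of the FILE argument.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t(x)")
src.executemany("INSERT INTO t VALUES(?)", [(1,), (2,), (3,)])
src.commit()

# ".restore" copies the source over the shell's current database;
# here dst stands in for that current database.
dst = sqlite3.connect(":memory:")
# pages=100 mirrors the shell's sqlite3_backup_step(pBackup, 100) loop.
src.backup(dst, pages=100)

print(dst.execute("SELECT count(*) FROM t").fetchone()[0])  # 3
```

The busy-retry loop in the shell has no direct analog here; `Connection.backup()` handles stepping internally.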
︙

      if( optionMatch(azArg[ii],"indent") ){
        data.cMode = data.mode = MODE_Pretty;
      }else if( optionMatch(azArg[ii],"debug") ){
        bDebug = 1;
      }else if( optionMatch(azArg[ii],"nosys") ){
        bNoSystemTabs = 1;
      }else if( azArg[ii][0]=='-' ){
        utf8_printf(stderr, "Unknown option: \"%s\"\n", azArg[ii]);
        rc = 1;
        goto meta_command_exit;
      }else if( zName==0 ){
        zName = azArg[ii];
      }else{
        raw_printf(stderr,
                   "Usage: .schema ?--indent? ?--nosys? ?LIKE-PATTERN?\n");
        rc = 1;
        goto meta_command_exit;
      }
    }
    if( zName!=0 ){
      int isSchema = sqlite3_strlike(zName, "sqlite_master", '\\')==0
                  || sqlite3_strlike(zName, "sqlite_schema", '\\')==0
︙

        }
      }
      if( zDiv ){
        sqlite3_stmt *pStmt = 0;
        rc = sqlite3_prepare_v2(p->db, "SELECT name FROM pragma_database_list",
                                -1, &pStmt, 0);
        if( rc ){
          utf8_printf(stderr, "Error: %s\n", sqlite3_errmsg(p->db));
          sqlite3_finalize(pStmt);
          rc = 1;
          goto meta_command_exit;
        }
        appendText(&sSelect, "SELECT sql FROM", 0);
        iSchema = 0;
        while( sqlite3_step(pStmt)==SQLITE_ROW ){
︙

        }
        if( bNoSystemTabs ){
          appendText(&sSelect, "name NOT LIKE 'sqlite_%%' AND ", 0);
        }
        appendText(&sSelect, "sql IS NOT NULL"
                             " ORDER BY snum, rowid", 0);
        if( bDebug ){
          utf8_printf(p->out, "SQL: %s;\n", sSelect.z);
        }else{
          rc = sqlite3_exec(p->db, sSelect.z, callback, &data, &zErrMsg);
        }
        freeText(&sSelect);
      }
      if( zErrMsg ){
        utf8_printf(stderr, "Error: %s\n", zErrMsg);
        sqlite3_free(zErrMsg);
        rc = 1;
      }else if( rc != SQLITE_OK ){
        raw_printf(stderr, "Error: querying schema information\n");
        rc = 1;
      }else{
        rc = 0;
      }
    }else

    if( (c=='s' && n==11 && cli_strncmp(azArg[0], "selecttrace", n)==0)
︙

    ** Invoke the sqlite3session_attach() interface to attach a particular
    ** table so that it is never filtered.
    */
    if( cli_strcmp(azCmd[0],"attach")==0 ){
      if( nCmd!=2 ) goto session_syntax_error;
      if( pSession->p==0 ){
      session_not_open:
        raw_printf(stderr, "ERROR: No sessions are open\n");
      }else{
        rc = sqlite3session_attach(pSession->p, azCmd[1]);
        if( rc ){
          raw_printf(stderr, "ERROR: sqlite3session_attach() returns %d\n",
                     rc);
          rc = 0;
        }
      }
    }else

    /* .session changeset FILE
    ** .session patchset FILE
    ** Write a changeset or patchset into a file.  The file is overwritten.
    */
    if( cli_strcmp(azCmd[0],"changeset")==0
     || cli_strcmp(azCmd[0],"patchset")==0
    ){
      FILE *out = 0;
      failIfSafeMode(p, "cannot run \".session %s\" in safe mode", azCmd[0]);
      if( nCmd!=2 ) goto session_syntax_error;
      if( pSession->p==0 ) goto session_not_open;
      out = fopen(azCmd[1], "wb");
      if( out==0 ){
        utf8_printf(stderr, "ERROR: cannot open \"%s\" for writing\n",
                    azCmd[1]);
      }else{
        int szChng;
        void *pChng;
        if( azCmd[0][0]=='c' ){
          rc = sqlite3session_changeset(pSession->p, &szChng, &pChng);
        }else{
          rc = sqlite3session_patchset(pSession->p, &szChng, &pChng);
        }
        if( rc ){
          printf("Error: error code %d\n", rc);
          rc = 0;
        }
        if( pChng && fwrite(pChng, szChng, 1, out)!=1 ){
          raw_printf(stderr, "ERROR: Failed to write entire %d-byte output\n",
                     szChng);
        }
        sqlite3_free(pChng);
        fclose(out);
      }
    }else

    /* .session close
︙

    */
    if( cli_strcmp(azCmd[0], "enable")==0 ){
      int ii;
      if( nCmd>2 ) goto session_syntax_error;
      ii = nCmd==1 ? -1 : booleanValue(azCmd[1]);
      if( pAuxDb->nSession ){
        ii = sqlite3session_enable(pSession->p, ii);
        utf8_printf(p->out, "session %s enable flag = %d\n",
                    pSession->zName, ii);
      }
    }else

    /* .session filter GLOB ....
    ** Set a list of GLOB patterns of table names to be excluded.
    */
    if( cli_strcmp(azCmd[0], "filter")==0 ){
      int ii, nByte;
      if( nCmd<2 ) goto session_syntax_error;
      if( pAuxDb->nSession ){
        for(ii=0; ii<pSession->nFilter; ii++){
          sqlite3_free(pSession->azFilter[ii]);
        }
        sqlite3_free(pSession->azFilter);
        nByte = sizeof(pSession->azFilter[0])*(nCmd-1);
        pSession->azFilter = sqlite3_malloc( nByte );
        if( pSession->azFilter==0 ){
          raw_printf(stderr, "Error: out of memory\n");
          exit(1);
        }
        for(ii=1; ii<nCmd; ii++){
          char *x = pSession->azFilter[ii-1] = sqlite3_mprintf("%s", azCmd[ii]);
          shell_check_oom(x);
        }
        pSession->nFilter = ii-1;
      }
    }else

    /* .session indirect ?BOOLEAN?
    ** Query or set the indirect flag
    */
    if( cli_strcmp(azCmd[0], "indirect")==0 ){
      int ii;
      if( nCmd>2 ) goto session_syntax_error;
      ii = nCmd==1 ? -1 : booleanValue(azCmd[1]);
      if( pAuxDb->nSession ){
        ii = sqlite3session_indirect(pSession->p, ii);
        utf8_printf(p->out, "session %s indirect flag = %d\n",
                    pSession->zName, ii);
      }
    }else

    /* .session isempty
    ** Determine if the session is empty
    */
    if( cli_strcmp(azCmd[0], "isempty")==0 ){
      int ii;
      if( nCmd!=1 ) goto session_syntax_error;
      if( pAuxDb->nSession ){
        ii = sqlite3session_isempty(pSession->p);
        utf8_printf(p->out, "session %s isempty flag = %d\n",
                    pSession->zName, ii);
      }
    }else

    /* .session list
    ** List all currently open sessions
    */
    if( cli_strcmp(azCmd[0],"list")==0 ){
      for(i=0; i<pAuxDb->nSession; i++){
        utf8_printf(p->out, "%d %s\n", i, pAuxDb->aSession[i].zName);
      }
    }else

    /* .session open DB NAME
    ** Open a new session called NAME on the attached database DB.
    ** DB is normally "main".
    */
    if( cli_strcmp(azCmd[0],"open")==0 ){
      char *zName;
      if( nCmd!=3 ) goto session_syntax_error;
      zName = azCmd[2];
      if( zName[0]==0 ) goto session_syntax_error;
      for(i=0; i<pAuxDb->nSession; i++){
        if( cli_strcmp(pAuxDb->aSession[i].zName,zName)==0 ){
          utf8_printf(stderr, "Session \"%s\" already exists\n", zName);
          goto meta_command_exit;
        }
      }
      if( pAuxDb->nSession>=ArraySize(pAuxDb->aSession) ){
        raw_printf(stderr, "Maximum of %d sessions\n",
                   ArraySize(pAuxDb->aSession));
        goto meta_command_exit;
      }
      pSession = &pAuxDb->aSession[pAuxDb->nSession];
      rc = sqlite3session_create(p->db, azCmd[1], &pSession->p);
      if( rc ){
        raw_printf(stderr, "Cannot open session: error code=%d\n", rc);
        rc = 0;
        goto meta_command_exit;
      }
      pSession->nFilter = 0;
      sqlite3session_table_filter(pSession->p, session_filter, pSession);
      pAuxDb->nSession++;
      pSession->zName = sqlite3_mprintf("%s", zName);
︙

  /* Undocumented commands for internal testing.  Subject to change
  ** without notice. */
  if( c=='s' && n>=10 && cli_strncmp(azArg[0], "selftest-", 9)==0 ){
    if( cli_strncmp(azArg[0]+9, "boolean", n-9)==0 ){
      int i, v;
      for(i=1; i<nArg; i++){
        v = booleanValue(azArg[i]);
        utf8_printf(p->out, "%s: %d 0x%x\n", azArg[i], v, v);
      }
    }
    if( cli_strncmp(azArg[0]+9, "integer", n-9)==0 ){
      int i;
      sqlite3_int64 v;
      for(i=1; i<nArg; i++){
        char zBuf[200];
        v = integerValue(azArg[i]);
        sqlite3_snprintf(sizeof(zBuf),zBuf,"%s: %lld 0x%llx\n", azArg[i],v,v);
        utf8_printf(p->out, "%s", zBuf);
      }
    }
  }else
#endif

  if( c=='s' && n>=4 && cli_strncmp(azArg[0],"selftest",n)==0 ){
    int bIsInit = 0;         /* True to initialize the SELFTEST table */
︙

      if( cli_strcmp(z,"-init")==0 ){
        bIsInit = 1;
      }else if( cli_strcmp(z,"-v")==0 ){
        bVerbose++;
      }else{
        utf8_printf(stderr, "Unknown option \"%s\" on \"%s\"\n",
                    azArg[i], azArg[0]);
        raw_printf(stderr, "Should be one of: --init -v\n");
        rc = 1;
        goto meta_command_exit;
      }
    }
    if( sqlite3_table_column_metadata(p->db,"main","selftest",0,0,0,0,0,0)
           != SQLITE_OK ){
      bSelftestExists = 0;
︙

      }else{
        rc = sqlite3_prepare_v2(p->db,
          "VALUES(0,'memo','Missing SELFTEST table - default checks only',''),"
          "      (1,'run','PRAGMA integrity_check','ok')",
          -1, &pStmt, 0);
      }
      if( rc ){
        raw_printf(stderr, "Error querying the selftest table\n");
        rc = 1;
        sqlite3_finalize(pStmt);
        goto meta_command_exit;
      }
      for(i=1; sqlite3_step(pStmt)==SQLITE_ROW; i++){
        int tno = sqlite3_column_int(pStmt, 0);
        const char *zOp = (const char*)sqlite3_column_text(pStmt, 1);
        const char *zSql = (const char*)sqlite3_column_text(pStmt, 2);
        const char *zAns = (const char*)sqlite3_column_text(pStmt, 3);

        if( zOp==0 ) continue;
        if( zSql==0 ) continue;
        if( zAns==0 ) continue;
        k = 0;
        if( bVerbose>0 ){
          printf("%d: %s %s\n", tno, zOp, zSql);
        }
        if( cli_strcmp(zOp,"memo")==0 ){
          utf8_printf(p->out, "%s\n", zSql);
        }else if( cli_strcmp(zOp,"run")==0 ){
          char *zErrMsg = 0;
          str.n = 0;
          str.z[0] = 0;
          rc = sqlite3_exec(p->db, zSql, captureOutputCallback, &str, &zErrMsg);
          nTest++;
          if( bVerbose ){
            utf8_printf(p->out, "Result: %s\n", str.z);
          }
          if( rc || zErrMsg ){
            nErr++;
            rc = 1;
            utf8_printf(p->out, "%d: error-code-%d: %s\n", tno, rc, zErrMsg);
            sqlite3_free(zErrMsg);
          }else if( cli_strcmp(zAns,str.z)!=0 ){
            nErr++;
            rc = 1;
            utf8_printf(p->out, "%d: Expected: [%s]\n", tno, zAns);
            utf8_printf(p->out, "%d:      Got: [%s]\n", tno, str.z);
          }
        }else{
          utf8_printf(stderr,
            "Unknown operation \"%s\" on selftest line %d\n", zOp, tno);
          rc = 1;
          break;
        }
      } /* End loop over rows of content from SELFTEST */
      sqlite3_finalize(pStmt);
    } /* End loop over k */
    freeText(&str);
    utf8_printf(p->out, "%d errors out of %d tests\n", nErr, nTest);
  }else

  if( c=='s' && cli_strncmp(azArg[0], "separator", n)==0 ){
    if( nArg<2 || nArg>3 ){
      raw_printf(stderr, "Usage: .separator COL ?ROW?\n");
      rc = 1;
    }
    if( nArg>=2 ){
      sqlite3_snprintf(sizeof(p->colSeparator), p->colSeparator,
                       "%.*s", (int)ArraySize(p->colSeparator)-1, azArg[1]);
    }
    if( nArg>=3 ){
︙
28496 28497 28498 28499 28500 28501 28502 | ){ iSize = atoi(&z[5]); }else if( cli_strcmp(z,"debug")==0 ){ bDebug = 1; }else { | | > | | 26500 26501 26502 26503 26504 26505 26506 26507 26508 26509 26510 26511 26512 26513 26514 26515 26516 26517 26518 26519 26520 26521 | ){ iSize = atoi(&z[5]); }else if( cli_strcmp(z,"debug")==0 ){ bDebug = 1; }else { utf8_printf(stderr, "Unknown option \"%s\" on \"%s\"\n", azArg[i], azArg[0]); showHelp(p->out, azArg[0]); rc = 1; goto meta_command_exit; } }else if( zLike ){ raw_printf(stderr, "Usage: .sha3sum ?OPTIONS? ?LIKE-PATTERN?\n"); rc = 1; goto meta_command_exit; }else{ zLike = z; bSeparate = 1; if( sqlite3_strlike("sqlite\\_%", zLike, '\\')==0 ) bSchema = 1; } |
︙
28574 28575 28576 28577 28578 28579 28580 | " FROM [sha3sum$query]", sSql.z, iSize); } shell_check_oom(zSql); freeText(&sQuery); freeText(&sSql); if( bDebug ){ | | | 26579 26580 26581 26582 26583 26584 26585 26586 26587 26588 26589 26590 26591 26592 26593 | " FROM [sha3sum$query]", sSql.z, iSize); } shell_check_oom(zSql); freeText(&sQuery); freeText(&sSql); if( bDebug ){ utf8_printf(p->out, "%s\n", zSql); }else{ shell_exec(p, zSql, 0); } #if !defined(SQLITE_OMIT_SCHEMA_PRAGMAS) && !defined(SQLITE_OMIT_VIRTUALTABLE) { int lrc; char *zRevText = /* Query for reversible to-blob-to-text check */ |
︙
28604 28605 28606 28607 28608 28609 28610 | " from (select 'SELECT COUNT(*) AS bad_text_count\n" "FROM '||tname||' WHERE '\n" "||group_concat('CAST(CAST('||cname||' AS BLOB) AS TEXT)<>'||cname\n" "|| ' AND typeof('||cname||')=''text'' ',\n" "' OR ') as query, tname from tabcols group by tname)" , zRevText); shell_check_oom(zRevText); | | | > | | | | < < | | | | | | | | > | | | | | | | | | | | | | | | | | | | | | | | | 26609 26610 26611 26612 26613 26614 26615 26616 26617 26618 26619 26620 26621 26622 26623 26624 26625 26626 26627 26628 26629 26630 26631 26632 26633 26634 26635 26636 26637 26638 26639 26640 26641 26642 26643 26644 26645 26646 26647 26648 26649 26650 26651 26652 26653 26654 26655 26656 26657 26658 26659 26660 26661 26662 26663 26664 26665 26666 26667 26668 26669 26670 26671 26672 26673 26674 26675 26676 26677 26678 26679 26680 26681 26682 26683 26684 26685 26686 26687 26688 26689 26690 26691 26692 26693 26694 26695 26696 26697 26698 26699 26700 26701 26702 26703 26704 26705 26706 26707 26708 26709 26710 26711 26712 26713 26714 26715 26716 26717 26718 26719 26720 26721 26722 26723 26724 26725 26726 26727 26728 26729 26730 26731 26732 26733 26734 26735 26736 26737 26738 26739 26740 26741 26742 26743 26744 26745 26746 26747 26748 26749 26750 | " from (select 'SELECT COUNT(*) AS bad_text_count\n" "FROM '||tname||' WHERE '\n" "||group_concat('CAST(CAST('||cname||' AS BLOB) AS TEXT)<>'||cname\n" "|| ' AND typeof('||cname||')=''text'' ',\n" "' OR ') as query, tname from tabcols group by tname)" , zRevText); shell_check_oom(zRevText); if( bDebug ) utf8_printf(p->out, "%s\n", zRevText); lrc = sqlite3_prepare_v2(p->db, zRevText, -1, &pStmt, 0); if( lrc!=SQLITE_OK ){ /* assert(lrc==SQLITE_NOMEM); // might also be SQLITE_ERROR if the ** user does cruel and unnatural things like ".limit expr_depth 0". 
*/ rc = 1; }else{ if( zLike ) sqlite3_bind_text(pStmt,1,zLike,-1,SQLITE_STATIC); lrc = SQLITE_ROW==sqlite3_step(pStmt); if( lrc ){ const char *zGenQuery = (char*)sqlite3_column_text(pStmt,0); sqlite3_stmt *pCheckStmt; lrc = sqlite3_prepare_v2(p->db, zGenQuery, -1, &pCheckStmt, 0); if( bDebug ) utf8_printf(p->out, "%s\n", zGenQuery); if( lrc!=SQLITE_OK ){ rc = 1; }else{ if( SQLITE_ROW==sqlite3_step(pCheckStmt) ){ double countIrreversible = sqlite3_column_double(pCheckStmt, 0); if( countIrreversible>0 ){ int sz = (int)(countIrreversible + 0.5); utf8_printf(stderr, "Digest includes %d invalidly encoded text field%s.\n", sz, (sz>1)? "s": ""); } } sqlite3_finalize(pCheckStmt); } sqlite3_finalize(pStmt); } } if( rc ) utf8_printf(stderr, ".sha3sum failed.\n"); sqlite3_free(zRevText); } #endif /* !defined(*_OMIT_SCHEMA_PRAGMAS) && !defined(*_OMIT_VIRTUALTABLE) */ sqlite3_free(zSql); }else #if !defined(SQLITE_NOHAVE_SYSTEM) && !defined(SQLITE_SHELL_FIDDLE) if( c=='s' && (cli_strncmp(azArg[0], "shell", n)==0 || cli_strncmp(azArg[0],"system",n)==0) ){ char *zCmd; int i, x; failIfSafeMode(p, "cannot run .%s in safe mode", azArg[0]); if( nArg<2 ){ raw_printf(stderr, "Usage: .system COMMAND\n"); rc = 1; goto meta_command_exit; } zCmd = sqlite3_mprintf(strchr(azArg[1],' ')==0?"%s":"\"%s\"", azArg[1]); for(i=2; i<nArg && zCmd!=0; i++){ zCmd = sqlite3_mprintf(strchr(azArg[i],' ')==0?"%z %s":"%z \"%s\"", zCmd, azArg[i]); } x = zCmd!=0 ? 
system(zCmd) : 1; sqlite3_free(zCmd); if( x ) raw_printf(stderr, "System command returns %d\n", x); }else #endif /* !defined(SQLITE_NOHAVE_SYSTEM) && !defined(SQLITE_SHELL_FIDDLE) */ if( c=='s' && cli_strncmp(azArg[0], "show", n)==0 ){ static const char *azBool[] = { "off", "on", "trigger", "full"}; const char *zOut; int i; if( nArg!=1 ){ raw_printf(stderr, "Usage: .show\n"); rc = 1; goto meta_command_exit; } utf8_printf(p->out, "%12.12s: %s\n","echo", azBool[ShellHasFlag(p, SHFLG_Echo)]); utf8_printf(p->out, "%12.12s: %s\n","eqp", azBool[p->autoEQP&3]); utf8_printf(p->out, "%12.12s: %s\n","explain", p->mode==MODE_Explain ? "on" : p->autoExplain ? "auto" : "off"); utf8_printf(p->out,"%12.12s: %s\n","headers", azBool[p->showHeader!=0]); if( p->mode==MODE_Column || (p->mode>=MODE_Markdown && p->mode<=MODE_Box) ){ utf8_printf (p->out, "%12.12s: %s --wrap %d --wordwrap %s --%squote\n", "mode", modeDescr[p->mode], p->cmOpts.iWrap, p->cmOpts.bWordWrap ? "on" : "off", p->cmOpts.bQuote ? "" : "no"); }else{ utf8_printf(p->out, "%12.12s: %s\n","mode", modeDescr[p->mode]); } utf8_printf(p->out, "%12.12s: ", "nullvalue"); output_c_string(p->out, p->nullValue); raw_printf(p->out, "\n"); utf8_printf(p->out,"%12.12s: %s\n","output", strlen30(p->outfile) ? p->outfile : "stdout"); utf8_printf(p->out,"%12.12s: ", "colseparator"); output_c_string(p->out, p->colSeparator); raw_printf(p->out, "\n"); utf8_printf(p->out,"%12.12s: ", "rowseparator"); output_c_string(p->out, p->rowSeparator); raw_printf(p->out, "\n"); switch( p->statsOn ){ case 0: zOut = "off"; break; default: zOut = "on"; break; case 2: zOut = "stmt"; break; case 3: zOut = "vmstep"; break; } utf8_printf(p->out, "%12.12s: %s\n","stats", zOut); utf8_printf(p->out, "%12.12s: ", "width"); for (i=0;i<p->nWidth;i++) { raw_printf(p->out, "%d ", p->colWidth[i]); } raw_printf(p->out, "\n"); utf8_printf(p->out, "%12.12s: %s\n", "filename", p->pAuxDb->zDbFilename ? 
p->pAuxDb->zDbFilename : ""); }else if( c=='s' && cli_strncmp(azArg[0], "stats", n)==0 ){ if( nArg==2 ){ if( cli_strcmp(azArg[1],"stmt")==0 ){ p->statsOn = 2; }else if( cli_strcmp(azArg[1],"vmstep")==0 ){ p->statsOn = 3; }else{ p->statsOn = (u8)booleanValue(azArg[1]); } }else if( nArg==1 ){ display_stats(p->db, p, 0); }else{ raw_printf(stderr, "Usage: .stats ?on|off|stmt|vmstep?\n"); rc = 1; } }else if( (c=='t' && n>1 && cli_strncmp(azArg[0], "tables", n)==0) || (c=='i' && (cli_strncmp(azArg[0], "indices", n)==0 || cli_strncmp(azArg[0], "indexes", n)==0) ) |
︙
28757 28758 28759 28760 28761 28762 28763 | return shellDatabaseError(p->db); } if( nArg>2 && c=='i' ){ /* It is an historical accident that the .indexes command shows an error ** when called with the wrong number of arguments whereas the .tables ** command does not. */ | | | 26762 26763 26764 26765 26766 26767 26768 26769 26770 26771 26772 26773 26774 26775 26776 | return shellDatabaseError(p->db); } if( nArg>2 && c=='i' ){ /* It is an historical accident that the .indexes command shows an error ** when called with the wrong number of arguments whereas the .tables ** command does not. */ raw_printf(stderr, "Usage: .indexes ?LIKE-PATTERN?\n"); rc = 1; sqlite3_finalize(pStmt); goto meta_command_exit; } for(ii=0; sqlite3_step(pStmt)==SQLITE_ROW; ii++){ const char *zDbName = (const char*)sqlite3_column_text(pStmt, 1); if( zDbName==0 ) continue; |
︙
28833 28834 28835 28836 28837 28838 28839 | } nPrintCol = 80/(maxlen+2); if( nPrintCol<1 ) nPrintCol = 1; nPrintRow = (nRow + nPrintCol - 1)/nPrintCol; for(i=0; i<nPrintRow; i++){ for(j=i; j<nRow; j+=nPrintRow){ char *zSp = j<nPrintRow ? "" : " "; | > | | | | 26838 26839 26840 26841 26842 26843 26844 26845 26846 26847 26848 26849 26850 26851 26852 26853 26854 26855 26856 26857 26858 26859 26860 26861 26862 26863 26864 26865 26866 26867 26868 26869 | } nPrintCol = 80/(maxlen+2); if( nPrintCol<1 ) nPrintCol = 1; nPrintRow = (nRow + nPrintCol - 1)/nPrintCol; for(i=0; i<nPrintRow; i++){ for(j=i; j<nRow; j+=nPrintRow){ char *zSp = j<nPrintRow ? "" : " "; utf8_printf(p->out, "%s%-*s", zSp, maxlen, azResult[j] ? azResult[j]:""); } raw_printf(p->out, "\n"); } } for(ii=0; ii<nRow; ii++) sqlite3_free(azResult[ii]); sqlite3_free(azResult); }else #ifndef SQLITE_SHELL_FIDDLE /* Begin redirecting output to the file "testcase-out.txt" */ if( c=='t' && cli_strcmp(azArg[0],"testcase")==0 ){ output_reset(p); p->out = output_file_open("testcase-out.txt", 0); if( p->out==0 ){ raw_printf(stderr, "Error: cannot open 'testcase-out.txt'\n"); } if( nArg>=2 ){ sqlite3_snprintf(sizeof(p->zTestcase), p->zTestcase, "%s", azArg[1]); }else{ sqlite3_snprintf(sizeof(p->zTestcase), p->zTestcase, "?"); } }else |
︙
28873 28874 28875 28876 28877 28878 28879 | } aCtrl[] = { {"always", SQLITE_TESTCTRL_ALWAYS, 1, "BOOLEAN" }, {"assert", SQLITE_TESTCTRL_ASSERT, 1, "BOOLEAN" }, /*{"benign_malloc_hooks",SQLITE_TESTCTRL_BENIGN_MALLOC_HOOKS,1, "" },*/ /*{"bitvec_test", SQLITE_TESTCTRL_BITVEC_TEST, 1, "" },*/ {"byteorder", SQLITE_TESTCTRL_BYTEORDER, 0, "" }, {"extra_schema_checks",SQLITE_TESTCTRL_EXTRA_SCHEMA_CHECKS,0,"BOOLEAN" }, | | < | | | | | | | | | 26879 26880 26881 26882 26883 26884 26885 26886 26887 26888 26889 26890 26891 26892 26893 26894 26895 26896 26897 26898 26899 26900 26901 26902 26903 26904 26905 26906 26907 26908 26909 26910 26911 26912 26913 26914 26915 26916 26917 26918 26919 26920 26921 26922 26923 26924 26925 26926 26927 26928 26929 26930 26931 26932 26933 26934 26935 26936 26937 26938 26939 26940 26941 26942 26943 26944 26945 26946 26947 26948 26949 26950 26951 26952 26953 26954 26955 26956 26957 26958 26959 | } aCtrl[] = { {"always", SQLITE_TESTCTRL_ALWAYS, 1, "BOOLEAN" }, {"assert", SQLITE_TESTCTRL_ASSERT, 1, "BOOLEAN" }, /*{"benign_malloc_hooks",SQLITE_TESTCTRL_BENIGN_MALLOC_HOOKS,1, "" },*/ /*{"bitvec_test", SQLITE_TESTCTRL_BITVEC_TEST, 1, "" },*/ {"byteorder", SQLITE_TESTCTRL_BYTEORDER, 0, "" }, {"extra_schema_checks",SQLITE_TESTCTRL_EXTRA_SCHEMA_CHECKS,0,"BOOLEAN" }, /*{"fault_install", SQLITE_TESTCTRL_FAULT_INSTALL, 1,"" },*/ {"fk_no_action", SQLITE_TESTCTRL_FK_NO_ACTION, 0, "BOOLEAN" }, {"imposter", SQLITE_TESTCTRL_IMPOSTER,1,"SCHEMA ON/OFF ROOTPAGE"}, {"internal_functions", SQLITE_TESTCTRL_INTERNAL_FUNCTIONS,0,"" }, {"localtime_fault", SQLITE_TESTCTRL_LOCALTIME_FAULT,0,"BOOLEAN" }, {"never_corrupt", SQLITE_TESTCTRL_NEVER_CORRUPT,1, "BOOLEAN" }, {"optimizations", SQLITE_TESTCTRL_OPTIMIZATIONS,0,"DISABLE-MASK" }, #ifdef YYCOVERAGE {"parser_coverage", SQLITE_TESTCTRL_PARSER_COVERAGE,0,"" }, #endif {"pending_byte", SQLITE_TESTCTRL_PENDING_BYTE,0, "OFFSET " }, {"prng_restore", SQLITE_TESTCTRL_PRNG_RESTORE,0, "" }, {"prng_save", SQLITE_TESTCTRL_PRNG_SAVE, 0, 
"" }, {"prng_seed", SQLITE_TESTCTRL_PRNG_SEED, 0, "SEED ?db?" }, {"seek_count", SQLITE_TESTCTRL_SEEK_COUNT, 0, "" }, {"sorter_mmap", SQLITE_TESTCTRL_SORTER_MMAP, 0, "NMAX" }, {"tune", SQLITE_TESTCTRL_TUNE, 1, "ID VALUE" }, {"uselongdouble", SQLITE_TESTCTRL_USELONGDOUBLE,0,"?BOOLEAN|\"default\"?"}, }; int testctrl = -1; int iCtrl = -1; int rc2 = 0; /* 0: usage. 1: %d 2: %x 3: no-output */ int isOk = 0; int i, n2; const char *zCmd = 0; open_db(p, 0); zCmd = nArg>=2 ? azArg[1] : "help"; /* The argument can optionally begin with "-" or "--" */ if( zCmd[0]=='-' && zCmd[1] ){ zCmd++; if( zCmd[0]=='-' && zCmd[1] ) zCmd++; } /* --help lists all test-controls */ if( cli_strcmp(zCmd,"help")==0 ){ utf8_printf(p->out, "Available test-controls:\n"); for(i=0; i<ArraySize(aCtrl); i++){ if( aCtrl[i].unSafe && !ShellHasFlag(p,SHFLG_TestingMode) ) continue; utf8_printf(p->out, " .testctrl %s %s\n", aCtrl[i].zCtrlName, aCtrl[i].zUsage); } rc = 1; goto meta_command_exit; } /* convert testctrl text option to value. allow any unique prefix ** of the option name, or a numerical value. */ n2 = strlen30(zCmd); for(i=0; i<ArraySize(aCtrl); i++){ if( aCtrl[i].unSafe && !ShellHasFlag(p,SHFLG_TestingMode) ) continue; if( cli_strncmp(zCmd, aCtrl[i].zCtrlName, n2)==0 ){ if( testctrl<0 ){ testctrl = aCtrl[i].ctrlCode; iCtrl = i; }else{ utf8_printf(stderr, "Error: ambiguous test-control: \"%s\"\n" "Use \".testctrl --help\" for help\n", zCmd); rc = 1; goto meta_command_exit; } } } if( testctrl<0 ){ utf8_printf(stderr,"Error: unknown test-control: %s\n" "Use \".testctrl --help\" for help\n", zCmd); }else{ switch(testctrl){ /* sqlite3_test_control(int, db, int) */ case SQLITE_TESTCTRL_OPTIMIZATIONS: case SQLITE_TESTCTRL_FK_NO_ACTION: if( nArg==3 ){ |
︙
28980 28981 28982 28983 28984 28985 28986 | /* sqlite3_test_control(int, int, sqlite3*) */ case SQLITE_TESTCTRL_PRNG_SEED: if( nArg==3 || nArg==4 ){ int ii = (int)integerValue(azArg[2]); sqlite3 *db; if( ii==0 && cli_strcmp(azArg[2],"random")==0 ){ sqlite3_randomness(sizeof(ii),&ii); | | | 26985 26986 26987 26988 26989 26990 26991 26992 26993 26994 26995 26996 26997 26998 26999 | /* sqlite3_test_control(int, int, sqlite3*) */ case SQLITE_TESTCTRL_PRNG_SEED: if( nArg==3 || nArg==4 ){ int ii = (int)integerValue(azArg[2]); sqlite3 *db; if( ii==0 && cli_strcmp(azArg[2],"random")==0 ){ sqlite3_randomness(sizeof(ii),&ii); printf("-- random seed: %d\n", ii); } if( nArg==3 ){ db = 0; }else{ db = p->db; /* Make sure the schema has been loaded */ sqlite3_table_column_metadata(db, 0, "x", 0, 0, 0, 0, 0, 0); |
︙
29048 29049 29050 29051 29052 29053 29054 | isOk = 3; } break; case SQLITE_TESTCTRL_SEEK_COUNT: { u64 x = 0; rc2 = sqlite3_test_control(testctrl, p->db, &x); | | | 27053 27054 27055 27056 27057 27058 27059 27060 27061 27062 27063 27064 27065 27066 27067 | isOk = 3; } break; case SQLITE_TESTCTRL_SEEK_COUNT: { u64 x = 0; rc2 = sqlite3_test_control(testctrl, p->db, &x); utf8_printf(p->out, "%llu\n", x); isOk = 3; break; } #ifdef YYCOVERAGE case SQLITE_TESTCTRL_PARSER_COVERAGE: { if( nArg==2 ){ |
︙
29079 29080 29081 29082 29083 29084 29085 | isOk = 1; }else if( nArg==2 ){ int id = 1; while(1){ int val = 0; rc2 = sqlite3_test_control(testctrl, -id, &val); if( rc2!=SQLITE_OK ) break; | | | | < < < < < < < | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | < < < < < < < < < < < < < < < < < < < < | | | | | | 27084 27085 27086 27087 27088 27089 27090 27091 27092 27093 27094 27095 27096 27097 27098 27099 27100 27101 27102 27103 27104 27105 27106 27107 27108 27109 27110 27111 27112 27113 27114 27115 27116 27117 27118 27119 27120 27121 27122 27123 27124 27125 27126 27127 27128 27129 27130 27131 27132 27133 27134 27135 27136 27137 27138 27139 27140 27141 | isOk = 1; }else if( nArg==2 ){ int id = 1; while(1){ int val = 0; rc2 = sqlite3_test_control(testctrl, -id, &val); if( rc2!=SQLITE_OK ) break; if( id>1 ) utf8_printf(p->out, " "); utf8_printf(p->out, "%d: %d", id, val); id++; } if( id>1 ) utf8_printf(p->out, "\n"); isOk = 3; } break; } #endif case SQLITE_TESTCTRL_SORTER_MMAP: if( nArg==3 ){ int opt = (unsigned int)integerValue(azArg[2]); rc2 = sqlite3_test_control(testctrl, p->db, opt); isOk = 3; } break; } } if( isOk==0 && iCtrl>=0 ){ utf8_printf(p->out, "Usage: .testctrl %s %s\n", zCmd,aCtrl[iCtrl].zUsage); rc = 1; }else if( isOk==1 ){ raw_printf(p->out, "%d\n", rc2); }else if( isOk==2 ){ raw_printf(p->out, "0x%08x\n", rc2); } }else #endif /* !defined(SQLITE_UNTESTABLE) */ if( c=='t' && n>4 && cli_strncmp(azArg[0], "timeout", n)==0 ){ open_db(p, 0); sqlite3_busy_timeout(p->db, nArg>=2 ? (int)integerValue(azArg[1]) : 0); }else if( c=='t' && n>=5 && cli_strncmp(azArg[0], "timer", n)==0 ){ if( nArg==2 ){ enableTimer = booleanValue(azArg[1]); if( enableTimer && !HAS_TIMER ){ raw_printf(stderr, "Error: timer not available on this system.\n"); enableTimer = 0; } }else{ raw_printf(stderr, "Usage: .timer on|off\n"); rc = 1; } }else #ifndef SQLITE_OMIT_TRACE if( c=='t' && cli_strncmp(azArg[0], "trace", n)==0 ){ int mType = 0; |
︙
29227 29228 29229 29230 29231 29232 29233 | else if( optionMatch(z, "stmt") ){ mType |= SQLITE_TRACE_STMT; } else if( optionMatch(z, "close") ){ mType |= SQLITE_TRACE_CLOSE; } else { | | | 27164 27165 27166 27167 27168 27169 27170 27171 27172 27173 27174 27175 27176 27177 27178 | else if( optionMatch(z, "stmt") ){ mType |= SQLITE_TRACE_STMT; } else if( optionMatch(z, "close") ){ mType |= SQLITE_TRACE_CLOSE; } else { raw_printf(stderr, "Unknown option \"%s\" on \".trace\"\n", z); rc = 1; goto meta_command_exit; } }else{ output_file_close(p->traceOut); p->traceOut = output_file_open(z, 0); } |
︙
29251 29252 29253 29254 29255 29256 29257 | #if defined(SQLITE_DEBUG) && !defined(SQLITE_OMIT_VIRTUALTABLE) if( c=='u' && cli_strncmp(azArg[0], "unmodule", n)==0 ){ int ii; int lenOpt; char *zOpt; if( nArg<2 ){ | | | 27188 27189 27190 27191 27192 27193 27194 27195 27196 27197 27198 27199 27200 27201 27202 | #if defined(SQLITE_DEBUG) && !defined(SQLITE_OMIT_VIRTUALTABLE) if( c=='u' && cli_strncmp(azArg[0], "unmodule", n)==0 ){ int ii; int lenOpt; char *zOpt; if( nArg<2 ){ raw_printf(stderr, "Usage: .unmodule [--allexcept] NAME ...\n"); rc = 1; goto meta_command_exit; } open_db(p, 0); zOpt = azArg[1]; if( zOpt[0]=='-' && zOpt[1]=='-' && zOpt[2]!=0 ) zOpt++; lenOpt = (int)strlen(zOpt); |
︙
29273 29274 29275 29276 29277 29278 29279 | } }else #endif #if SQLITE_USER_AUTHENTICATION if( c=='u' && cli_strncmp(azArg[0], "user", n)==0 ){ if( nArg<2 ){ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 27210 27211 27212 27213 27214 27215 27216 27217 27218 27219 27220 27221 27222 27223 27224 27225 27226 27227 27228 27229 27230 27231 27232 27233 27234 27235 27236 27237 27238 27239 27240 27241 27242 27243 27244 27245 27246 27247 27248 27249 27250 27251 27252 27253 27254 27255 27256 27257 27258 27259 27260 27261 27262 27263 27264 27265 27266 27267 27268 27269 27270 27271 27272 27273 27274 27275 27276 27277 27278 27279 27280 27281 27282 27283 27284 27285 27286 27287 27288 27289 27290 27291 27292 27293 27294 27295 27296 27297 27298 27299 27300 27301 27302 27303 27304 27305 27306 27307 27308 27309 27310 27311 27312 27313 27314 27315 27316 27317 27318 27319 27320 27321 27322 27323 27324 27325 27326 27327 27328 27329 27330 27331 27332 27333 27334 27335 27336 27337 27338 27339 27340 27341 27342 | } }else #endif #if SQLITE_USER_AUTHENTICATION if( c=='u' && cli_strncmp(azArg[0], "user", n)==0 ){ if( nArg<2 ){ raw_printf(stderr, "Usage: .user SUBCOMMAND ...\n"); rc = 1; goto meta_command_exit; } open_db(p, 0); if( cli_strcmp(azArg[1],"login")==0 ){ if( nArg!=4 ){ raw_printf(stderr, "Usage: .user login USER PASSWORD\n"); rc = 1; goto meta_command_exit; } rc = sqlite3_user_authenticate(p->db, azArg[2], azArg[3], strlen30(azArg[3])); if( rc ){ utf8_printf(stderr, "Authentication failed for user %s\n", azArg[2]); rc = 1; } }else if( cli_strcmp(azArg[1],"add")==0 ){ if( nArg!=5 ){ raw_printf(stderr, "Usage: .user add USER PASSWORD ISADMIN\n"); rc = 1; goto meta_command_exit; } rc = sqlite3_user_add(p->db, azArg[2], azArg[3], strlen30(azArg[3]), booleanValue(azArg[4])); if( rc ){ raw_printf(stderr, "User-Add failed: %d\n", rc); rc = 1; } }else if( cli_strcmp(azArg[1],"edit")==0 ){ if( nArg!=5 ){ raw_printf(stderr, "Usage: .user edit USER PASSWORD ISADMIN\n"); rc = 1; 
goto meta_command_exit; } rc = sqlite3_user_change(p->db, azArg[2], azArg[3], strlen30(azArg[3]), booleanValue(azArg[4])); if( rc ){ raw_printf(stderr, "User-Edit failed: %d\n", rc); rc = 1; } }else if( cli_strcmp(azArg[1],"delete")==0 ){ if( nArg!=3 ){ raw_printf(stderr, "Usage: .user delete USER\n"); rc = 1; goto meta_command_exit; } rc = sqlite3_user_delete(p->db, azArg[2]); if( rc ){ raw_printf(stderr, "User-Delete failed: %d\n", rc); rc = 1; } }else{ raw_printf(stderr, "Usage: .user login|add|edit|delete ...\n"); rc = 1; goto meta_command_exit; } }else #endif /* SQLITE_USER_AUTHENTICATION */ if( c=='v' && cli_strncmp(azArg[0], "version", n)==0 ){ char *zPtrSz = sizeof(void*)==8 ? "64-bit" : "32-bit"; utf8_printf(p->out, "SQLite %s %s\n" /*extra-version-info*/, sqlite3_libversion(), sqlite3_sourceid()); #if SQLITE_HAVE_ZLIB utf8_printf(p->out, "zlib version %s\n", zlibVersion()); #endif #define CTIMEOPT_VAL_(opt) #opt #define CTIMEOPT_VAL(opt) CTIMEOPT_VAL_(opt) #if defined(__clang__) && defined(__clang_major__) utf8_printf(p->out, "clang-" CTIMEOPT_VAL(__clang_major__) "." CTIMEOPT_VAL(__clang_minor__) "." CTIMEOPT_VAL(__clang_patchlevel__) " (%s)\n", zPtrSz); #elif defined(_MSC_VER) utf8_printf(p->out, "msvc-" CTIMEOPT_VAL(_MSC_VER) " (%s)\n", zPtrSz); #elif defined(__GNUC__) && defined(__VERSION__) utf8_printf(p->out, "gcc-" __VERSION__ " (%s)\n", zPtrSz); #endif }else if( c=='v' && cli_strncmp(azArg[0], "vfsinfo", n)==0 ){ const char *zDbName = nArg==2 ? 
azArg[1] : "main"; sqlite3_vfs *pVfs = 0; if( p->db ){ sqlite3_file_control(p->db, zDbName, SQLITE_FCNTL_VFS_POINTER, &pVfs); if( pVfs ){ utf8_printf(p->out, "vfs.zName = \"%s\"\n", pVfs->zName); raw_printf(p->out, "vfs.iVersion = %d\n", pVfs->iVersion); raw_printf(p->out, "vfs.szOsFile = %d\n", pVfs->szOsFile); raw_printf(p->out, "vfs.mxPathname = %d\n", pVfs->mxPathname); } } }else if( c=='v' && cli_strncmp(azArg[0], "vfslist", n)==0 ){ sqlite3_vfs *pVfs; sqlite3_vfs *pCurrent = 0; if( p->db ){ sqlite3_file_control(p->db, "main", SQLITE_FCNTL_VFS_POINTER, &pCurrent); } for(pVfs=sqlite3_vfs_find(0); pVfs; pVfs=pVfs->pNext){ utf8_printf(p->out, "vfs.zName = \"%s\"%s\n", pVfs->zName, pVfs==pCurrent ? " <--- CURRENT" : ""); raw_printf(p->out, "vfs.iVersion = %d\n", pVfs->iVersion); raw_printf(p->out, "vfs.szOsFile = %d\n", pVfs->szOsFile); raw_printf(p->out, "vfs.mxPathname = %d\n", pVfs->mxPathname); if( pVfs->pNext ){ raw_printf(p->out, "-----------------------------------\n"); } } }else if( c=='v' && cli_strncmp(azArg[0], "vfsname", n)==0 ){ const char *zDbName = nArg==2 ? azArg[1] : "main"; char *zVfsName = 0; if( p->db ){ sqlite3_file_control(p->db, zDbName, SQLITE_FCNTL_VFSNAME, &zVfsName); if( zVfsName ){ utf8_printf(p->out, "%s\n", zVfsName); sqlite3_free(zVfsName); } } }else if( c=='w' && cli_strncmp(azArg[0], "wheretrace", n)==0 ){ unsigned int x = nArg>=2? (unsigned int)integerValue(azArg[1]) : 0xffffffff; |
︙
29415 29416 29417 29418 29419 29420 29421 | if( p->nWidth ) p->actualWidth = &p->colWidth[p->nWidth]; for(j=1; j<nArg; j++){ p->colWidth[j-1] = (int)integerValue(azArg[j]); } }else { | | | | 27352 27353 27354 27355 27356 27357 27358 27359 27360 27361 27362 27363 27364 27365 27366 27367 | if( p->nWidth ) p->actualWidth = &p->colWidth[p->nWidth]; for(j=1; j<nArg; j++){ p->colWidth[j-1] = (int)integerValue(azArg[j]); } }else { utf8_printf(stderr, "Error: unknown command or invalid arguments: " " \"%s\". Enter \".help\" for help\n", azArg[0]); rc = 1; } meta_command_exit: if( p->outCount ){ p->outCount--; if( p->outCount==0 ) output_reset(p); |
︙
29570 29571 29572 29573 29574 29575 29576 | zSql[nSql] = ';'; zSql[nSql+1] = 0; rc = sqlite3_complete(zSql); zSql[nSql] = 0; return rc; } | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | 27507 27508 27509 27510 27511 27512 27513 27514 27515 27516 27517 27518 27519 27520 | zSql[nSql] = ';'; zSql[nSql+1] = 0; rc = sqlite3_complete(zSql); zSql[nSql] = 0; return rc; } /* ** Run a single line of SQL. Return the number of errors. */ static int runOneSqlLine(ShellState *p, char *zSql, FILE *in, int startline){ int rc; char *zErrMsg = 0; |
︙
29688 29689 29690 29691 29692 29693 29694 | } if( in!=0 || !stdin_is_interactive ){ sqlite3_snprintf(sizeof(zPrefix), zPrefix, "%s near line %d:", zErrorType, startline); }else{ sqlite3_snprintf(sizeof(zPrefix), zPrefix, "%s:", zErrorType); } | | | < < | | 27543 27544 27545 27546 27547 27548 27549 27550 27551 27552 27553 27554 27555 27556 27557 27558 27559 27560 27561 27562 27563 27564 27565 27566 27567 27568 27569 27570 27571 27572 | } if( in!=0 || !stdin_is_interactive ){ sqlite3_snprintf(sizeof(zPrefix), zPrefix, "%s near line %d:", zErrorType, startline); }else{ sqlite3_snprintf(sizeof(zPrefix), zPrefix, "%s:", zErrorType); } utf8_printf(stderr, "%s %s\n", zPrefix, zErrorTail); sqlite3_free(zErrMsg); zErrMsg = 0; return 1; }else if( ShellHasFlag(p, SHFLG_CountChanges) ){ char zLineBuf[2000]; sqlite3_snprintf(sizeof(zLineBuf), zLineBuf, "changes: %lld total_changes: %lld", sqlite3_changes64(p->db), sqlite3_total_changes64(p->db)); raw_printf(p->out, "%s\n", zLineBuf); } return 0; } static void echo_group_input(ShellState *p, const char *zDo){ if( ShellHasFlag(p, SHFLG_Echo) ) utf8_printf(p->out, "%s\n", zDo); } #ifdef SQLITE_SHELL_FIDDLE /* ** Alternate one_input_line() impl for wasm mode. This is not in the primary ** impl because we need the global shellState and cannot access it from that ** function without moving lots of code around (creating a larger/messier diff). |
︙
29763 29764 29765 29766 29767 29768 29769 | int rc; /* Error code */ int errCnt = 0; /* Number of errors seen */ i64 startline = 0; /* Line number for start of current input */ QuickScanState qss = QSS_Start; /* Accumulated line status (so far) */ if( p->inputNesting==MAX_INPUT_NESTING ){ /* This will be more informative in a later version. */ | | | | | 27616 27617 27618 27619 27620 27621 27622 27623 27624 27625 27626 27627 27628 27629 27630 27631 27632 27633 27634 27635 27636 27637 27638 27639 27640 27641 27642 | int rc; /* Error code */ int errCnt = 0; /* Number of errors seen */ i64 startline = 0; /* Line number for start of current input */ QuickScanState qss = QSS_Start; /* Accumulated line status (so far) */ if( p->inputNesting==MAX_INPUT_NESTING ){ /* This will be more informative in a later version. */ utf8_printf(stderr,"Input nesting limit (%d) reached at line %d." " Check recursion.\n", MAX_INPUT_NESTING, p->lineno); return 1; } ++p->inputNesting; p->lineno = 0; CONTINUE_PROMPT_RESET; while( errCnt==0 || !bail_on_error || (p->in==0 && stdin_is_interactive) ){ fflush(p->out); zLine = one_input_line(p->in, zLine, nSql>0); if( zLine==0 ){ /* End of input */ if( p->in==0 && stdin_is_interactive ) printf("\n"); break; } if( seenInterrupt ){ if( p->in!=0 ) break; seenInterrupt = 0; } p->lineno++; |
︙
29985 29986 29987 29988 29989 29990 29991 | if( sqliterc == NULL ){ sqliterc = find_xdg_config(); } if( sqliterc == NULL ){ home_dir = find_home_dir(0); if( home_dir==0 ){ | | | | | | 27838 27839 27840 27841 27842 27843 27844 27845 27846 27847 27848 27849 27850 27851 27852 27853 27854 27855 27856 27857 27858 27859 27860 27861 27862 27863 27864 27865 27866 27867 27868 | if( sqliterc == NULL ){ sqliterc = find_xdg_config(); } if( sqliterc == NULL ){ home_dir = find_home_dir(0); if( home_dir==0 ){ raw_printf(stderr, "-- warning: cannot find home directory;" " cannot read ~/.sqliterc\n"); return; } zBuf = sqlite3_mprintf("%s/.sqliterc",home_dir); shell_check_oom(zBuf); sqliterc = zBuf; } p->in = fopen(sqliterc,"rb"); if( p->in ){ if( stdin_is_interactive ){ utf8_printf(stderr,"-- Loading resources from %s\n",sqliterc); } if( process_input(p) && bail_on_error ) exit(1); fclose(p->in); }else if( sqliterc_override!=0 ){ utf8_printf(stderr,"cannot open: \"%s\"\n", sqliterc); if( bail_on_error ) exit(1); } p->in = inSaved; p->lineno = savedLineno; sqlite3_free(zBuf); } |
︙
30051 30052 30053 30054 30055 30056 30057 30058 30059 | #endif " -memtrace trace all memory allocations and deallocations\n" " -mmap N default mmap size set to N\n" #ifdef SQLITE_ENABLE_MULTIPLEX " -multiplex enable the multiplexor VFS\n" #endif " -newline SEP set output row separator. Default: '\\n'\n" " -nofollow refuse to open symbolic links to database files\n" " -nonce STRING set the safe-mode escape nonce\n" | > > > < > > > > | | | | | | | | | 27904 27905 27906 27907 27908 27909 27910 27911 27912 27913 27914 27915 27916 27917 27918 27919 27920 27921 27922 27923 27924 27925 27926 27927 27928 27929 27930 27931 27932 27933 27934 27935 27936 27937 27938 27939 27940 27941 27942 27943 27944 27945 27946 27947 27948 27949 27950 27951 27952 27953 27954 27955 27956 27957 27958 27959 27960 27961 27962 27963 27964 27965 27966 27967 27968 27969 | #endif " -memtrace trace all memory allocations and deallocations\n" " -mmap N default mmap size set to N\n" #ifdef SQLITE_ENABLE_MULTIPLEX " -multiplex enable the multiplexor VFS\n" #endif " -newline SEP set output row separator. Default: '\\n'\n" #if SHELL_WIN_UTF8_OPT " -no-utf8 do not try to set up UTF-8 output (for legacy)\n" #endif " -nofollow refuse to open symbolic links to database files\n" " -nonce STRING set the safe-mode escape nonce\n" " -nullvalue TEXT set text string for NULL values. Default ''\n" " -pagecache SIZE N use N slots of SZ bytes each for page cache memory\n" " -pcachetrace trace all page cache operations\n" " -quote set output mode to 'quote'\n" " -readonly open the database read-only\n" " -safe enable safe-mode\n" " -separator SEP set output column separator. 
Default: '|'\n" #ifdef SQLITE_ENABLE_SORTER_REFERENCES " -sorterref SIZE sorter references threshold size\n" #endif " -stats print memory stats before each finalize\n" " -table set output mode to 'table'\n" " -tabs set output mode to 'tabs'\n" " -unsafe-testing allow unsafe commands and modes for testing\n" #if SHELL_WIN_UTF8_OPT && 0 /* Option is accepted, but is now the default. */ " -utf8 setup interactive console code page for UTF-8\n" #endif " -version show SQLite version\n" " -vfs NAME use NAME as the default VFS\n" #ifdef SQLITE_ENABLE_VFSTRACE " -vfstrace enable tracing of all VFS calls\n" #endif #ifdef SQLITE_HAVE_ZLIB " -zip open the file as a ZIP Archive\n" #endif ; static void usage(int showDetail){ utf8_printf(stderr, "Usage: %s [OPTIONS] [FILENAME [SQL]]\n" "FILENAME is the name of an SQLite database. A new database is created\n" "if the file does not previously exist. Defaults to :memory:.\n", Argv0); if( showDetail ){ utf8_printf(stderr, "OPTIONS include:\n%s", zOptions); }else{ raw_printf(stderr, "Use the -help option for additional information\n"); } exit(1); } /* ** Internal check: Verify that the SQLite is uninitialized. Print a ** error message if it is initialized. */ static void verify_uninitialized(void){ if( sqlite3_config(-1)==SQLITE_MISUSE ){ utf8_printf(stdout, "WARNING: attempt to configure SQLite after" " initialization.\n"); } } /* ** Initialize the state information in data */ static void main_init(ShellState *data) { |
︙
30125 30126 30127 30128 30129 30130 30131 | sqlite3_snprintf(sizeof(mainPrompt), mainPrompt,"sqlite> "); sqlite3_snprintf(sizeof(continuePrompt), continuePrompt," ...> "); } /* ** Output text to the console in a font that attracts extra attention. */ | | | | | > | | 27984 27985 27986 27987 27988 27989 27990 27991 27992 27993 27994 27995 27996 27997 27998 27999 28000 28001 28002 28003 28004 28005 28006 28007 28008 28009 28010 28011 28012 28013 28014 28015 28016 28017 28018 28019 28020 28021 28022 28023 28024 28025 28026 28027 28028 28029 28030 28031 28032 28033 | sqlite3_snprintf(sizeof(mainPrompt), mainPrompt,"sqlite> "); sqlite3_snprintf(sizeof(continuePrompt), continuePrompt," ...> "); } /* ** Output text to the console in a font that attracts extra attention. */ #ifdef _WIN32 static void printBold(const char *zText){ #if !SQLITE_OS_WINRT HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE); CONSOLE_SCREEN_BUFFER_INFO defaultScreenInfo; GetConsoleScreenBufferInfo(out, &defaultScreenInfo); SetConsoleTextAttribute(out, FOREGROUND_RED|FOREGROUND_INTENSITY ); #endif printf("%s", zText); #if !SQLITE_OS_WINRT SetConsoleTextAttribute(out, defaultScreenInfo.wAttributes); #endif } #else static void printBold(const char *zText){ printf("\033[1m%s\033[0m", zText); } #endif /* ** Get the argument to an --option. Throw an error and die if no argument ** is available. */ static char *cmdline_option_value(int argc, char **argv, int i){ if( i==argc ){ utf8_printf(stderr, "%s: Error: missing argument to %s\n", argv[0], argv[argc-1]); exit(1); } return argv[i]; } static void sayAbnormalExit(void){ if( seenInterrupt ) fprintf(stderr, "Program interrupted.\n"); } #ifndef SQLITE_SHELL_IS_UTF8 # if (defined(_WIN32) || defined(WIN32)) \ && (defined(_MSC_VER) || (defined(UNICODE) && defined(__GNUC__))) # define SQLITE_SHELL_IS_UTF8 (0) # else |
︙ | ︙ | |||
30189 30190 30191 30192 30193 30194 30195 | sqlite3_int64 mem_main_enter = 0; #endif char *zErrMsg = 0; #ifdef SQLITE_SHELL_FIDDLE # define data shellState #else ShellState data; | < | 28049 28050 28051 28052 28053 28054 28055 28056 28057 28058 28059 28060 28061 28062 | sqlite3_int64 mem_main_enter = 0; #endif char *zErrMsg = 0; #ifdef SQLITE_SHELL_FIDDLE # define data shellState #else ShellState data; #endif const char *zInitFile = 0; int i; int rc = 0; int warnInmemoryDb = 0; int readStdin = 1; int nCmd = 0; |
︙ | ︙ | |||
30211 30212 30213 30214 30215 30216 30217 | setvbuf(stderr, 0, _IONBF, 0); /* Make sure stderr is unbuffered */ #ifdef SQLITE_SHELL_FIDDLE stdin_is_interactive = 0; stdout_is_console = 1; data.wasm.zDefaultDbName = "/fiddle.sqlite3"; #else | < | | > > > | > | | | | | | 28070 28071 28072 28073 28074 28075 28076 28077 28078 28079 28080 28081 28082 28083 28084 28085 28086 28087 28088 28089 28090 28091 28092 28093 28094 28095 28096 28097 28098 28099 28100 28101 28102 28103 28104 28105 28106 28107 28108 28109 28110 28111 28112 28113 28114 28115 28116 28117 28118 28119 28120 28121 28122 28123 28124 28125 28126 28127 | setvbuf(stderr, 0, _IONBF, 0); /* Make sure stderr is unbuffered */ #ifdef SQLITE_SHELL_FIDDLE stdin_is_interactive = 0; stdout_is_console = 1; data.wasm.zDefaultDbName = "/fiddle.sqlite3"; #else stdin_is_interactive = isatty(0); stdout_is_console = isatty(1); #endif #if SHELL_WIN_UTF8_OPT probe_console(); /* Check for console I/O and UTF-8 capability. */ if( !mbcs_opted ) atexit(console_restore); #endif atexit(sayAbnormalExit); #ifdef SQLITE_DEBUG mem_main_enter = sqlite3_memory_used(); #endif #if !defined(_WIN32_WCE) if( getenv("SQLITE_DEBUG_BREAK") ){ if( isatty(0) && isatty(2) ){ fprintf(stderr, "attach debugger to process %d and press any key to continue.\n", GETPID()); fgetc(stdin); }else{ #if defined(_WIN32) || defined(WIN32) #if SQLITE_OS_WINRT __debugbreak(); #else DebugBreak(); #endif #elif defined(SIGTRAP) raise(SIGTRAP); #endif } } #endif /* Register a valid signal handler early, before much else is done.
*/ #ifdef SIGINT signal(SIGINT, interrupt_handler); #elif (defined(_WIN32) || defined(WIN32)) && !defined(_WIN32_WCE) if( !SetConsoleCtrlHandler(ConsoleCtrlHandler, TRUE) ){ fprintf(stderr, "No ^C handler.\n"); } #endif #if USE_SYSTEM_SQLITE+0!=1 if( cli_strncmp(sqlite3_sourceid(),SQLITE_SOURCE_ID,60)!=0 ){ utf8_printf(stderr, "SQLite header and source version mismatch\n%s\n%s\n", sqlite3_sourceid(), SQLITE_SOURCE_ID); exit(1); } #endif main_init(&data); /* On Windows, we must translate command-line arguments into UTF-8. ** The SQLite memory allocator subsystem has to be enabled in order to |
︙ | ︙ | |||
30336 30337 30338 30339 30340 30341 30342 30343 30344 30345 30346 30347 30348 30349 30350 | || cli_strcmp(z,"-newline")==0 || cli_strcmp(z,"-cmd")==0 ){ (void)cmdline_option_value(argc, argv, ++i); }else if( cli_strcmp(z,"-init")==0 ){ zInitFile = cmdline_option_value(argc, argv, ++i); }else if( cli_strcmp(z,"-interactive")==0 ){ }else if( cli_strcmp(z,"-batch")==0 ){ /* Need to check for batch mode here to so we can avoid printing ** informational messages (like from process_sqliterc) before ** we do the actual processing of arguments later in a second pass. */ stdin_is_interactive = 0; }else if( cli_strcmp(z,"-utf8")==0 ){ }else if( cli_strcmp(z,"-no-utf8")==0 ){ | > > > > > > > > < > | < < > | 28198 28199 28200 28201 28202 28203 28204 28205 28206 28207 28208 28209 28210 28211 28212 28213 28214 28215 28216 28217 28218 28219 28220 28221 28222 28223 28224 28225 28226 28227 28228 28229 28230 | || cli_strcmp(z,"-newline")==0 || cli_strcmp(z,"-cmd")==0 ){ (void)cmdline_option_value(argc, argv, ++i); }else if( cli_strcmp(z,"-init")==0 ){ zInitFile = cmdline_option_value(argc, argv, ++i); }else if( cli_strcmp(z,"-interactive")==0 ){ /* Need to check for interactive override here to so that it can ** affect console setup (for Windows only) and testing thereof. */ stdin_is_interactive = 1; }else if( cli_strcmp(z,"-batch")==0 ){ /* Need to check for batch mode here to so we can avoid printing ** informational messages (like from process_sqliterc) before ** we do the actual processing of arguments later in a second pass. */ stdin_is_interactive = 0; }else if( cli_strcmp(z,"-utf8")==0 ){ #if SHELL_WIN_UTF8_OPT /* Option accepted, but is ignored except for this diagnostic.
*/ if( mbcs_opted ) fprintf(stderr, "Cannot do UTF-8 at this console.\n"); #endif /* SHELL_WIN_UTF8_OPT */ }else if( cli_strcmp(z,"-no-utf8")==0 ){ #if SHELL_WIN_UTF8_OPT mbcs_opted = 1; #endif /* SHELL_WIN_UTF8_OPT */ }else if( cli_strcmp(z,"-heap")==0 ){ #if defined(SQLITE_ENABLE_MEMSYS3) || defined(SQLITE_ENABLE_MEMSYS5) const char *zSize; sqlite3_int64 szHeap; zSize = cmdline_option_value(argc, argv, ++i); szHeap = integerValue(zSize); |
︙ | ︙ | |||
30482 30483 30484 30485 30486 30487 30488 | #endif if( zVfs ){ sqlite3_vfs *pVfs = sqlite3_vfs_find(zVfs); if( pVfs ){ sqlite3_vfs_register(pVfs, 1); }else{ | | > > > > > > > > > | | 28351 28352 28353 28354 28355 28356 28357 28358 28359 28360 28361 28362 28363 28364 28365 28366 28367 28368 28369 28370 28371 28372 28373 28374 28375 28376 28377 28378 28379 28380 28381 28382 28383 28384 | #endif if( zVfs ){ sqlite3_vfs *pVfs = sqlite3_vfs_find(zVfs); if( pVfs ){ sqlite3_vfs_register(pVfs, 1); }else{ utf8_printf(stderr, "no such VFS: \"%s\"\n", zVfs); exit(1); } } #if SHELL_WIN_UTF8_OPT /* Get indicated Windows console setup done before running invocation commands. */ if( in_console || out_console ){ console_prepare_utf8(); } if( !in_console ){ setBinaryMode(stdin, 0); } #endif if( data.pAuxDb->zDbFilename==0 ){ #ifndef SQLITE_OMIT_MEMORYDB data.pAuxDb->zDbFilename = ":memory:"; warnInmemoryDb = argc==1; #else utf8_printf(stderr,"%s: Error: no database filename specified\n", Argv0); return 1; #endif } data.out = stdout; #ifndef SQLITE_SHELL_FIDDLE sqlite3_appendvfs_init(0,0,0); #endif |
︙ | ︙ | |||
30609 30610 30611 30612 30613 30614 30615 | ** prior to sending the SQL into SQLite. Useful for injecting ** crazy bytes in the middle of SQL statements for testing and debugging. */ ShellSetFlag(&data, SHFLG_Backslash); }else if( cli_strcmp(z,"-bail")==0 ){ /* No-op. The bail_on_error flag should already be set. */ }else if( cli_strcmp(z,"-version")==0 ){ | | | | < < < < < | 28487 28488 28489 28490 28491 28492 28493 28494 28495 28496 28497 28498 28499 28500 28501 28502 28503 28504 28505 28506 28507 28508 28509 28510 28511 | ** prior to sending the SQL into SQLite. Useful for injecting ** crazy bytes in the middle of SQL statements for testing and debugging. */ ShellSetFlag(&data, SHFLG_Backslash); }else if( cli_strcmp(z,"-bail")==0 ){ /* No-op. The bail_on_error flag should already be set. */ }else if( cli_strcmp(z,"-version")==0 ){ printf("%s %s (%d-bit)\n", sqlite3_libversion(), sqlite3_sourceid(), 8*(int)sizeof(char*)); return 0; }else if( cli_strcmp(z,"-interactive")==0 ){ /* already handled */ }else if( cli_strcmp(z,"-batch")==0 ){ /* already handled */ }else if( cli_strcmp(z,"-utf8")==0 ){ /* already handled */ }else if( cli_strcmp(z,"-no-utf8")==0 ){ /* already handled */ }else if( cli_strcmp(z,"-heap")==0 ){ i++; }else if( cli_strcmp(z,"-pagecache")==0 ){ i+=2; }else if( cli_strcmp(z,"-lookaside")==0 ){ i+=2; }else if( cli_strcmp(z,"-threadsafe")==0 ){ |
︙ | ︙ | |||
30671 30672 30673 30674 30675 30676 30677 | if( z[0]=='.' ){ rc = do_meta_command(z, &data); if( rc && bail_on_error ) return rc==2 ? 0 : rc; }else{ open_db(&data, 0); rc = shell_exec(&data, z, &zErrMsg); if( zErrMsg!=0 ){ | | | | | | | | 28544 28545 28546 28547 28548 28549 28550 28551 28552 28553 28554 28555 28556 28557 28558 28559 28560 28561 28562 28563 28564 28565 28566 28567 28568 28569 28570 28571 28572 28573 28574 28575 28576 28577 28578 28579 28580 28581 28582 28583 28584 28585 28586 28587 28588 | if( z[0]=='.' ){ rc = do_meta_command(z, &data); if( rc && bail_on_error ) return rc==2 ? 0 : rc; }else{ open_db(&data, 0); rc = shell_exec(&data, z, &zErrMsg); if( zErrMsg!=0 ){ utf8_printf(stderr,"Error: %s\n", zErrMsg); if( bail_on_error ) return rc!=0 ? rc : 1; }else if( rc!=0 ){ utf8_printf(stderr,"Error: unable to process SQL \"%s\"\n", z); if( bail_on_error ) return rc; } } #if !defined(SQLITE_OMIT_VIRTUALTABLE) && defined(SQLITE_HAVE_ZLIB) }else if( cli_strncmp(z, "-A", 2)==0 ){ if( nCmd>0 ){ utf8_printf(stderr, "Error: cannot mix regular SQL or dot-commands" " with \"%s\"\n", z); return 1; } open_db(&data, OPEN_DB_ZIPFILE); if( z[2] ){ argv[i] = &z[2]; arDotCommand(&data, 1, argv+(i-1), argc-(i-1)); }else{ arDotCommand(&data, 1, argv+i, argc-i); } readStdin = 0; break; #endif }else if( cli_strcmp(z,"-safe")==0 ){ data.bSafeMode = data.bSafeModePersist = 1; }else if( cli_strcmp(z,"-unsafe-testing")==0 ){ /* Acted upon in first pass. */ }else{ utf8_printf(stderr,"%s: Error: unknown option: %s\n", Argv0, z); raw_printf(stderr,"Use -help for a list of options.\n"); return 1; } data.cMode = data.mode; } if( !readStdin ){ /* Run all arguments that do not begin with '-' as if they were separate |
︙ | ︙ | |||
30725 30726 30727 30728 30729 30730 30731 | } }else{ open_db(&data, 0); echo_group_input(&data, azCmd[i]); rc = shell_exec(&data, azCmd[i], &zErrMsg); if( zErrMsg || rc ){ if( zErrMsg!=0 ){ | | | > | | | < > > > > > | | | > | | | | 28598 28599 28600 28601 28602 28603 28604 28605 28606 28607 28608 28609 28610 28611 28612 28613 28614 28615 28616 28617 28618 28619 28620 28621 28622 28623 28624 28625 28626 28627 28628 28629 28630 28631 28632 28633 28634 28635 28636 28637 28638 28639 28640 28641 28642 28643 28644 28645 28646 28647 | } }else{ open_db(&data, 0); echo_group_input(&data, azCmd[i]); rc = shell_exec(&data, azCmd[i], &zErrMsg); if( zErrMsg || rc ){ if( zErrMsg!=0 ){ utf8_printf(stderr,"Error: %s\n", zErrMsg); }else{ utf8_printf(stderr,"Error: unable to process SQL: %s\n", azCmd[i]); } sqlite3_free(zErrMsg); free(azCmd); return rc!=0 ? rc : 1; } } } }else{ /* Run commands received from standard input */ if( stdin_is_interactive ){ char *zHome; char *zHistory; const char *zCharset = ""; int nHistory; #if SHELL_WIN_UTF8_OPT switch( console_utf8_in+2*console_utf8_out ){ default: case 0: break; case 1: zCharset = " (utf8 in)"; break; case 2: zCharset = " (utf8 out)"; break; case 3: zCharset = " (utf8 I/O)"; break; } #endif printf( "SQLite version %s %.19s%s\n" /*extra-version-info*/ "Enter \".help\" for usage hints.\n", sqlite3_libversion(), sqlite3_sourceid(), zCharset ); if( warnInmemoryDb ){ printf("Connected to a "); printBold("transient in-memory database"); printf(".\nUse \".open FILENAME\" to reopen on a " "persistent database.\n"); } zHistory = getenv("SQLITE_HISTORY"); if( zHistory ){ zHistory = strdup(zHistory); }else if( (zHome = find_home_dir(0))!=0 ){ nHistory = strlen30(zHome) + 20; if( (zHistory = malloc(nHistory))!=0 ){ |
︙ | ︙ | |||
30786 30787 30788 30789 30790 30791 30792 | data.in = stdin; rc = process_input(&data); } } #ifndef SQLITE_SHELL_FIDDLE /* In WASM mode we have to leave the db state in place so that ** client code can "push" SQL into it after this call returns. */ | < < < < < | 28665 28666 28667 28668 28669 28670 28671 28672 28673 28674 28675 28676 28677 28678 | data.in = stdin; rc = process_input(&data); } } #ifndef SQLITE_SHELL_FIDDLE /* In WASM mode we have to leave the db state in place so that ** client code can "push" SQL into it after this call returns. */ free(azCmd); set_table_name(&data, 0); if( data.db ){ session_close_all(&data, -1); close_db(data.db); } for(i=0; i<ArraySize(data.aAuxDb); i++){ |
︙ | ︙ | |||
30819 30820 30821 30822 30823 30824 30825 | free(data.colWidth); free(data.zNonce); /* Clear the global data structure so that valgrind will detect memory ** leaks */ memset(&data, 0, sizeof(data)); #ifdef SQLITE_DEBUG if( sqlite3_memory_used()>mem_main_enter ){ | | | | 28693 28694 28695 28696 28697 28698 28699 28700 28701 28702 28703 28704 28705 28706 28707 28708 | free(data.colWidth); free(data.zNonce); /* Clear the global data structure so that valgrind will detect memory ** leaks */ memset(&data, 0, sizeof(data)); #ifdef SQLITE_DEBUG if( sqlite3_memory_used()>mem_main_enter ){ utf8_printf(stderr, "Memory leaked: %u bytes\n", (unsigned int)(sqlite3_memory_used()-mem_main_enter)); } #endif #endif /* !SQLITE_SHELL_FIDDLE */ return rc; } |
︙ | ︙ | |||
30857 30858 30859 30860 30861 30862 30863 | SQLITE_FCNTL_VFS_POINTER, &pVfs); } return pVfs; } /* Only for emcc experimentation purposes. */ sqlite3 * fiddle_db_arg(sqlite3 *arg){ | | | 28731 28732 28733 28734 28735 28736 28737 28738 28739 28740 28741 28742 28743 28744 28745 | SQLITE_FCNTL_VFS_POINTER, &pVfs); } return pVfs; } /* Only for emcc experimentation purposes. */ sqlite3 * fiddle_db_arg(sqlite3 *arg){ printf("fiddle_db_arg(%p)\n", (const void*)arg); return arg; } /* ** Intended to be called via a SharedWorker() while a separate ** SharedWorker() (which manages the wasm module) is performing work ** which should be interrupted. Unfortunately, SharedWorker is not |
︙ | ︙ | |||
30883 30884 30885 30886 30887 30888 30889 | return globalDb ? sqlite3_db_filename(globalDb, zDbName ? zDbName : "main") : NULL; } /* ** Completely wipes out the contents of the currently-opened database | | < < < < < < < < < < | | | 28757 28758 28759 28760 28761 28762 28763 28764 28765 28766 28767 28768 28769 28770 28771 28772 28773 28774 28775 28776 | return globalDb ? sqlite3_db_filename(globalDb, zDbName ? zDbName : "main") : NULL; } /* ** Completely wipes out the contents of the currently-opened database ** but leaves its storage intact for reuse. */ void fiddle_reset_db(void){ if( globalDb ){ int rc = sqlite3_db_config(globalDb, SQLITE_DBCONFIG_RESET_DATABASE, 1, 0); if( 0==rc ) rc = sqlite3_exec(globalDb, "VACUUM", 0, 0, 0); sqlite3_db_config(globalDb, SQLITE_DBCONFIG_RESET_DATABASE, 0, 0); } } /* ** Uses the current database's VFS xRead to stream the db file's ** contents out to the given callback. The callback gets a single |
︙ | ︙ |
Changes to extsrc/sqlite3.c.
more than 10,000 changes
Changes to extsrc/sqlite3.h.
︙ | ︙ | |||
142 143 144 145 146 147 148 | ** been edited in any way since it was last checked in, then the last ** four hexadecimal digits of the hash may be modified. ** ** See also: [sqlite3_libversion()], ** [sqlite3_libversion_number()], [sqlite3_sourceid()], ** [sqlite_version()] and [sqlite_source_id()]. */ | | | | | 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 | ** been edited in any way since it was last checked in, then the last ** four hexadecimal digits of the hash may be modified. ** ** See also: [sqlite3_libversion()], ** [sqlite3_libversion_number()], [sqlite3_sourceid()], ** [sqlite_version()] and [sqlite_source_id()]. */ #define SQLITE_VERSION "3.44.0" #define SQLITE_VERSION_NUMBER 3044000 #define SQLITE_SOURCE_ID "2023-11-01 11:23:50 17129ba1ff7f0daf37100ee82d507aef7827cf38de1866e2633096ae6ad81301" /* ** CAPI3REF: Run-Time Library Version Numbers ** KEYWORDS: sqlite3_version sqlite3_sourceid ** ** These interfaces provide the same information as the [SQLITE_VERSION], ** [SQLITE_VERSION_NUMBER], and [SQLITE_SOURCE_ID] C preprocessor macros |
︙ | ︙ | |||
416 417 418 419 420 421 422 | ** <ul> ** <li> The application must ensure that the 1st parameter to sqlite3_exec() ** is a valid and open [database connection]. ** <li> The application must not close the [database connection] specified by ** the 1st parameter to sqlite3_exec() while sqlite3_exec() is running. ** <li> The application must not modify the SQL statement text passed into ** the 2nd parameter of sqlite3_exec() while sqlite3_exec() is running. | < < | 416 417 418 419 420 421 422 423 424 425 426 427 428 429 | ** <ul> ** <li> The application must ensure that the 1st parameter to sqlite3_exec() ** is a valid and open [database connection]. ** <li> The application must not close the [database connection] specified by ** the 1st parameter to sqlite3_exec() while sqlite3_exec() is running. ** <li> The application must not modify the SQL statement text passed into ** the 2nd parameter of sqlite3_exec() while sqlite3_exec() is running. ** </ul> */ SQLITE_API int sqlite3_exec( sqlite3*, /* An open database */ const char *sql, /* SQL to be evaluated */ int (*callback)(void*,int,char**,char**), /* Callback function */ void *, /* 1st argument to callback */ |
︙ | ︙ | |||
760 761 762 763 764 765 766 | ** <li> [SQLITE_LOCK_SHARED], ** <li> [SQLITE_LOCK_RESERVED], ** <li> [SQLITE_LOCK_PENDING], or ** <li> [SQLITE_LOCK_EXCLUSIVE]. ** </ul> ** xLock() upgrades the database file lock. In other words, xLock() moves the ** database file lock in the direction NONE toward EXCLUSIVE. The argument to | | | | 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 | ** <li> [SQLITE_LOCK_SHARED], ** <li> [SQLITE_LOCK_RESERVED], ** <li> [SQLITE_LOCK_PENDING], or ** <li> [SQLITE_LOCK_EXCLUSIVE]. ** </ul> ** xLock() upgrades the database file lock. In other words, xLock() moves the ** database file lock in the direction NONE toward EXCLUSIVE. The argument to ** xLock() is always on of SHARED, RESERVED, PENDING, or EXCLUSIVE, never ** SQLITE_LOCK_NONE. If the database file lock is already at or above the ** requested lock, then the call to xLock() is a no-op. ** xUnlock() downgrades the database file lock to either SHARED or NONE. * If the lock is already at or below the requested lock state, then the call ** to xUnlock() is a no-op. ** The xCheckReservedLock() method checks whether any database connection, ** either in this process or in some other process, is holding a RESERVED, ** PENDING, or EXCLUSIVE lock on the file. It returns true ** if such a lock exists and false otherwise. ** ** The xFileControl() method is a generic interface that allows custom |
︙ | ︙ | |||
2139 2140 2141 2142 2143 2144 2145 | ** [sqlite3_int64] parameter which is the default maximum size for an in-memory ** database created using [sqlite3_deserialize()]. This default maximum ** size can be adjusted up or down for individual databases using the ** [SQLITE_FCNTL_SIZE_LIMIT] [sqlite3_file_control|file-control]. If this ** configuration setting is never used, then the default maximum is determined ** by the [SQLITE_MEMDB_DEFAULT_MAXSIZE] compile-time option. If that ** compile-time option is not set, then the default maximum is 1073741824. | < < < < < < < < < < < < < < < < | 2137 2138 2139 2140 2141 2142 2143 2144 2145 2146 2147 2148 2149 2150 | ** [sqlite3_int64] parameter which is the default maximum size for an in-memory ** database created using [sqlite3_deserialize()]. This default maximum ** size can be adjusted up or down for individual databases using the ** [SQLITE_FCNTL_SIZE_LIMIT] [sqlite3_file_control|file-control]. If this ** configuration setting is never used, then the default maximum is determined ** by the [SQLITE_MEMDB_DEFAULT_MAXSIZE] compile-time option. If that ** compile-time option is not set, then the default maximum is 1073741824. ** </dl> */ #define SQLITE_CONFIG_SINGLETHREAD 1 /* nil */ #define SQLITE_CONFIG_MULTITHREAD 2 /* nil */ #define SQLITE_CONFIG_SERIALIZED 3 /* nil */ #define SQLITE_CONFIG_MALLOC 4 /* sqlite3_mem_methods* */ #define SQLITE_CONFIG_GETMALLOC 5 /* sqlite3_mem_methods* */ |
︙ | ︙ | |||
2186 2187 2188 2189 2190 2191 2192 | #define SQLITE_CONFIG_WIN32_HEAPSIZE 23 /* int nByte */ #define SQLITE_CONFIG_PCACHE_HDRSZ 24 /* int *psz */ #define SQLITE_CONFIG_PMASZ 25 /* unsigned int szPma */ #define SQLITE_CONFIG_STMTJRNL_SPILL 26 /* int nByte */ #define SQLITE_CONFIG_SMALL_MALLOC 27 /* boolean */ #define SQLITE_CONFIG_SORTERREF_SIZE 28 /* int nByte */ #define SQLITE_CONFIG_MEMDB_MAXSIZE 29 /* sqlite3_int64 */ | < | 2168 2169 2170 2171 2172 2173 2174 2175 2176 2177 2178 2179 2180 2181 | #define SQLITE_CONFIG_WIN32_HEAPSIZE 23 /* int nByte */ #define SQLITE_CONFIG_PCACHE_HDRSZ 24 /* int *psz */ #define SQLITE_CONFIG_PMASZ 25 /* unsigned int szPma */ #define SQLITE_CONFIG_STMTJRNL_SPILL 26 /* int nByte */ #define SQLITE_CONFIG_SMALL_MALLOC 27 /* boolean */ #define SQLITE_CONFIG_SORTERREF_SIZE 28 /* int nByte */ #define SQLITE_CONFIG_MEMDB_MAXSIZE 29 /* sqlite3_int64 */ /* ** CAPI3REF: Database Connection Configuration Options ** ** These constants are the available integer configuration options that ** can be passed as the second argument to the [sqlite3_db_config()] interface. ** |
︙ | ︙ | |||
3301 3302 3303 3304 3305 3306 3307 | #define SQLITE_DROP_VTABLE 30 /* Table Name Module Name */ #define SQLITE_FUNCTION 31 /* NULL Function Name */ #define SQLITE_SAVEPOINT 32 /* Operation Savepoint Name */ #define SQLITE_COPY 0 /* No longer used */ #define SQLITE_RECURSIVE 33 /* NULL NULL */ /* | | | | 3282 3283 3284 3285 3286 3287 3288 3289 3290 3291 3292 3293 3294 3295 3296 3297 | #define SQLITE_DROP_VTABLE 30 /* Table Name Module Name */ #define SQLITE_FUNCTION 31 /* NULL Function Name */ #define SQLITE_SAVEPOINT 32 /* Operation Savepoint Name */ #define SQLITE_COPY 0 /* No longer used */ #define SQLITE_RECURSIVE 33 /* NULL NULL */ /* ** CAPI3REF: Tracing And Profiling Functions ** METHOD: sqlite3 ** ** These routines are deprecated. Use the [sqlite3_trace_v2()] interface ** instead of the routines described here. ** ** These routines register callback functions that can be used for ** tracing and profiling the execution of SQL statements. ** |
︙ | ︙ | |||
3969 3970 3971 3972 3973 3974 3975 | ** <li> sqlite3_extended_errcode() ** <li> sqlite3_errmsg() ** <li> sqlite3_errmsg16() ** <li> sqlite3_error_offset() ** </ul> ** ** ^The sqlite3_errmsg() and sqlite3_errmsg16() return English-language | | < | | < | 3950 3951 3952 3953 3954 3955 3956 3957 3958 3959 3960 3961 3962 3963 3964 3965 3966 3967 3968 3969 3970 3971 3972 | ** <li> sqlite3_extended_errcode() ** <li> sqlite3_errmsg() ** <li> sqlite3_errmsg16() ** <li> sqlite3_error_offset() ** </ul> ** ** ^The sqlite3_errmsg() and sqlite3_errmsg16() return English-language ** text that describes the error, as either UTF-8 or UTF-16 respectively. ** (See how SQLite handles [invalid UTF] for exceptions to this rule.) ** ^(Memory to hold the error message string is managed internally. ** The application does not need to worry about freeing the result. ** However, the error string might be overwritten or deallocated by ** subsequent calls to other SQLite interface functions.)^ ** ** ^The sqlite3_errstr() interface returns the English-language text ** that describes the [result code], as UTF-8. ** ^(Memory to hold the error message string is managed internally ** and must not be freed by the application)^. ** ** ^If the most recent error references a specific token in the input ** SQL, the sqlite3_error_offset() interface returns the byte offset ** of the start of that token. ^The byte offset returned by ** sqlite3_error_offset() assumes that the input SQL is UTF8. |
︙ | ︙ | |||
5590 5591 5592 5593 5594 5595 5596 | ** are innocuous. Developers are advised to avoid using the ** SQLITE_INNOCUOUS flag for application-defined functions unless the ** function has been carefully audited and found to be free of potentially ** security-adverse side-effects and information-leaks. ** </dd> ** ** [[SQLITE_SUBTYPE]] <dt>SQLITE_SUBTYPE</dt><dd> | | | | | | < | < < < < < < < < < < < < < < | 5569 5570 5571 5572 5573 5574 5575 5576 5577 5578 5579 5580 5581 5582 5583 5584 5585 5586 5587 5588 5589 5590 5591 5592 5593 5594 5595 5596 | ** are innocuous. Developers are advised to avoid using the ** SQLITE_INNOCUOUS flag for application-defined functions unless the ** function has been carefully audited and found to be free of potentially ** security-adverse side-effects and information-leaks. ** </dd> ** ** [[SQLITE_SUBTYPE]] <dt>SQLITE_SUBTYPE</dt><dd> ** The SQLITE_SUBTYPE flag indicates to SQLite that a function may call ** [sqlite3_value_subtype()] to inspect the sub-types of its arguments. ** Specifying this flag makes no difference for scalar or aggregate user ** functions. However, if it is not specified for a user-defined window ** function, then any sub-types belonging to arguments passed to the window ** function may be discarded before the window function is called (i.e. ** sqlite3_value_subtype() will always return 0). ** </dd> ** </dl> */ #define SQLITE_DETERMINISTIC 0x000000800 #define SQLITE_DIRECTONLY 0x000080000 #define SQLITE_SUBTYPE 0x000100000 #define SQLITE_INNOCUOUS 0x000200000 /* ** CAPI3REF: Deprecated Functions ** DEPRECATED ** ** These functions are [deprecated]. In order to maintain ** backwards compatibility with older code, these functions continue |
︙ | ︙ | |||
5815 5816 5817 5818 5819 5820 5821 | ** METHOD: sqlite3_value ** ** The sqlite3_value_subtype(V) function returns the subtype for ** an [application-defined SQL function] argument V. The subtype ** information can be used to pass a limited amount of context from ** one SQL function to another. Use the [sqlite3_result_subtype()] ** routine to set the subtype for the return value of an SQL function. | < < < < < < | 5779 5780 5781 5782 5783 5784 5785 5786 5787 5788 5789 5790 5791 5792 | ** METHOD: sqlite3_value ** ** The sqlite3_value_subtype(V) function returns the subtype for ** an [application-defined SQL function] argument V. The subtype ** information can be used to pass a limited amount of context from ** one SQL function to another. Use the [sqlite3_result_subtype()] ** routine to set the subtype for the return value of an SQL function. */ SQLITE_API unsigned int sqlite3_value_subtype(sqlite3_value*); /* ** CAPI3REF: Copy And Free SQL Values ** METHOD: sqlite3_value ** |
︙ | ︙ | |||
5951 5952 5953 5954 5955 5956 5957 | ** SQLite is free to discard the auxiliary data at any time, including: <ul> ** <li> ^(when the corresponding function parameter changes)^, or ** <li> ^(when [sqlite3_reset()] or [sqlite3_finalize()] is called for the ** SQL statement)^, or ** <li> ^(when sqlite3_set_auxdata() is invoked again on the same ** parameter)^, or ** <li> ^(during the original sqlite3_set_auxdata() call when a memory | | < < < | | < < < < < | 5909 5910 5911 5912 5913 5914 5915 5916 5917 5918 5919 5920 5921 5922 5923 5924 5925 5926 5927 5928 5929 5930 | ** SQLite is free to discard the auxiliary data at any time, including: <ul> ** <li> ^(when the corresponding function parameter changes)^, or ** <li> ^(when [sqlite3_reset()] or [sqlite3_finalize()] is called for the ** SQL statement)^, or ** <li> ^(when sqlite3_set_auxdata() is invoked again on the same ** parameter)^, or ** <li> ^(during the original sqlite3_set_auxdata() call when a memory ** allocation error occurs.)^ </ul> ** ** Note the last bullet in particular. The destructor X in ** sqlite3_set_auxdata(C,N,P,X) might be called immediately, before the ** sqlite3_set_auxdata() interface even returns. Hence sqlite3_set_auxdata() ** should be called near the end of the function implementation and the ** function implementation should not make any use of P after ** sqlite3_set_auxdata() has been called. ** ** ^(In practice, auxiliary data is preserved between function calls for ** function parameters that are compile-time constants, including literal ** values and [parameters] and expressions composed from the same.)^ ** ** The value of the N parameter to these interfaces should be non-negative. ** Future enhancements may make use of negative N values to define new |
︙ | ︙ | |||
6240 6241 6242 6243 6244 6245 6246 | ** The sqlite3_result_subtype(C,T) function causes the subtype of ** the result from the [application-defined SQL function] with ** [sqlite3_context] C to be the value T. Only the lower 8 bits ** of the subtype T are preserved in current versions of SQLite; ** higher order bits are discarded. ** The number of subtype bytes preserved by SQLite might increase ** in future releases of SQLite. | < < < < < < < < < < < < < < | 6190 6191 6192 6193 6194 6195 6196 6197 6198 6199 6200 6201 6202 6203 | ** The sqlite3_result_subtype(C,T) function causes the subtype of ** the result from the [application-defined SQL function] with ** [sqlite3_context] C to be the value T. Only the lower 8 bits ** of the subtype T are preserved in current versions of SQLite; ** higher order bits are discarded. ** The number of subtype bytes preserved by SQLite might increase ** in future releases of SQLite. */ SQLITE_API void sqlite3_result_subtype(sqlite3_context*,unsigned int); /* ** CAPI3REF: Define New Collating Sequences ** METHOD: sqlite3 ** |
︙ | ︙ | |||
6883 6884 6885 6886 6887 6888 6889 | ** ^In the current implementation, the update hook ** is not invoked when conflicting rows are deleted because of an ** [ON CONFLICT | ON CONFLICT REPLACE] clause. ^Nor is the update hook ** invoked when rows are deleted using the [truncate optimization]. ** The exceptions defined in this paragraph might change in a future ** release of SQLite. ** | < < < < < < | 6819 6820 6821 6822 6823 6824 6825 6826 6827 6828 6829 6830 6831 6832 | ** ^In the current implementation, the update hook ** is not invoked when conflicting rows are deleted because of an ** [ON CONFLICT | ON CONFLICT REPLACE] clause. ^Nor is the update hook ** invoked when rows are deleted using the [truncate optimization]. ** The exceptions defined in this paragraph might change in a future ** release of SQLite. ** ** The update hook implementation must not do anything that will modify ** the database connection that invoked the update hook. Any actions ** to modify the database connection must be deferred until after the ** completion of the [sqlite3_step()] call that triggered the update hook. ** Note that [sqlite3_prepare_v2()] and [sqlite3_step()] both modify their ** database connections for the meaning of "modify" in this paragraph. ** |
︙ | ︙ | |||
8060 8061 8062 8063 8064 8065 8066 | ** In such cases, the ** mutex must be exited an equal number of times before another thread ** can enter.)^ If the same thread tries to enter any mutex other ** than an SQLITE_MUTEX_RECURSIVE more than once, the behavior is undefined. ** ** ^(Some systems (for example, Windows 95) do not support the operation ** implemented by sqlite3_mutex_try(). On those systems, sqlite3_mutex_try() | | | | < < | 7990 7991 7992 7993 7994 7995 7996 7997 7998 7999 8000 8001 8002 8003 8004 8005 8006 | ** In such cases, the ** mutex must be exited an equal number of times before another thread ** can enter.)^ If the same thread tries to enter any mutex other ** than an SQLITE_MUTEX_RECURSIVE more than once, the behavior is undefined. ** ** ^(Some systems (for example, Windows 95) do not support the operation ** implemented by sqlite3_mutex_try(). On those systems, sqlite3_mutex_try() ** will always return SQLITE_BUSY. The SQLite core only ever uses ** sqlite3_mutex_try() as an optimization so this is acceptable ** behavior.)^ ** ** ^The sqlite3_mutex_leave() routine exits a mutex that was ** previously entered by the same thread. The behavior ** is undefined if the mutex is not currently entered by the ** calling thread or is not currently allocated. ** ** ^If the argument to sqlite3_mutex_enter(), sqlite3_mutex_try(), |
︙ | ︙ | |||
8323 8324 8325 8326 8327 8328 8329 | #define SQLITE_TESTCTRL_BITVEC_TEST 8 #define SQLITE_TESTCTRL_FAULT_INSTALL 9 #define SQLITE_TESTCTRL_BENIGN_MALLOC_HOOKS 10 #define SQLITE_TESTCTRL_PENDING_BYTE 11 #define SQLITE_TESTCTRL_ASSERT 12 #define SQLITE_TESTCTRL_ALWAYS 13 #define SQLITE_TESTCTRL_RESERVE 14 /* NOT USED */ | < | 8251 8252 8253 8254 8255 8256 8257 8258 8259 8260 8261 8262 8263 8264 | #define SQLITE_TESTCTRL_BITVEC_TEST 8 #define SQLITE_TESTCTRL_FAULT_INSTALL 9 #define SQLITE_TESTCTRL_BENIGN_MALLOC_HOOKS 10 #define SQLITE_TESTCTRL_PENDING_BYTE 11 #define SQLITE_TESTCTRL_ASSERT 12 #define SQLITE_TESTCTRL_ALWAYS 13 #define SQLITE_TESTCTRL_RESERVE 14 /* NOT USED */ #define SQLITE_TESTCTRL_OPTIMIZATIONS 15 #define SQLITE_TESTCTRL_ISKEYWORD 16 /* NOT USED */ #define SQLITE_TESTCTRL_SCRATCHMALLOC 17 /* NOT USED */ #define SQLITE_TESTCTRL_INTERNAL_FUNCTIONS 17 #define SQLITE_TESTCTRL_LOCALTIME_FAULT 18 #define SQLITE_TESTCTRL_EXPLAIN_STMT 19 /* NOT USED */ #define SQLITE_TESTCTRL_ONCE_RESET_THRESHOLD 19 |
︙ | ︙ | |||
8359 8360 8361 8362 8363 8364 8365 | ** recognized by SQLite. Applications can uses these routines to determine ** whether or not a specific identifier needs to be escaped (for example, ** by enclosing in double-quotes) so as not to confuse the parser. ** ** The sqlite3_keyword_count() interface returns the number of distinct ** keywords understood by SQLite. ** | | | 8286 8287 8288 8289 8290 8291 8292 8293 8294 8295 8296 8297 8298 8299 8300 | ** recognized by SQLite. Applications can uses these routines to determine ** whether or not a specific identifier needs to be escaped (for example, ** by enclosing in double-quotes) so as not to confuse the parser. ** ** The sqlite3_keyword_count() interface returns the number of distinct ** keywords understood by SQLite. ** ** The sqlite3_keyword_name(N,Z,L) interface finds the N-th keyword and ** makes *Z point to that keyword expressed as UTF8 and writes the number ** of bytes in the keyword into *L. The string that *Z points to is not ** zero-terminated. The sqlite3_keyword_name(N,Z,L) routine returns ** SQLITE_OK if N is within bounds and SQLITE_ERROR if not. If either Z ** or L are NULL or invalid pointers then calls to ** sqlite3_keyword_name(N,Z,L) result in undefined behavior. ** |
︙ | ︙ | |||
12804 12805 12806 12807 12808 12809 12810 | const unsigned char *b; }; /* ** EXTENSION API FUNCTIONS ** ** xUserData(pFts): | | | | 12731 12732 12733 12734 12735 12736 12737 12738 12739 12740 12741 12742 12743 12744 12745 12746 | const unsigned char *b; }; /* ** EXTENSION API FUNCTIONS ** ** xUserData(pFts): ** Return a copy of the context pointer the extension function was ** registered with. ** ** xColumnTotalSize(pFts, iCol, pnToken): ** If parameter iCol is less than zero, set output variable *pnToken ** to the total number of tokens in the FTS5 table. Or, if iCol is ** non-negative but less than the number of columns in the table, return ** the total number of tokens in column iCol, considering all rows in ** the FTS5 table. |
︙ | ︙ | |||
12837 12838 12839 12840 12841 12842 12843 | ** an OOM condition or IO error), an appropriate SQLite error code is ** returned. ** ** This function may be quite inefficient if used with an FTS5 table ** created with the "columnsize=0" option. ** ** xColumnText: | < < < | | < < | | | < | | | | 12764 12765 12766 12767 12768 12769 12770 12771 12772 12773 12774 12775 12776 12777 12778 12779 12780 12781 12782 12783 12784 12785 12786 12787 12788 12789 12790 12791 12792 12793 12794 12795 12796 12797 12798 12799 12800 12801 12802 12803 12804 12805 12806 12807 12808 12809 12810 12811 | ** an OOM condition or IO error), an appropriate SQLite error code is ** returned. ** ** This function may be quite inefficient if used with an FTS5 table ** created with the "columnsize=0" option. ** ** xColumnText: ** This function attempts to retrieve the text of column iCol of the ** current document. If successful, (*pz) is set to point to a buffer ** containing the text in utf-8 encoding, (*pn) is set to the size in bytes ** (not characters) of the buffer and SQLITE_OK is returned. Otherwise, ** if an error occurs, an SQLite error code is returned and the final values ** of (*pz) and (*pn) are undefined. ** ** xPhraseCount: ** Returns the number of phrases in the current query expression. ** ** xPhraseSize: ** Returns the number of tokens in phrase iPhrase of the query. Phrases ** are numbered starting from zero. ** ** xInstCount: ** Set *pnInst to the total number of occurrences of all phrases within ** the query within the current row. Return SQLITE_OK if successful, or ** an error code (i.e. SQLITE_NOMEM) if an error occurs. ** ** This API can be quite slow if used with an FTS5 table created with the ** "detail=none" or "detail=column" option. If the FTS5 table is created ** with either "detail=none" or "detail=column" and "content=" option ** (i.e. if it is a contentless table), then this API always returns 0. 
** ** xInst: ** Query for the details of phrase match iIdx within the current row. ** Phrase matches are numbered starting from zero, so the iIdx argument ** should be greater than or equal to zero and smaller than the value ** output by xInstCount(). ** ** Usually, output parameter *piPhrase is set to the phrase number, *piCol ** to the column in which it occurs and *piOff the token offset of the ** first token of the phrase. Returns SQLITE_OK if successful, or an error ** code (i.e. SQLITE_NOMEM) if an error occurs. ** ** This API can be quite slow if used with an FTS5 table created with the ** "detail=none" or "detail=column" option. ** ** xRowid: ** Returns the rowid of the current row. ** |
︙ | ︙ | |||
12902 12903 12904 12905 12906 12907 12908 | ** phrase iPhrase of the current query is included in $p. For each ** row visited, the callback function passed as the fourth argument ** is invoked. The context and API objects passed to the callback ** function may be used to access the properties of each matched row. ** Invoking Api.xUserData() returns a copy of the pointer passed as ** the third argument to pUserData. ** | < < < < | 12823 12824 12825 12826 12827 12828 12829 12830 12831 12832 12833 12834 12835 12836 | ** phrase iPhrase of the current query is included in $p. For each ** row visited, the callback function passed as the fourth argument ** is invoked. The context and API objects passed to the callback ** function may be used to access the properties of each matched row. ** Invoking Api.xUserData() returns a copy of the pointer passed as ** the third argument to pUserData. ** ** If the callback function returns any value other than SQLITE_OK, the ** query is abandoned and the xQueryPhrase function returns immediately. ** If the returned value is SQLITE_DONE, xQueryPhrase returns SQLITE_OK. ** Otherwise, the error code is propagated upwards. ** ** If the query runs to completion without incident, SQLITE_OK is returned. ** Or, if some error occurs before the query completes or is aborted by |
︙ | ︙ | |||
13020 13021 13022 13023 13024 13025 13026 | ** xPhraseFirstColumn() may also be obtained using xPhraseFirst/xPhraseNext ** (or xInst/xInstCount). The chief advantage of this API is that it is ** significantly more efficient than those alternatives when used with ** "detail=column" tables. ** ** xPhraseNextColumn() ** See xPhraseFirstColumn above. | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | | 12937 12938 12939 12940 12941 12942 12943 12944 12945 12946 12947 12948 12949 12950 12951 12952 12953 | ** xPhraseFirstColumn() may also be obtained using xPhraseFirst/xPhraseNext ** (or xInst/xInstCount). The chief advantage of this API is that it is ** significantly more efficient than those alternatives when used with ** "detail=column" tables. ** ** xPhraseNextColumn() ** See xPhraseFirstColumn above. */ struct Fts5ExtensionApi { int iVersion; /* Currently always set to 2 */ void *(*xUserData)(Fts5Context*); int (*xColumnCount)(Fts5Context*); int (*xRowCount)(Fts5Context*, sqlite3_int64 *pnRow); int (*xColumnTotalSize)(Fts5Context*, int iCol, sqlite3_int64 *pnToken); |
︙ | ︙ | |||
13090 13091 13092 13093 13094 13095 13096 | void *(*xGetAuxdata)(Fts5Context*, int bClear); int (*xPhraseFirst)(Fts5Context*, int iPhrase, Fts5PhraseIter*, int*, int*); void (*xPhraseNext)(Fts5Context*, Fts5PhraseIter*, int *piCol, int *piOff); int (*xPhraseFirstColumn)(Fts5Context*, int iPhrase, Fts5PhraseIter*, int*); void (*xPhraseNextColumn)(Fts5Context*, Fts5PhraseIter*, int *piCol); | < < < < < < < | 12974 12975 12976 12977 12978 12979 12980 12981 12982 12983 12984 12985 12986 12987 | void *(*xGetAuxdata)(Fts5Context*, int bClear); int (*xPhraseFirst)(Fts5Context*, int iPhrase, Fts5PhraseIter*, int*, int*); void (*xPhraseNext)(Fts5Context*, Fts5PhraseIter*, int *piCol, int *piOff); int (*xPhraseFirstColumn)(Fts5Context*, int iPhrase, Fts5PhraseIter*, int*); void (*xPhraseNextColumn)(Fts5Context*, Fts5PhraseIter*, int *piCol); }; /* ** CUSTOM AUXILIARY FUNCTIONS *************************************************************************/ /************************************************************************* |
︙ | ︙ |
Changes to skins/ardoise/css.txt.
︙ | ︙ | |||
614 615 616 617 618 619 620 | ol, p, pre, table, ul { margin-bottom: 1.5rem } | | | | | | | | | | | | 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 | ol, p, pre, table, ul { margin-bottom: 1.5rem } .header { color: #888; font-weight: 400; padding-top: 10px; border-width: 0 } .filetree li > ul:before, .filetree li li:before { border-left: 2px solid #888; content: ''; position: absolute } .filetree>ul, .header .logo, .header .logo h1 { display: inline-block } .header .login { padding-top: 2px; text-align: right } .header .login .button { margin: 0 } .header h1 { margin: 0; color: #888; display: inline-block } .header .title h1 { padding-bottom: 10px } .header .login, .header h1 small, .header h2 small { color: #777 } .middle { background-color: #1d2021; padding-bottom: 20px; max-width: 100%; box-sizing: border-box |
︙ | ︙ | |||
682 683 684 685 686 687 688 | } .artifact_content blockquote:first-of-type { padding: 1px 20px; margin: 0 0 20px; background: #000; border-radius: 5px } | | | | | 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 | } .artifact_content blockquote:first-of-type { padding: 1px 20px; margin: 0 0 20px; background: #000; border-radius: 5px } .footer { padding: 10px 0 60px; border-top: 0; color: #888 } .footer a { color: #527b8f; background-repeat: no-repeat; background-position: center top 10px } .footer a:hover { color: #eef8ff } .mainmenu { background-color: #161819; border-top-right-radius: 15px; border-top-left-radius: 15px; clear: both |
︙ | ︙ | |||
731 732 733 734 735 736 737 | .mainmenu li:hover { background-color: #ff8000; border-radius: 5px } .mainmenu li:hover a { color: #000 } | | | 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 | .mainmenu li:hover { background-color: #ff8000; border-radius: 5px } .mainmenu li:hover a { color: #000 } div#hbdrop { background-color: #161819; border-radius: 15px; display: none; width: 100%; position: absolute; z-index: 20; } |
︙ | ︙ |
Changes to skins/ardoise/footer.txt.
|
| | | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 | <th1> if {[string first artifact $current_page] == 0 || [string first hexdump $current_page] == 0} { html "</div>" } </th1> </div> <!-- end div container --> </div> <!-- end div middle max-full-width --> <div class="footer"> <div class="container"> <div class="pull-right"> <a href="https://fossil-scm.org/">Fossil $release_version $manifest_version $manifest_date</a> </div> This page was generated in about <th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s </div> </div> |
Changes to skins/ardoise/header.txt.
|
| | | 1 2 3 4 5 6 7 8 | <div class="header"> <div class="container"> <div class="login pull-right"> <th1> if {[info exists login]} { html "<b>$login</b> — <a class='button' href='$home/login'>Logout</a>\n" } else { html "<a class='button' href='$home/login'>Login</a>\n" |
︙ | ︙ | |||
16 17 18 19 20 21 22 | html "<a class='rss' href='$home/timeline.rss'></a>" } </th1> <small> $<title></small></h1> </div> <!-- Main Menu --> | | | | | | | | | | | | | | | | | < | | | | | | | | | | | 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 | html "<a class='rss' href='$home/timeline.rss'></a>" } </th1> <small> $<title></small></h1> </div> <!-- Main Menu --> <div class="mainmenu"> <ul> <th1> html "<li><a id='hbbtn' href='$home/sitemap' aria-label='Site Map'>☰</a></li>\n" builtin_request_js hbmenu.js set once 1 foreach {name url expr class} $mainmenu { if {![capexpr $expr]} continue if {$once && [string match $url\[/?#\]* /$current_page/]} { set class "$class active" set once 0 } html "<li class='$class'>" if {[string match /* $url]} {set url $home$url} html "<a href='$url'>$name</a></li>\n" } </th1></ul> </div> <!-- end div mainmenu --> <div id="hbdrop"></div> </div> <!-- end div container --> </div> <!-- end div header --> <div class="middle max-full-width"> <div class="container"> <th1> if {[string first artifact $current_page] == 0 || [string first hexdump $current_page] == 0} { html "<div class=\"artifact_content\">" } </th1> |
Changes to skins/black_and_white/css.txt.
︙ | ︙ | |||
47 48 49 50 51 52 53 | color: #333; font-size: 0.8em; font-weight: bold; white-space: nowrap; } /* The header across the top of the page */ | | | | | | | 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 | color: #333; font-size: 0.8em; font-weight: bold; white-space: nowrap; } /* The header across the top of the page */ div.header { margin:10px 0px 10px 0px; padding:1px 0px 0px 20px; border-style:solid; border-color:black; border-width:1px 0px; background-color:#eee; } /* The main menu bar that appears at the top left of the page beneath ** the header. Width must be co-ordinated with the container below */ div.mainmenu { float: left; margin-left: 10px; margin-right: 20px; font-size: 0.9em; font-weight: bold; padding:5px; background-color:#eee; border:1px solid #999; width:6em; } /* Main menu is now a list */ div.mainmenu ul { padding: 0; list-style:none; } div.mainmenu a, div.mainmenu a:visited{ padding: 1px 10px 1px 10px; color: #333; text-decoration: none; } div.mainmenu a:hover { color: #eee; background-color: #333; } /* Container for the sub-menu and content so they don't spread ** out underneath the main menu */ #container { |
︙ | ︙ | |||
147 148 149 150 151 152 153 | float: left; clear: left; color: #333; white-space: nowrap; } /* The footer at the very bottom of the page */ | | | 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 | float: left; clear: left; color: #333; white-space: nowrap; } /* The footer at the very bottom of the page */ div.footer { font-size: 0.8em; margin-top: 12px; padding: 5px 10px 5px 10px; text-align: right; background-color: #eee; color: #555; } |
︙ | ︙ |
Changes to skins/black_and_white/footer.txt.
|
| | | | 1 2 3 | <div class="footer"> Fossil $release_version $manifest_version $manifest_date </div> |
Changes to skins/black_and_white/header.txt.
|
| | | | | | | | | | | | | | | | | | | | | < | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 | <div class="header"> <div class="logo"> <img src="$logo_image_url" alt="logo"> <br />$<project_name> </div> <div class="title">$<title></div> <div class="status"><th1> if {[info exists login]} { puts "Logged in as $login" } else { puts "Not logged in" } </th1></div> </div> <div class="mainmenu"> <th1> set sitemap 0 foreach {name url expr class} $mainmenu { if {![capexpr $expr]} continue if {[string match /* $url]} {set url $home$url} html "<a href='$url'>$name</a><br/>\n" if {[string match /sitemap $url]} {set sitemap 1} } if {!$sitemap} { html "<a href='$home/sitemap'>Sitemap</a>\n" } </th1></div> |
Changes to skins/blitz/css.txt.
︙ | ︙ | |||
753 754 755 756 757 758 759 | box-sizing: border-box; } /* Header * Div displayed at the top of every page. ––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––– */ | | | | | | | | | | | 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 | box-sizing: border-box; } /* Header * Div displayed at the top of every page. ––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––– */ .header { color: #666; font-weight: 400; padding-top: 10px; border-width: 0px; border-top: 4px solid #446979; border-bottom: 1px solid #ccc; } .header .logo { display: inline-block; } .header .login { padding-top: 2px; text-align: right; } .header .login .button { margin: 0; } .header h1 { margin: 0px; color: #666; display: inline-block; } .header .logo h1 { display: inline-block; } .header .title h1 { padding-bottom: 10px; } .header h1 small, .header h2 small { color: #888; } .header a.rss { display: inline-block; padding: 10px 15px; background-image: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAYAAACNiR0NAAAABmJLR0QA/wD/AP+gvaeTAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAB3RJTUUH3wMNDhwn05VjawAAABl0RVh0Q29tbWVudABDcmVhdGVkIHdpdGggR0lNUFeBDhcAAAGlSURBVDjLrdPfb8xREAXwT7tIl+paVNaPJghCKC8kXv0XXvyNXsRfwYPQJqVKiqykWFVZXd12vcxNJtduUtJJvrm7984998ycMxxwNGI9jPs4j7nY+/U/gIdiPYO71dk21rCE7r8ybOHGmMfmcRNnsbEf1gXwNzqYSXs5WljEMXzAaBLg1Ji9Js7hOi6OeeAznqC/X8AcMyHWYpX7E4/Rm1QyHMdefCWGeI/VcMDR2D8S7Fci5y/AeTzCPVyLi1sYJAut4BTaiX0n9kc14MmkcjPY3I5LXezGtxqKtyJ3Lir6VAM2AmCq6m8Hl6PsQTB5hyvxmMhZxk4G3MZLfAwLtdNZM9rwOs528TVVNB3ga7UoQ2wGmyWciFaU0VwIJiP8iL6Xfp7GK+w0JthliDep8UKonTSGvbBTaU8f3QzYxgPcCsBvWK9E6OBFCNGPVjTTqC430p+H6fLVGLGtmIw7SbwevqT+XkgVPJ9Otpmtyl6I9XswLXEp/d6oPN0ugJu14xMLob4kgPRYjtkCOMDTUG+AZ3ibEtfDLorfEmAB3UuTdXDxBzUUZV+B82aLAAAAAElFTkSuQmCC); background-position: center center; background-repeat: 
no-repeat; } |
︙ | ︙ | |||
828 829 830 831 832 833 834 | color: #002060; } /* Footer * Displayed after the middle div and forms the page bottom. ––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––– */ | | | | 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 | color: #002060; } /* Footer * Displayed after the middle div and forms the page bottom. ––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––– */ .footer { padding: 10px 0 60px; border-top: 1px solid #ccc; background-color: #f8f8f8; background-image: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAFAAAABGCAQAAADxl9ugAAAAAmJLR0QA/4ePzL8AAAAJcEhZcwAACxMAAAsTAQCanBgAAAAHdElNRQffAwkQBRPw+yfrAAAAPHRFWHRDb21tZW50ACBJbWFnZSBnZW5lcmF0ZWQgYnkgRVNQIEdob3N0c2NyaXB0IChkZXZpY2U9cG5tcmF3KQqV01S1AAANg0lEQVRo3s2aa3RVRZbH/3tXnftIbhJCeJMXyEsNCsgrSBzAYXyt0YVj2zSOonbLqNNO+25FB2kdoBVdrp6l09P2NNpOL/ExSrtUdGRU5JEQgggCggIhQIBAIIGQ5N57TlXt+ZAICY+AGCT76zlV91dVe+/a+38uDQ7dYmrxw41QocJuni28vWTedEa7Gd9iatplos7Y2mfDRED6A2G0I2At6AdPIiA84HgbLhx1o9QX9Zoh7QjYHpOkYJYrfNyOKH0O/f2FJpfkSRmMCNqDk9rDXwTAuxK7h3uFf7vXxC6QVO8w19taNOT4Y0wu1sCdW0AA6IZ73bCUaI/GHuqnsJqcEqVrzSo+wNtLdgAPsz6jeVURtQ9gA8bQT8xztW7X2u3I4G6UdA0cohxcSAN7nt939/zDQzl0LnfwaLpxqFT/i7w7XLYXhoUQ+YFW3pJlC0/nt6jZZb4zjXY2AaGXvQOz/gMY09NM4C6ixRL7OU3PTm4ORdiIjyLsJvq1Z28Hj5rBM82xMTqPuhHznmU72vq12W5M3+LykQWUW9pirzXOmmk8wjE87kb8HPmSKlY+wY62R9jhI69xf/AmYiEfiXs+e4CCAZiDUc9wP6cRIJn8tq23PQClbwap9Ivk7usiGj/CDs5xl3Qe/EiQCEWsgXLlq8sfamM73osMna43ef/pfm731Hsxv+Z00ozFzaJpF+gMrsNB2BwL3cGpSsNRWHbmvTQNiTbev9q4xXszMLzqpdgDNjxv/YjmvHlSQIc5brl8nrqT5pllMpHM9wRc4gU3oC8JyDLt8P78ZLyM2lrmAeTg5YrKdf1S1N9QTs6At1YM49DJo1gwWy7uE5kKBQHbler9K8R+L8API2qmFoDY7ejxpxGJxtMctyZcfYvJDYX4i5LXp/MJAS2edkM6edNUDuLNOdMDmz913bIw8RidKO/1w+3NQfePbHQjdbd/NKOjbq7UqQOmYtXb/+BdbGc4AJjPX53CXe6UGweYWxUR8N8rNkzn4wAFc9yYPvGbI+mttoycpo14fWbD0hZv+njOAcBl6Yl8jiEMbSKhqGPypZ5N0B2kDxrjOjF0QuI2oRqj2z6rAebx5pNiRrEo0jhV9wFkP//xpkPHAU6UGdnBLyR8jMOQs6xwcMAznVys+Uqa5YCCnilFkuVSkeKlGKfDED8
JaI+VifMWR9LPC/u+UkoZA6edOezVu83pny4yD6qQnCxhDzk/MhUED+8PWHJMkAgWROvvQoxOcEdayynVY9/5tIgAwsfIvaD3XaFx6M6ddMx3WlurCGICiAIhMLxB1VM+wQRMDOeYjHCUMum85OW909/acOlJgqaI3qrulU9dYGlgzVI1tsVrETzlOt3MeUGCudVoCgJmAhFU78F+6Usu0V39iylSHCRZMROM0do5pUDWQamEPZTy5oqyXVt77bG94HGEyTkCE8BsjLOcl11Uvf7V+qvIP86jgWH8zhc9LpKYU7qRhH5NTRknwFw3+jpMQCIItG69PCLfZ1YKAMlS/7PQDCc2qbRzSjET4slINBAvhsPxjarKKy/9FhjZG8Ol4tINi/uFzkcfl8sEnwExATGRY73Ze2VpAHxHoPFki7r2iljN3a6LqqOxnZ+vXUBNwTF6uL0e0Nr3tSYWpqNhQtaIaA0A1oZftuf5Y1xmCMRO2IM1VXqf3aYr5IDXODTx7+43PAjPFcit1MANeHfFxuFpSFW9TbbORpaNEqt6/5tIoxuBOO049NHXNTN5lFwlE3o09JUMlWKdSmJ/UC8TvRwqjNzoVwNIw3upcluyq4hSAq3Idyt5HOyRIDGuGdA42un+6wv/Cq7sFA1F/B4N/5MEgL/yOsSPlFSMOzD5JhkMEQ8l8u41Job7hVoExsiRNMU1Biac4oqD92Np8Xt0BgUiEOeIHLyobbQb9U/8fSAItsFdJv1wECASB5L92xb2yefcE3Q+jgsof//m3+PO2i7YhgV0HisAK48UnE03EaT67axMyjMNXpFkf/za8r3VPP2I46Ri+aYDW0z3SMT6NJb62wPiJEFhEoFCYIhNoyS4VF1CBKArPu6MKWysBZhFVFrD7yri2VkuR4QZAIkTMAOAiDIY9KvFQ3gVPqXVFD9JSbSS3vP7b/QzKY8SlO6G5e2fv/dozvDxciKvCqOYrIURT/XU2+0SKrdVqtpVojz4Sq1Vi1duV0UE+PSEy33AhRRZAzDDc4vXrB1xKU0gRUR0LCCRi/be8XZ12/0M41Ka5K/5JjA80PlQKMjfeVVN9ZG0NZbmH8xf5Y8nJxIKmYC7SmLlexeVp2wKb+y++f6KZ3c/03g+qSISPONGDqHRZOk7QGde6XsrjTP+kXTTCpCVGErZteZUDRdhD3Ls65tz64J85ZGyQ0tXDAzckadFlOtv+ZQLoYMgHHaGcrJ7fb56inR3WVJJRVTdVM3kQSh5r1hjlbIGIKVX6ammE1vnmImh4Y4BZHGcvmvx5XQ6BcRIfnPHxV82ZnFUYsh+bWXLzMuwquGwCnvdBUTkqHcPmb+l5cJZcKervlwc8N04sbYwSJC1tsnneemJqh32LksZflr1SUh+pT6q+/ov/krztitDq1s4hP1p7mbfuFfkEKJgSejLh/Wa7Y5GJsdQmEYDoVocDCvnLCvnwPDMzOK/uiVoPS8IxoQi409D2xAU4Hd2xE2DZ3gjvcpVZY+0iqgAbFh7F9grp84xf5D6QFOE/rYo1vto4z6ZVg/COHKAiGJrACaljNHaiaqTp0NeXlr37Ydz/ahSLY9YzLIPMzlxSrxMzM/ochfnO8UHvJJpgTnm+UE5GKMeOu3LrLJPd32+Z1FamfczKli36jzTtHplvPg46o2WgNwEaOqoTk8Jxsu4+vGUruDkCCCJkg171g2ntqMkjrmuYUBwG7LglIq/U7Zr1HEjJrtiay9imIzsqt37gb4/RSZlSsP88iZf5T3kX2xOuBEck2yp13F32NbDuZbHqSjwFrzJbeP5eN6NHOJuoQyXRIZ7Y+3Gh08wohjpW/VOkFMyqagHILnOSAOuflh1bqKggew524Zs1ex+R6cmz1W7Z6clvzzF8T7rLsvnSfBgJVO9cN+ax4/IRwazZLZrqmOSWJjAR0hCdFc/F2AtToTos34POQHAkVw2Sp2m7EKwOEyLVj59w6EtpyjdFYSCCYhaC0n9fcm
mr5pb8QBrQl0wqn/hNc82J8Rfc8lWVwVGwLeNDVPTdW30+KZN0Sab5JTBqOBBowIHzeZVJcBjXHdKXfQp93+DgwuIgy9QtqJ8evPhOtRRzZi30iKfBZPxQVNhofDP6sUXh05zA5MH+WKdpAgAJ/lAChqhyQmd5HgFChFoMvar0AaKq8qSGmA6o03lVFCEqx1Q0Ce0075hkgvW93EPNeMZ3Cl93eRl5Q8GVjoBs1tUN1l/rvkn1VOuNXUIAyATv6zz3bXPEo3t7N/L6bBirfV0MimktOclgnDUJHiTWxo9oJOZQT/TDUk0nIYfFODFaMMN4WFqbnEl8BzXtlBX++ATr2KW3eC9kbg/9d2l6/qmdxrq9bdRkBwMtrgd+lpkpxwyWSwAkvRKSfl0phf4l+6acGVeak6QQbCJYH/Gwci+zPi/+oMFEJpLdTDfQ8IpTTl8r/lkQuniX3oR98mjX65sJQZ0wYNu1NU0wZ8RjSUf0Qo22aCU1oAoCazTFe5bdZ1pVIqcv2B12XSm6awxCFNaichCr1I1auDOQFvKxzQHFD4mexGXNfds/PoYB+iMhxzwIC8vcJe7HKVIrBEoLRQkuVL+Ii70lDlMUOR/uHrJdNaAwXo82orksR8gfVUAGFkgy5JfhKdEKncelwhq8SinwMeidX+303VHuvOgCGSRRF2o6vb6FyKmQTGRNTraQt1qJ6EaAPA4V22MYH20wXx+aCyfKFvFAcymiYfCh/YhSQZACFHpBB/l6On2l0s/AgTBWdIHHbpIulzfWDbvAW4r3gPUIwWZ0lW6SoaE0AgDYG6Six0R4OjgWRMwGUAVHuUz+SS2iNQ+XSMAvKqzrLCemdtUo3ifWhpoTi7b/RSfVRH9zBc2y43umXVgtImf3R08c7zCIeYpVoPOtoh+ZjYEhd3kWtV4IHSldEjAV7U/ATHxig89Tx0QUFCtvUJisxnNknuHArSY4+xtNuk4ZQWai4wOBThJRqZJvhYXtptnNTVAHQnQYbT4k7QI6/Vd6w6f/U9h39cUgEg+BCG3rk/AHQ9wlhsz1IUCI3tRPrDVtdlhzO9rxPNke+nuXR0TUIdAMOrDlkJshwEM4QId98C8vKTmUUJHAwww03VNT8ny9hd/cJ9qCaU7QnrpjXvcsKuT6W7VdcvHc+sq95wDCuZILxp9t+TZJWVL/56PPdJzDhjDhG75UyUrsXXtBw9z/PjAObd3729leA++mTOldO07dyl9ghbmnALmYpjybndhO3/VV/dx9IQdlj53YQEAo2din3p1cv1VfDIF8EfvSdJQSZvU3ljGJJOvNuH94kOPs22jwWq3P5edTryG8G/OdtsxOPiZd6O1ppTeK0n43Hb/9yPuYBhPuFFXumEq3231S+ibL/e+yhtP2Zz+iD74hCu83vblr+W1yJ5LgguxmzedRu/8//AUSaqMTR1xAAAAAElFTkSuQmCC); background-repeat: no-repeat; background-position: center top 10px; } .footer a { color: #3b5c6b; } /* Main Menu * Displayed in header, contains repository links. ––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––– */ |
︙ | ︙ | |||
869 870 871 872 873 874 875 | .mainmenu li.active { background-image: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABEAAAAJCAYAAADU6McMAAAAAXNSR0IArs4c6QAAAAZiS0dEAP8A/wD/oL2nkwAAAAlwSFlzAAALEQAACxEBf2RfkQAAAAd0SU1FB90FDxEXAZ2XRzAAAABJSURBVCjPY2CgBzhz5sx/QmoYiTXAxMSEkWRDsLkAl0GMpHoBm0EoAlu3bmUQFxcnGAboBjEhc4gxAJtLGUmJBVwuYiTXAGSDAIx5IBObnuVxAAAAAElFTkSuQmCC); background-repeat: no-repeat; background-position: center bottom; } .mainmenu li a, | | | | | 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 | .mainmenu li.active { background-image: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABEAAAAJCAYAAADU6McMAAAAAXNSR0IArs4c6QAAAAZiS0dEAP8A/wD/oL2nkwAAAAlwSFlzAAALEQAACxEBf2RfkQAAAAd0SU1FB90FDxEXAZ2XRzAAAABJSURBVCjPY2CgBzhz5sx/QmoYiTXAxMSEkWRDsLkAl0GMpHoBm0EoAlu3bmUQFxcnGAboBjEhc4gxAJtLGUmJBVwuYiTXAGSDAIx5IBObnuVxAAAAAElFTkSuQmCC); background-repeat: no-repeat; background-position: center bottom; } .mainmenu li a, div#hbdrop a { color: #3b5c6b; padding: 10px 15px; } .mainmenu li.active a { font-weight: bold; } .mainmenu li:hover div#hbdrop a:hover { background-color: #eee; } div#hbdrop { background-color: white; border: 2px solid #ccc; display: none; width: 100%; position: absolute; z-index: 20; } |
︙ | ︙ |
Changes to skins/blitz/footer.txt.
1 2 | </div> <!-- end div container --> </div> <!-- end div middle max-full-width --> | | | | 1 2 3 4 5 6 7 8 9 10 | </div> <!-- end div container --> </div> <!-- end div middle max-full-width --> <div class="footer"> <div class="container"> <div class="pull-right"> <a href="https://www.fossil-scm.org/">Fossil $release_version $manifest_version $manifest_date</a> </div> This page was generated in about <th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s </div> </div> |
Changes to skins/blitz/header.txt.
|
| | | 1 2 3 4 5 6 7 8 | <div class="header"> <div class="container"> <!-- Header --> <div class="login pull-right"> <th1> if {[info exists login]} { html "<b>$login</b> — <a class='button' href='$home/login'>Logout</a>\n" |
︙ | ︙ | |||
18 19 20 21 22 23 24 | html "<a class='rss' href='$home/timeline.rss'></a>" } </th1> <small> $<title></small></h1> </div> <!-- Main Menu --> | | < | | | | | | | | | | | | | | | | < | | | | | | 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 | html "<a class='rss' href='$home/timeline.rss'></a>" } </th1> <small> $<title></small></h1> </div> <!-- Main Menu --> <div class="mainmenu"> <ul><th1> html "<li><a id='hbbtn' href='$home/sitemap' aria-label='Site Map'>☰</a></li>\n" builtin_request_js hbmenu.js set once 1 foreach {name url expr class} $mainmenu { if {![capexpr $expr]} continue if {$once && [string match $url\[/?#\]* /$current_page/]} { set class "active $class" set once 0 } html "<li class='$class'>" if {[string match /* $url]} {set url $home$url} html "<a href='$url'>$name</a></li>\n" } </th1></ul> </div> <!-- end div mainmenu --> <div id="hbdrop"></div> </div> <!-- end div container --> </div> <!-- end div header --> <div class="middle max-full-width"> <div class="container"> |
Changes to skins/darkmode/css.txt.
︙ | ︙ | |||
32 33 34 35 36 37 38 | ** the area to show as blank. The purpose is to cause the ** title to be exactly centered. */ div.leftoftitle { visibility: hidden; } /* The header across the top of the page */ | | | | | | | | | 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 | ** the area to show as blank. The purpose is to cause the ** title to be exactly centered. */ div.leftoftitle { visibility: hidden; } /* The header across the top of the page */ div.header { display: table; width: 100%; } /* The main menu bar that appears at the top of the page beneath ** the header */ div.mainmenu { padding: 0.25em 0.5em; font-size: 0.9em; font-weight: bold; text-align: center; border-top-left-radius: 0.5em; border-top-right-radius: 0.5em; border-bottom: 1px dotted rgba(200,200,200,0.3); z-index: 21; /* just above hbdrop */ } div#hbdrop { background-color: #1f1f1f; border: 2px solid #303536; border-radius: 0 0 0.5em 0.5em; display: none; left: 2em; width: calc(100% - 4em); position: absolute; z-index: 20; /* just below mainmenu, but above timeline bubbles */ } div.mainmenu, div.submenu, div.sectionmenu { color: #ffffffcc; background-color: #303536/*#0000ff60*/; } /* The submenu bar that *sometimes* appears below the main menu */ div.submenu, div.sectionmenu { padding: 0.15em 0.5em 0.15em 0; font-size: 0.9em; text-align: center; border-bottom-left-radius: 0.5em; border-bottom-right-radius: 0.5em; } a, a:visited { color: rgba(127, 201, 255, 0.9); display: inline; text-decoration: none; } a:visited {opacity: 0.8} div.mainmenu a, div.submenu a, div.sectionmenu>a.button, div.submenu label, div.footer a { padding: 0.15em 0.5em; } div.mainmenu a.active { border-bottom: 1px solid #FF4500f0; } a:hover, a:visited:hover { background-color: #FF4500f0; color: rgba(24,24,24,0.8); border-radius: 0.1em; |
︙ | ︙ | |||
170 171 172 173 174 175 176 | margin: .2em 0 .2em 0; float: left; clear: left; white-space: nowrap; } /* The footer at the very bottom of the page */ | | | 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 | margin: .2em 0 .2em 0; float: left; clear: left; white-space: nowrap; } /* The footer at the very bottom of the page */ div.footer { clear: both; font-size: 0.8em; padding: 0.15em 0.5em; text-align: right; background-color: #303536/*#0000ff60*/; border-top: 1px dotted rgba(200,200,200,0.3); border-bottom-left-radius: 0.5em; |
︙ | ︙ |
Changes to skins/darkmode/footer.txt.
|
| | | | 1 2 3 4 5 6 7 8 | <div class="footer"> <div class="container"> <div class="pull-right"> <a href="https://www.fossil-scm.org/">Fossil $release_version $manifest_version $manifest_date</a> </div> This page was generated in about <th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s </div> </div> |
Changes to skins/darkmode/header.txt.
|
| | | | | | | | | | | | | | | | | | | < | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 | <div class="header"> <div class="status leftoftitle"><th1> if {[info exists login]} { set logintext "<a href='$home/login'>$login</a>\n" } else { set logintext "<a href='$home/login'>Login</a>\n" } html $logintext </th1></div> <div class="title">$<title></div> <div class="status"><nobr><th1> html $logintext </th1></nobr></div> </div> <div class="mainmenu"> <th1> html "<a id='hbbtn' href='$home/sitemap' aria-label='Site Map'>☰</a>" builtin_request_js hbmenu.js foreach {name url expr class} $mainmenu { if {![capexpr $expr]} continue if {[string match /* $url]} { if {[string match $url\[/?#\]* /$current_page/]} { set class "active $class" } set url $home$url } html "<a href='$url' class='$class'>$name</a>\n" } </th1></div> <div id='hbdrop'></div> |
Changes to skins/default/README.md.
|
| | < < < | < < < > | | 1 2 3 4 5 | This skin was contributed by Étienne Deparis. On 2015-03-14 this skin was promoted from an option to the default, which involved moving it from its original home in the skins/etienne1 directory into skins/default. |
Changes to skins/default/css.txt.
|
| | < > > > > < < < < < < < < < < < | < < < < < < < < < < < < < < < < < < < < < < < < | < < < < < < < > | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 | /* Overall page style */ body { margin: 0 auto; background-color: white; font-family: sans-serif; font-size: 14pt; -moz-text-size-adjust: none; -mx-text-size-adjust: none; -webkit-text-size-adjust: none; } a { color: #4183C4; text-decoration: none; } a:hover { color: #4183C4; text-decoration: underline; } /* Page title, above menu bars */ .title { color: #4183C4; float: left; } .title h1 { display: inline; } .title h1:after { content: " / "; color: #777; font-weight: normal; } .status { float: right; font-size: 0.7em; } /* Main menu and optional sub-menu */ .mainmenu { font-size: 0.8em; clear: both; background: #eaeaea linear-gradient(#fafafa, #eaeaea) repeat-x; border: 1px solid #eaeaea; border-radius: 5px; overflow-x: auto; overflow-y: hidden; white-space: nowrap; z-index: 21; /* just above hbdrop */ } .mainmenu a { text-decoration: none; color: #777; border-right: 1px solid #eaeaea; } .mainmenu a.active, .mainmenu a:hover { color: #000; border-bottom: 2px solid #D26911; } div#hbdrop { background-color: white; border: 1px solid black; border-top: white; border-radius: 0 0 0.5em 0.5em; display: none; font-size: 80%; left: 2em; width: 90%; padding-right: 1em; position: absolute; z-index: 20; /* just below mainmenu, but above timeline bubbles */ } .submenu { font-size: .7em; padding: 10px; border-bottom: 1px solid #ccc; } .submenu a, .submenu label { padding: 10px 11px; text-decoration: none; color: #777; |
︙ | ︙ | |||
141 142 143 144 145 146 147 | white-space: nowrap; } /* Main document area; elements common to most pages. */ .content { | | > | > > > | > > | | > | | > | 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 | white-space: nowrap; } /* Main document area; elements common to most pages. */ .content { padding-top: 10px; font-size: 0.8em; color: #444; } .content blockquote { padding: 0 15px; } .content h1 { font-size: 1.25em; } .content h2 { font-size: 1.15em; } .content h3 { font-size: 1.05em; } .section { font-size: 1em; font-weight: bold; background-color: #f5f5f5; border: 1px solid #d8d8d8; border-radius: 3px 3px 0 0; |
︙ | ︙ | |||
179 180 181 182 183 184 185 | hr { color: #eee; } /* Page footer */ | | | | < < < < < < < < < < < < < < < < < < < < | | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < > < | 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 | hr { color: #eee; } /* Page footer */ .footer { border-top: 1px solid #ccc; padding: 10px; font-size: 0.7em; margin-top: 10px; color: #ccc; } /* Forum */ .forum a:visited { color: #6A7F94; } .forum blockquote { background-color: rgba(65, 131, 196, 0.1); border-left: 3px solid #254769; padding: .1em 1em; } /* Tickets */ table.report { cursor: auto; border-radius: 5px; border: 1px solid #ccc; margin: 1em 0; } .report td, .report th { border: 0; font-size: .8em; padding: 10px; } |
︙ | ︙ | |||
461 462 463 464 465 466 467 | white-space: pre-wrap; } /* Timeline */ span.timelineDetail { | | | | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 | white-space: pre-wrap; } /* Timeline */ span.timelineDetail { font-size: 90%; } div.timelineDate { font-weight: bold; white-space: nowrap; } /* Miscellaneous UI elements */ .fossil-tooltip.help-buttonlet-content { background-color: lightyellow; } /* Exceptions for specific screen sizes */ @media screen and (max-width: 600px) { /* Spacing for mobile */ body { padding-left: 4px; padding-right: 4px; } .title { padding-top: 0px; padding-bottom: 0px; } .status {padding-top: 0px;} .mainmenu a { padding: 8px 10px; } .mainmenu { padding: 10px; } } @media screen and (min-width: 600px) { /* Spacing for desktop */ body { padding-left: 20px; padding-right: 20px; } .title { padding-top: 10px; padding-bottom: 10px; } .status {padding-top: 30px;} .mainmenu a { padding: 8px 20px; } .mainmenu { padding: 10px; } } |
Changes to skins/default/details.txt.
|
timeline-arrowheads: 1
timeline-circle-nodes: 1
timeline-color-graph-lines: 1
white-foreground: 0
Changes to skins/default/footer.txt.
|
<div class="footer">
This page was generated in about
<th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s by
Fossil $release_version $manifest_version $manifest_date
</div>
Changes to skins/default/header.txt.
|
| < | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | < < < | < | | | | | | | < | | | | | | | | | | | | | | | < | < | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 | <div class="header"> <div class="title"><h1>$<project_name></h1>$<title></div> <div class="status"><th1> if {[info exists login]} { html "<a href='$home/login'>$login</a>\n" } else { html "<a href='$home/login'>Login</a>\n" } </th1></div> </div> <div class="mainmenu"> <th1> html "<a id='hbbtn' href='$home/sitemap' aria-label='Site Map'>☰</a>" builtin_request_js hbmenu.js foreach {name url expr class} $mainmenu { if {![capexpr $expr]} continue if {[string match /* $url]} { if {[string match $url\[/?#\]* /$current_page/]} { set class "active $class" } set url $home$url } html "<a href='$url' class='$class'>$name</a>\n" } </th1></div> <div id='hbdrop'></div> |
Changes to skins/eagle/css.txt.
︙ | ︙ | |||
45 46 47 48 49 50 51 | color: white; font-size: 0.8em; font-weight: bold; white-space: nowrap; } /* The header across the top of the page */ | | | | | | | 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 | color: white; font-size: 0.8em; font-weight: bold; white-space: nowrap; } /* The header across the top of the page */ div.header { display: table; width: 100%; } /* The main menu bar that appears at the top of the page beneath ** the header */ div.mainmenu { padding: 5px 10px 5px 10px; font-size: 0.9em; font-weight: bold; text-align: center; letter-spacing: 1px; background-color: #76869D; border-top-left-radius: 8px; border-top-right-radius: 8px; color: white; } div#hbdrop { background-color: #485D7B; border-radius: 0 0 15px 15px; border-left: 0.5em solid #76869d; border-bottom: 1.2em solid #76869d; display: none; width: 98%; position: absolute; z-index: 20; } /* The submenu bar that *sometimes* appears below the main menu */ div.submenu, div.sectionmenu { padding: 3px 10px 3px 0px; font-size: 0.9em; font-weight: bold; text-align: center; background-color: #485D7B; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited, div.submenu label { padding: 3px 10px 3px 10px; color: white; text-decoration: none; } div.mainmenu a:hover, div.submenu a:hover, div.sectionmenu>a.button:hover, div.submenu label:hover { text-decoration: underline; } /* All page content from the bottom of the menu or submenu down to ** the footer */ div.content { |
︙ | ︙ | |||
129 130 131 132 133 134 135 | margin: .2em 0 .2em 0; float: left; clear: left; white-space: nowrap; } /* The footer at the very bottom of the page */ | | | 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 | margin: .2em 0 .2em 0; float: left; clear: left; white-space: nowrap; } /* The footer at the very bottom of the page */ div.footer { clear: both; font-size: 0.8em; margin-top: 12px; padding: 5px 10px 5px 10px; text-align: right; background-color: #485D7B; border-bottom-left-radius: 8px; |
︙ | ︙ |
Changes to skins/eagle/footer.txt.
|
| | | 1 2 3 4 5 6 7 8 | <div class="footer"> <th1> proc getTclVersion {} { if {[catch {tclEval info patchlevel} tclVersion] == 0} { return "<a href=\"https://www.tcl.tk/\">Tcl</a> version $tclVersion" } return "" } |
︙ | ︙ | |||
17 18 19 20 21 22 23 | </th1> This page was generated in about <th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s by <a href="$fossilUrl/">Fossil</a> version $release_version $tclVersion <a href="$fossilUrl/index.html/info/$version">$manifest_version</a> <a href="$fossilUrl/index.html/timeline?c=$fossilDate&y=ci">$manifest_date</a> | | | 17 18 19 20 21 22 23 24 | </th1> This page was generated in about <th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s by <a href="$fossilUrl/">Fossil</a> version $release_version $tclVersion <a href="$fossilUrl/index.html/info/$version">$manifest_version</a> <a href="$fossilUrl/index.html/timeline?c=$fossilDate&y=ci">$manifest_date</a> </div> |
Changes to skins/eagle/header.txt.
|
| | | 1 2 3 4 5 6 7 8 | <div class="header"> <div class="logo"> <th1> ## ## NOTE: The purpose of this procedure is to take the base URL of the ## Fossil project and return the root of the entire web site using ## the same URI scheme as the base URL (e.g. http or https). ## |
︙ | ︙ | |||
74 75 76 77 78 79 80 | <div class="status"><nobr><th1> if {[info exists login]} { puts "Logged in as $login" } else { puts "Not logged in" } </th1></nobr><small><div id="clock"></div></small></div> | | | | | | | | | | | | | | | | | | | < | | | | | | | | < | | 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 | <div class="status"><nobr><th1> if {[info exists login]} { puts "Logged in as $login" } else { puts "Not logged in" } </th1></nobr><small><div id="clock"></div></small></div> </div> <th1>html "<script nonce='$nonce'>"</th1> (function updateClock(){ var e = document.getElementById("clock"); if(!e) return; if(!updateClock.fmt){ updateClock.fmt = function(n){ return n < 10 ? '0' + n : n; }; } var d = new Date(); e.innerHTML = d.getUTCFullYear()+ '-' + updateClock.fmt(d.getUTCMonth() + 1) + '-' + updateClock.fmt(d.getUTCDate()) + ' ' + updateClock.fmt(d.getUTCHours()) + ':' + updateClock.fmt(d.getUTCMinutes()); setTimeout(updateClock,(60-d.getUTCSeconds())*1000); })(); </script> <div class="mainmenu"><th1> html "<a id='hbbtn' href='$home/sitemap' aria-label='Site Map'>☰</a>\n" builtin_request_js hbmenu.js foreach {name url expr class} $mainmenu { if {![capexpr $expr]} continue if {[string match /* $url]} {set url $home$url} html "<a href='$url' class='$class'>$name</a>\n" } </th1></div> <div id="hbdrop"></div> |
Deleted skins/etienne/README.md.
Deleted skins/etienne/css.txt.
Deleted skins/etienne/details.txt.
Deleted skins/etienne/footer.txt.
Deleted skins/etienne/header.txt.
Changes to skins/khaki/css.txt.
︙ | ︙ | |||
39 40 41 42 43 44 45 | padding: 5px 5px 0 0; font-size: 0.8em; font-weight: bold; white-space: nowrap; } /* The header across the top of the page */ | | | | | | | | | | | | 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 | padding: 5px 5px 0 0; font-size: 0.8em; font-weight: bold; white-space: nowrap; } /* The header across the top of the page */ div.header { display: table; width: 100%; } /* The main menu bar that appears at the top of the page beneath ** the header */ div.mainmenu { padding: 5px 10px 5px 10px; font-size: 0.9em; font-weight: bold; text-align: center; letter-spacing: 1px; background-color: #a09048; color: black; z-index: 21; /* just above hbdrop */ } div#hbdrop { background-color: #fef3bc; border: 2px solid #a09048; border-radius: 0 0 0.5em 0.5em; display: none; left: 2em; width: 90%; padding-right: 1em; position: absolute; z-index: 20; /* just below mainmenu, but above timeline bubbles */ } /* The submenu bar that *sometimes* appears below the main menu */ div.submenu, div.sectionmenu { padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #c0af58; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited, div.submenu label { padding: 3px 10px 3px 10px; color: white; text-decoration: none; } div.mainmenu a:hover, div.submenu a:hover, div.sectionmenu>a.button:hover, div.submenu label:hover, div#hbdrop a:hover { color: #a09048; background-color: white; } /* All page content from the bottom of the menu or submenu down to ** the footer */ div.content { padding: 1ex 5px; } div.content a, div#hbdrop a { color: #706532; } div.content a:link, div#hbdrop a:link { color: #706532; } div.content a:visited, div#hbdrop a:visited { color: #704032; } 
div.content a:hover, div#hbdrop a:hover { background-color: white; color: #706532; } a, a:visited { text-decoration: none; } |
︙ | ︙ | |||
131 132 133 134 135 136 137 | margin: .2em 0 .2em 0; float: left; clear: left; white-space: nowrap; } /* The footer at the very bottom of the page */ | | | | | | | 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 | margin: .2em 0 .2em 0; float: left; clear: left; white-space: nowrap; } /* The footer at the very bottom of the page */ div.footer { font-size: 0.8em; margin-top: 12px; padding: 5px 10px 5px 10px; text-align: right; background-color: #a09048; color: white; } /* Hyperlink colors */ div.footer a { color: white; } div.footer a:link { color: white; } div.footer a:visited { color: white; } div.footer a:hover { background-color: white; color: #558195; } /* <verbatim> blocks */ pre.verbatim { background-color: #f5f5f5; padding: 0.5em; white-space: pre-wrap; } |
︙ | ︙ |
Changes to skins/khaki/footer.txt.
|
<div class="footer">
Fossil $release_version $manifest_version $manifest_date
</div>
Changes to skins/khaki/header.txt.
|
| | | | | | | | | < | < | | | | | | | | < | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 | <div class="header"> <div class="title">$<title></div> <div class="status"> <div class="logo">$<project_name></div><br/> <th1> if {[info exists login]} { puts "Logged in as $login" } else { puts "Not logged in" } </th1></div> </div> <div class="mainmenu"><th1> html "<a id='hbbtn' href='$home/sitemap' aria-label='Site Map'>☰</a>" builtin_request_js hbmenu.js foreach {name url expr class} $mainmenu { if {![capexpr $expr]} continue if {[string match /* $url]} {set url $home$url} html "<a href='$url' class='$class'>$name</a>\n" } </th1></div> <div id='hbdrop'></div> |
Changes to skins/original/css.txt.
︙ | ︙ | |||
40 41 42 43 44 45 46 | color: #558195; font-size: 0.8em; font-weight: bold; white-space: nowrap; } /* The header across the top of the page */ | | | | | | 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 | color: #558195; font-size: 0.8em; font-weight: bold; white-space: nowrap; } /* The header across the top of the page */ div.header { display: table; width: 100%; } /* The main menu bar that appears at the top of the page beneath ** the header */ div.mainmenu { padding: 5px; font-size: 0.9em; font-weight: bold; text-align: center; letter-spacing: 1px; background-color: #558195; border-top-left-radius: 8px; border-top-right-radius: 8px; color: white; } /* The submenu bar that *sometimes* appears below the main menu */ div.submenu, div.sectionmenu { padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #456878; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited, div.submenu label { padding: 3px 10px 3px 10px; color: white; text-decoration: none; } div.mainmenu a:hover, div.submenu a:hover, div.sectionmenu>a.button:hover, div.submenu label:hover { color: #558195; background-color: white; } /* All page content from the bottom of the menu or submenu down to ** the footer */ |
︙ | ︙ | |||
113 114 115 116 117 118 119 | margin: .2em 0 .2em 0; float: left; clear: left; white-space: nowrap; } /* The footer at the very bottom of the page */ | | | | | | | 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 | margin: .2em 0 .2em 0; float: left; clear: left; white-space: nowrap; } /* The footer at the very bottom of the page */ div.footer { clear: both; font-size: 0.8em; padding: 5px 10px 5px 10px; text-align: right; background-color: #558195; border-bottom-left-radius: 8px; border-bottom-right-radius: 8px; color: white; } /* Hyperlink colors in the footer */ div.footer a { color: white; } div.footer a:link { color: white; } div.footer a:visited { color: white; } div.footer a:hover { background-color: white; color: #558195; } /* verbatim blocks */ pre.verbatim { background-color: #f5f5f5; padding: 0.5em; white-space: pre-wrap; } |
︙ | ︙ |
Changes to skins/original/footer.txt.
|
| | | 1 2 3 4 5 6 7 8 | <div class="footer"> <th1> proc getTclVersion {} { if {[catch {tclEval info patchlevel} tclVersion] == 0} { return "<a href=\"https://www.tcl.tk/\">Tcl</a> version $tclVersion" } return "" } |
︙ | ︙ | |||
17 18 19 20 21 22 23 | </th1> This page was generated in about <th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s by <a href="$fossilUrl/">Fossil</a> version $release_version $tclVersion <a href="$fossilUrl/index.html/info/$version">$manifest_version</a> <a href="$fossilUrl/index.html/timeline?c=$fossilDate&y=ci">$manifest_date</a> | | | 17 18 19 20 21 22 23 24 | </th1> This page was generated in about <th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s by <a href="$fossilUrl/">Fossil</a> version $release_version $tclVersion <a href="$fossilUrl/index.html/info/$version">$manifest_version</a> <a href="$fossilUrl/index.html/timeline?c=$fossilDate&y=ci">$manifest_date</a> </div> |
Changes to skins/original/header.txt.
|
| | | 1 2 3 4 5 6 7 8 | <div class="header"> <div class="logo"> <th1> ## ## NOTE: The purpose of this procedure is to take the base URL of the ## Fossil project and return the root of the entire web site using ## the same URI scheme as the base URL (e.g. http or https). ## |
︙ | ︙ | |||
68 69 70 71 72 73 74 | <div class="status"><nobr><th1> if {[info exists login]} { puts "Logged in as $login" } else { puts "Not logged in" } </th1></nobr><small><div id="clock"></div></small></div> | | | | | | | | | | | | | | | | | | | < | | | | | | | | | | | < | 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 | <div class="status"><nobr><th1> if {[info exists login]} { puts "Logged in as $login" } else { puts "Not logged in" } </th1></nobr><small><div id="clock"></div></small></div> </div> <th1>html "<script nonce='$nonce'>"</th1> function updateClock(){ var e = document.getElementById("clock"); if(e){ var d = new Date(); function f(n) { return n < 10 ? '0' + n : n; } e.innerHTML = d.getUTCFullYear()+ '-' + f(d.getUTCMonth() + 1) + '-' + f(d.getUTCDate()) + ' ' + f(d.getUTCHours()) + ':' + f(d.getUTCMinutes()); setTimeout(updateClock,(60-d.getUTCSeconds())*1000); } } updateClock(); </script> <div class="mainmenu"><th1> set sitemap 0 foreach {name url expr class} $mainmenu { if {![capexpr $expr]} continue if {[string match /* $url]} {set url $home$url} html "<a href='$url' class='$class'>$name</a>\n" if {[string match */sitemap $url]} {set sitemap 1} } if {!$sitemap} { html "<a href='$home/sitemap'>...</a>" } </th1></div> |
Changes to skins/plain_gray/css.txt.
︙ | ︙ | |||
26 27 28 29 30 31 32 | vertical-align: bottom; color: #404040; font-weight: bold; white-space: nowrap; } /* The header across the top of the page */ | | | | 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 | vertical-align: bottom; color: #404040; font-weight: bold; white-space: nowrap; } /* The header across the top of the page */ div.header { display: table; width: 100%; } /* The main menu bar that appears at the top of the page beneath ** the header */ div.mainmenu { padding: 5px 10px 5px 10px; font-size: 0.9em; font-weight: bold; text-align: center; letter-spacing: 1px; background-color: #404040; color: white; |
︙ | ︙ | |||
65 66 67 68 69 70 71 | div.submenu, div.sectionmenu { padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #606060; color: white; } | | | | | 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 | div.submenu, div.sectionmenu { padding: 3px 10px 3px 0px; font-size: 0.9em; text-align: center; background-color: #606060; color: white; } div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited, div.submenu label { padding: 3px 10px 3px 10px; color: white; text-decoration: none; } div.mainmenu a:hover, div.submenu a:hover, div.sectionmenu>a.button:hover, div.submenu label:hover { color: #404040; background-color: white; } a, a:visited { |
︙ | ︙ | |||
129 130 131 132 133 134 135 | margin: .2em 0 .2em 0; float: left; clear: left; white-space: nowrap; } /* The footer at the very bottom of the page */ | | | 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 | margin: .2em 0 .2em 0; float: left; clear: left; white-space: nowrap; } /* The footer at the very bottom of the page */ div.footer { font-size: 0.8em; margin-top: 12px; padding: 5px 10px 5px 10px; text-align: right; background-color: #404040; color: white; } |
︙ | ︙ |
Changes to skins/plain_gray/footer.txt.
|
<div class="footer">
Fossil $release_version $manifest_version $manifest_date
</div>
Changes to skins/plain_gray/header.txt.
|
| | | | | | | | | | | | | < | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 | <div class="header"> <div class="title">$<project_name>: $<title></div> </div> <div class="mainmenu"> <th1> html "<a id='hbbtn' href='$home/sitemap' aria-label='Site Map'>☰</a>" builtin_request_js hbmenu.js foreach {name url expr class} $mainmenu { if {![capexpr $expr]} continue if {[string match /* $url]} {set url $home$url} html "<a href='$url' class='$class'>$name</a>\n" } </th1></div> <div id='hbdrop' class='hbdrop'></div> |
Changes to skins/xekri/css.txt.
︙ | ︙ | |||
59 60 61 62 63 64 65 66 67 68 69 70 | h2 { font-size: 1.5rem; } h3 { font-size: 1.25rem; } /************************************** * Main Area */ | > > > > > > > > > | | | 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 | h2 { font-size: 1.5rem; } h3 { font-size: 1.25rem; } span[style^=background-color] { color: #000; } td[style^=background-color] { color: #000; } /************************************** * Main Area */ div.header, div.mainmenu, div.submenu, div.content, div.footer { clear: both; margin: 0 auto; max-width: 90%; padding: 0.25rem 1rem; } /************************************** * Main Area: Header */ div.header { margin: 0.5rem auto 0 auto; display: flex; flex-direction: row; align-items: center; flex-wrap: wrap; } div.logo { |
︙ | ︙ | |||
135 136 137 138 139 140 141 | } /************************************** * Main Area: Global Menu */ | | | | | | 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 | } /************************************** * Main Area: Global Menu */ div.mainmenu, div.submenu { background-color: #080; border-radius: 1rem 1rem 0 0; box-shadow: 3px 4px 1px #000; color: #000; font-weight: bold; font-size: 1.1rem; text-align: center; } div.mainmenu { padding-top: 0.33rem; padding-bottom: 0.25rem; } div.submenu { border-top: 1px solid #0a0; border-radius: 0; display: block; } div.mainmenu a, div.submenu a, div.submenu label { color: #000; padding: 0 0.75rem; text-decoration: none; } div.mainmenu a:hover, div.submenu a:hover, div.submenu label:hover { color: #fff; text-shadow: 0px 0px 6px #0f0; } div.submenu * { margin: 0 0.5rem; vertical-align: middle; |
︙ | ︙ | |||
212 213 214 215 216 217 218 | stroke: white; } /************************************** * Main Area: Footer */ | | | | | | | | 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 | stroke: white; } /************************************** * Main Area: Footer */ div.footer { color: #ee0; font-size: 0.75rem; padding: 0; text-align: right; width: 75%; } div.footer div { background-color: #222; box-shadow: 3px 3px 1px #000; border-radius: 0 0 1rem 1rem; margin: 0 0 10px 0; padding: 0.25rem 0.75rem; } div.footer div.page-time { float: left; } div.footer div.fossil-info { float: right; } div.footer a, div.footer a:link, div.footer a:visited { color: #ee0; } div.footer a:hover { color: #fff; text-shadow: 0px 0px 6px #ee0; } /************************************** * Check-in |
︙ | ︙ | |||
562 563 564 565 566 567 568 | margin: 1.2rem auto 0.75rem auto; padding: 0.2rem; text-align: center; } div.sectionmenu { border-radius: 0 0 3rem 3rem; | | | 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 | margin: 1.2rem auto 0.75rem auto; padding: 0.2rem; text-align: center; } div.sectionmenu { border-radius: 0 0 3rem 3rem; margin-top: -0.75rem; width: 75%; } div.sectionmenu > a:link, div.sectionmenu > a:visited { color: #000; text-decoration: none; } |
︙ | ︙ | |||
1073 1074 1075 1076 1077 1078 1079 | /* format for report configuration errors */ blockquote.reportError { color: #f00; font-weight: bold; } /* format for artifact lines, no longer shunned */ p.noMoreShun { | | | | 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 | /* format for report configuration errors */ blockquote.reportError { color: #f00; font-weight: bold; } /* format for artifact lines, no longer shunned */ p.noMoreShun { color: #00f; } /* format for artifact lines being shunned */ p.shunned { color: #00f; } /* a broken hyperlink */ span.brokenlink { color: #f00; } /* List of files in a timeline */ ul.filelist { |
︙ | ︙ | |||
1153 1154 1155 1156 1157 1158 1159 | } body.branch .brlist > table > tbody > tr:hover:not(.selected), body.branch .brlist > table > tbody > tr.selected { background-color: #444; } | | | | 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 | } body.branch .brlist > table > tbody > tr:hover:not(.selected), body.branch .brlist > table > tbody > tr.selected { background-color: #444; } body.chat div.header, body.chat div.footer, body.chat div.mainmenu, body.chat div.submenu, body.chat div.content { margin-left: 0.5em; margin-right: 0.5em; margin-top: auto/*eliminates unnecessary scrollbars*/; } body.chat.chat-only-mode div.content { max-width: revert; } body.chat #chat-user-list .chat-user{ color: white; } |
Changes to skins/xekri/footer.txt.
</div>
<div class="footer">
<div class="page-time">
Generated in <th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s
</div>
<div class="fossil-info">
Fossil v$release_version $manifest_version
</div>
</div>
Changes to skins/xekri/header.txt.
|
| | | 1 2 3 4 5 6 7 8 | <div class="header"> <div class="logo"> <th1> ## ## NOTE: The purpose of this procedure is to take the base URL of the ## Fossil project and return the root of the entire web site using ## the same URI scheme as the base URL (e.g. http or https). ## |
︙ | ︙ | |||
67 68 69 70 71 72 73 | } </th1> <a href="$logourl"> <img src="$logo_image_url" border="0" alt="$project_name"> </a> </div> <div class="title">$<title></div> | | < | | | | | < | | | | | | | | | | | | | | | | | | | < | | | | | | | | | | | | | | | | < | 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 | } </th1> <a href="$logourl"> <img src="$logo_image_url" border="0" alt="$project_name"> </a> </div> <div class="title">$<title></div> <div class="status"><nobr><th1> if {[info exists login]} { puts "Logged in as $login" } else { puts "Not logged in" } </th1></nobr><small><div id="clock"></div></small></div> </div> <th1>html "<script nonce='$nonce'>"</th1> function updateClock(){ var e = document.getElementById("clock"); if(e){ var d = new Date(); function f(n) { return n < 10 ? '0' + n : n; } e.innerHTML = d.getUTCFullYear()+ '-' + f(d.getUTCMonth() + 1) + '-' + f(d.getUTCDate()) + ' ' + f(d.getUTCHours()) + ':' + f(d.getUTCMinutes()); setTimeout(updateClock,(60-d.getUTCSeconds())*1000); } } updateClock(); </script> <div class="mainmenu"><th1> set sitemap 0 foreach {name url expr class} $mainmenu { if {![capexpr $expr]} continue if {[string match /* $url]} { if {[string match $url\[/?#\]* /$current_page/]} { set class "active $class" } set url $home$url } html "<a href='$url' class='$class'>$name</a>\n" if {[string match */sitemap $url]} {set sitemap 1} } if {!$sitemap} { html "<a href='$home/sitemap'>...</a>\n" } </th1></div> |
Changes to src/add.c.
︙ | ︙ | |||
445 446 447 448 449 450 451 | zName = blob_str(&fullName); isDir = file_isdir(zName, RepoFILE); if( isDir==1 ){ vfile_scan(&fullName, nRoot-1, scanFlags, pClean, pIgnore, RepoFILE); }else if( isDir==0 ){ fossil_warning("not found: %s", zName); }else{ | | < | 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 | zName = blob_str(&fullName); isDir = file_isdir(zName, RepoFILE); if( isDir==1 ){ vfile_scan(&fullName, nRoot-1, scanFlags, pClean, pIgnore, RepoFILE); }else if( isDir==0 ){ fossil_warning("not found: %s", zName); }else{ char *zTreeName = &zName[nRoot]; if( !forceFlag && glob_match(pIgnore, zTreeName) ){ Blob ans; char cReply; char *prompt = mprintf("file \"%s\" matches \"ignore-glob\". " "Add it (a=all/y/N)? ", zTreeName); prompt_user(prompt, &ans); fossil_free(prompt); cReply = blob_str(&ans)[0]; blob_reset(&ans); if( cReply=='a' || cReply=='A' ){ forceFlag = 1; }else if( cReply!='y' && cReply!='Y' ){ blob_reset(&fullName); continue; } } db_multi_exec( "INSERT OR IGNORE INTO sfile(pathname) VALUES(%Q)", zTreeName ); } blob_reset(&fullName); } glob_free(pIgnore); glob_free(pClean); /** Check for Windows-reserved names and warn or exit, as |
︙ | ︙ |
Changes to src/ajax.c.
︙ | ︙ | |||
390 391 392 393 394 395 396 | */ void ajax_route_dispatcher(void){ const char * zName = P("name"); AjaxRoute routeName = {0,0,0,0}; const AjaxRoute * pRoute = 0; const AjaxRoute routes[] = { /* Keep these sorted by zName (for bsearch()) */ | | < < < < < < < < < | | 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 | */ void ajax_route_dispatcher(void){ const char * zName = P("name"); AjaxRoute routeName = {0,0,0,0}; const AjaxRoute * pRoute = 0; const AjaxRoute routes[] = { /* Keep these sorted by zName (for bsearch()) */ {"preview-text", ajax_route_preview_text, 1, 1} }; if(zName==0 || zName[0]==0){ ajax_route_error(400,"Missing required [route] 'name' parameter."); return; } routeName.zName = zName; pRoute = (const AjaxRoute *)bsearch(&routeName, routes, count(routes), sizeof routes[0], cmp_ajax_route_name); if(pRoute==0){ ajax_route_error(404,"Ajax route not found."); return; }else if(0==ajax_route_bootstrap(pRoute->bWriteMode, pRoute->bPost)){ return; } pRoute->xCallback(); } |
Changes to src/alerts.c.
︙ | ︙ | |||
45 46 47 48 49 50 51 | @ -- to the USER entry. @ -- @ -- The ssub field is a string where each character indicates a particular @ -- type of event to subscribe to. Choices: @ -- a - Announcements @ -- c - Check-ins @ -- f - Forum posts | < | 45 46 47 48 49 50 51 52 53 54 55 56 57 58 | @ -- to the USER entry. @ -- @ -- The ssub field is a string where each character indicates a particular @ -- type of event to subscribe to. Choices: @ -- a - Announcements @ -- c - Check-ins @ -- f - Forum posts @ -- n - New forum threads @ -- r - Replies to my own forum posts @ -- t - Ticket changes @ -- w - Wiki changes @ -- x - Edits to forum posts @ -- Probably different codes will be added in the future. In the future @ -- we might also add a separate table that allows subscribing to email |
︙ | ︙ | |||
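The ssub field described in the hunk above is just a string of one-character event codes. A minimal sketch of how such a subscription string might be consulted (the helper name `wants_event` is invented for illustration; the real alert code calls strchr() on ssub inline):

```c
#include <assert.h>
#include <string.h>

/* Return true if the subscription string zSub contains the one-character
** code for eventType.  Codes follow the schema comment above:
** 'a' announcements, 'c' check-ins, 'f' forum posts, 't' tickets,
** 'w' wiki changes, and so on. */
static int wants_event(const char *zSub, char eventType){
  return zSub!=0 && strchr(zSub, eventType)!=0;
}
```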
85 86 87 88 89 90 91 | @ -- @ CREATE TABLE repository.pending_alert( @ eventid TEXT PRIMARY KEY, -- Object that changed @ sentSep BOOLEAN DEFAULT false, -- individual alert sent @ sentDigest BOOLEAN DEFAULT false, -- digest alert sent @ sentMod BOOLEAN DEFAULT false -- pending moderation alert sent @ ) WITHOUT ROWID; | | | 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 | @ -- @ CREATE TABLE repository.pending_alert( @ eventid TEXT PRIMARY KEY, -- Object that changed @ sentSep BOOLEAN DEFAULT false, -- individual alert sent @ sentDigest BOOLEAN DEFAULT false, -- digest alert sent @ sentMod BOOLEAN DEFAULT false -- pending moderation alert sent @ ) WITHOUT ROWID; @ @ -- Obsolete table. No longer used. @ DROP TABLE IF EXISTS repository.alert_bounce; ; /* ** Return true if the email notification tables exist. */ |
︙ | ︙ | |||
874 875 876 877 878 879 880 | */ void email_header_to(Blob *pMsg, int *pnTo, char ***pazTo){ int nTo = 0; char **azTo = 0; Blob v; char *z, *zAddr; int i; | | | | 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 | */ void email_header_to(Blob *pMsg, int *pnTo, char ***pazTo){ int nTo = 0; char **azTo = 0; Blob v; char *z, *zAddr; int i; email_header_value(pMsg, "to", &v); z = blob_str(&v); for(i=0; z[i]; i++){ if( z[i]=='<' && (zAddr = email_copy_addr(&z[i+1],'>'))!=0 ){ azTo = fossil_realloc(azTo, sizeof(azTo[0])*(nTo+1) ); azTo[nTo++] = zAddr; } } *pnTo = nTo; *pazTo = azTo; } /* ** Free a list of To addresses obtained from a prior call to ** email_header_to() */ void email_header_to_free(int nTo, char **azTo){ int i; for(i=0; i<nTo; i++) fossil_free(azTo[i]); fossil_free(azTo); } |
︙ | ︙ | |||
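The email_header_to() change above extracts each recipient from the To: header by scanning for '<' and copying up to the matching '>'. A self-contained sketch of that copy step (`copy_addr` is a stand-in for Fossil's email_copy_addr(), whose definition is not shown in this hunk):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Copy the characters of z up to (but not including) the terminator
** character cTerm into newly allocated memory.  Return NULL if the
** terminator never appears.  The caller frees the result. */
static char *copy_addr(const char *z, char cTerm){
  const char *zEnd = strchr(z, cTerm);
  char *zOut;
  size_t n;
  if( zEnd==0 ) return 0;
  n = (size_t)(zEnd - z);
  zOut = malloc(n+1);
  if( zOut ){
    memcpy(zOut, z, n);
    zOut[n] = 0;
  }
  return zOut;
}
```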
914 915 916 917 918 919 920 | ** From: ** Date: ** Message-Id: ** Content-Type: ** Content-Transfer-Encoding: ** MIME-Version: ** Sender: | | | | 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 | ** From: ** Date: ** Message-Id: ** Content-Type: ** Content-Transfer-Encoding: ** MIME-Version: ** Sender: ** ** The caller maintains ownership of the input Blobs. This routine will ** read the Blobs and send them onward to the email system, but it will ** not free them. ** ** The Message-Id: field is added if there is not already a Message-Id ** in the pHdr parameter. ** ** If the zFromName argument is not NULL, then it should be a human-readable ** name or handle for the sender. In that case, "From:" becomes a made-up ** email address based on a hash of zFromName and the domain of email-self, ** and an additional "Sender:" field is inserted with the email-self ** address. Downstream software might use the Sender header to set ** the envelope-from address of the email. If zFromName is a NULL pointer, ** then the "From:" is set to the email-self value and Sender is ** omitted. */ void alert_send( AlertSender *p, /* Emailer context */ Blob *pHdr, /* Email header (incomplete) */ Blob *pBody, /* Email body */ |
︙ | ︙ | |||
1044 1045 1046 1047 1048 1049 1050 | ** the basename for hyperlinks included in email alert text. ** Omit the trailing "/". If the repository is not intended to be ** a long-running server and will not be sending email notifications, ** then leave this setting blank. */ /* ** SETTING: email-admin width=40 | | | 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 | ** the basename for hyperlinks included in email alert text. ** Omit the trailing "/". If the repository is not intended to be ** a long-running server and will not be sending email notifications, ** then leave this setting blank. */ /* ** SETTING: email-admin width=40 ** This is the email address for the human administrator for the system. ** Abuse and trouble reports and password reset requests are sent here. */ /* ** SETTING: email-subname width=16 ** This is a short name used to identify the repository in the Subject: ** line of email alerts. Traditionally this name is included in square ** brackets. Examples: "[fossil-src]", "[sqlite-src]".
︙ | ︙ | |||
1079 1080 1081 1082 1083 1084 1085 | ** a subscription is less than email-renew-cutoff, then no new emails ** are sent to the subscriber. ** ** email-renew-warning is the time (in days since 1970-01-01) when the ** last batch of "your subscription is about to expire" emails were ** sent out. ** | | | | 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 | ** a subscription is less than email-renew-cutoff, then no new emails ** are sent to the subscriber. ** ** email-renew-warning is the time (in days since 1970-01-01) when the ** last batch of "your subscription is about to expire" emails were ** sent out. ** ** email-renew-cutoff is normally 7 days behind email-renew-warning. */ /* ** SETTING: email-send-method width=5 default=off sensitive ** Determine the method used to send email. Allowed values are ** "off", "relay", "pipe", "dir", "db", and "stdout". The "off" value ** means no email is ever sent. The "relay" value means emails are sent ** to a Mail Sending Agent using SMTP located at email-send-relayhost. ** The "pipe" value means email messages are piped into a command ** determined by the email-send-command setting. The "dir" value means ** emails are written to individual files in a directory determined ** by the email-send-dir setting. The "db" value means that emails ** are added to an SQLite database named by the email-send-db setting. ** The "stdout" value writes email text to standard output, for debugging.
︙ | ︙ | |||
1132 1133 1134 1135 1136 1137 1138 | ** SMTP server configured as a Mail Submission Agent listening on the ** designated host and port at all times. */ /* ** COMMAND: alerts* | | | 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 | ** SMTP server configured as a Mail Submission Agent listening on the ** designated host and port at all times. */ /* ** COMMAND: alerts* ** ** Usage: %fossil alerts SUBCOMMAND ARGS... ** ** Subcommands: ** ** pending Show all pending alerts. Useful for debugging. ** ** reset Hard reset of all email notification tables
︙ | ︙ | |||
1458 1459 1460 1461 1462 1463 1464 | /* If we reach this point, all is well */ return 1; } /* ** Text of email message sent in order to confirm a subscription. */ | | | 1457 1458 1459 1460 1461 1462 1463 1464 1465 1466 1467 1468 1469 1470 1471 | /* If we reach this point, all is well */ return 1; } /* ** Text of email message sent in order to confirm a subscription. */ static const char zConfirmMsg[] = @ Someone has signed you up for email alerts on the Fossil repository @ at %s. @ @ To confirm your subscription and begin receiving alerts, click on @ the following hyperlink: @ @ %s/alerts/%s |
︙ | ︙ | |||
1741 1742 1743 1744 1745 1746 1747 | } /* ** Either shutdown or completely delete a subscription entry given ** by the hex value zName. Then paint a webpage that explains that ** the entry has been removed. */ | | | < < | | | < < < < < < < < | 1740 1741 1742 1743 1744 1745 1746 1747 1748 1749 1750 1751 1752 1753 1754 1755 1756 1757 1758 1759 1760 1761 1762 1763 1764 1765 1766 1767 1768 1769 1770 1771 1772 1773 1774 | } /* ** Either shutdown or completely delete a subscription entry given ** by the hex value zName. Then paint a webpage that explains that ** the entry has been removed. */ static void alert_unsubscribe(int sid){ const char *zEmail = 0; const char *zLogin = 0; int uid = 0; Stmt q; db_prepare(&q, "SELECT semail, suname FROM subscriber" " WHERE subscriberId=%d", sid); if( db_step(&q)==SQLITE_ROW ){ zEmail = db_column_text(&q, 0); zLogin = db_column_text(&q, 1); uid = db_int(0, "SELECT uid FROM user WHERE login=%Q", zLogin); } style_set_current_feature("alerts"); if( zEmail==0 ){ style_header("Unsubscribe Fail"); @ <p>Unable to locate a subscriber with the requested key</p> }else{ db_multi_exec( "DELETE FROM subscriber WHERE subscriberId=%d", sid ); style_header("Unsubscribed"); @ <p>The "%h(zEmail)" email address has been unsubscribed from all @ notifications. All subscription records for "%h(zEmail)" have @ been purged. No further emails will be sent to "%h(zEmail)".</p> if( uid && g.perm.Admin ){ @ <p>You may also want to @ <a href="%R/setup_uedit?id=%d(uid)">edit or delete |
︙ | ︙ | |||
1803 1804 1805 1806 1807 1808 1809 | ** email and clicks on the link in the email. When a ** complete subscriberCode is seen on the name= query parameter, ** that constitutes verification of the email address. ** ** * The sid= query parameter contains an integer subscriberId. ** This only works for the administrator. It allows the ** administrator to edit any subscription. | | | 1792 1793 1794 1795 1796 1797 1798 1799 1800 1801 1802 1803 1804 1805 1806 | ** email and clicks on the link in the email. When a ** complete subscriberCode is seen on the name= query parameter, ** that constitutes verification of the email address. ** ** * The sid= query parameter contains an integer subscriberId. ** This only works for the administrator. It allows the ** administrator to edit any subscription. ** ** * The user is logged into an account other than "nobody" or ** "anonymous". In that case the notification settings ** associated with that account can be edited without needing ** to know the subscriber code. ** ** * The name= query parameter contains a 32-digit prefix of ** subscriber code. (Subscriber codes are normally 64 hex digits
︙ | ︙ | |||
1933 1934 1935 1936 1937 1938 1939 | } if( P("delete")!=0 && cgi_csrf_safe(2) ){ if( !PB("dodelete") ){ eErr = 9; zErr = mprintf("Select this checkbox and press \"Unsubscribe\" again to" " unsubscribe"); }else{ | | | | 1922 1923 1924 1925 1926 1927 1928 1929 1930 1931 1932 1933 1934 1935 1936 1937 1938 | } if( P("delete")!=0 && cgi_csrf_safe(2) ){ if( !PB("dodelete") ){ eErr = 9; zErr = mprintf("Select this checkbox and press \"Unsubscribe\" again to" " unsubscribe"); }else{ alert_unsubscribe(sid); db_commit_transaction(); return; } } style_set_current_feature("alerts"); style_header("Update Subscription"); db_prepare(&q, "SELECT" " semail," /* 0 */ |
︙ | ︙ | |||
2098 2099 2100 2101 2102 2103 2104 | @ Ticket changes</label><br> } if( g.perm.RdWiki ){ @ <label><input type="checkbox" name="sw" %s(sw?"checked":"")>\ @ Wiki</label> } @ </td></tr> | < < < < | 2087 2088 2089 2090 2091 2092 2093 2094 2095 2096 2097 2098 2099 2100 | @ Ticket changes</label><br> } if( g.perm.RdWiki ){ @ <label><input type="checkbox" name="sw" %s(sw?"checked":"")>\ @ Wiki</label> } @ </td></tr> @ <tr> @ <td class="form_label">Delivery:</td> @ <td><select size="1" name="sdigest"> @ <option value="0" %s(sdigest?"":"selected")>Individual Emails</option> @ <option value="1" %s(sdigest?"selected":"")>Daily Digest</option> @ </select></td> @ </tr> |
︙ | ︙ | |||
2196 2197 2198 2199 2200 2201 2202 | style_finish_page(); } /* This is the message that gets sent to describe how to change ** or modify a subscription */ | | < < < < | | | < < | 2181 2182 2183 2184 2185 2186 2187 2188 2189 2190 2191 2192 2193 2194 2195 2196 2197 2198 2199 2200 2201 2202 2203 2204 2205 2206 2207 2208 2209 2210 2211 2212 2213 2214 2215 2216 2217 2218 2219 2220 2221 2222 2223 2224 2225 2226 2227 2228 2229 2230 2231 2232 2233 2234 2235 2236 2237 2238 2239 2240 2241 2242 2243 2244 | style_finish_page(); } /* This is the message that gets sent to describe how to change ** or modify a subscription */ static const char zUnsubMsg[] = @ To change your subscription settings at %s visit this link: @ @ %s/alerts/%s @ @ To completely unsubscribe from %s, visit the following link: @ @ %s/unsubscribe/%s ; /* ** WEBPAGE: unsubscribe ** ** Users visit this page to be delisted from email alerts. ** ** If a valid subscriber code is supplied in the name= query parameter, ** then that subscriber is delisted. ** ** Otherwise, if the user is logged in, then they are redirected ** to the /alerts page where they have an unsubscribe button. ** ** Non-logged-in users with no name= query parameter are invited to enter ** an email address to which will be sent the unsubscribe link that ** contains the correct subscriber code. */ void unsubscribe_page(void){ const char *zName = P("name"); char *zErr = 0; int eErr = 0; unsigned int uSeed = 0; const char *zDecoded; char *zCaptcha = 0; int dx; int bSubmit; const char *zEAddr; char *zCode = 0; int sid = 0; if( zName==0 ) zName = P("scode"); /* If a valid subscriber code is supplied, then either present the user ** with a confirmation, or if already confirmed, unsubscribe immediately.
*/ if( zName && (sid = db_int(0, "SELECT subscriberId FROM subscriber" " WHERE subscriberCode=hextoblob(%Q)", zName))!=0 ){ char *zUnsubName = mprintf("confirm%04x", sid); if( P(zUnsubName)!=0 ){ alert_unsubscribe(sid); }else if( P("manage")!=0 ){ cgi_redirectf("%R/alerts/%s", zName); }else{ style_header("Unsubscribe"); form_begin(0, "%R/unsubscribe"); @ <input type="hidden" name="scode" value="%h(zName)"> @ <table border="0" cellpadding="10" width="100%%"> |
︙ | ︙ | |||
2332 2333 2334 2335 2336 2337 2338 | }else{ @ <p>An email has been sent to "%h(zEAddr)" that explains how to @ unsubscribe and/or modify your subscription settings</p> } alert_sender_free(pSender); style_finish_page(); return; | | | 2311 2312 2313 2314 2315 2316 2317 2318 2319 2320 2321 2322 2323 2324 2325 | }else{ @ <p>An email has been sent to "%h(zEAddr)" that explains how to @ unsubscribe and/or modify your subscription settings</p> } alert_sender_free(pSender); style_finish_page(); return; } /* Non-logged-in users have to enter an email address to which is ** sent a message containing the unsubscribe link. */ style_header("Unsubscribe Request"); @ <p>Fill out the form below to request an email message that will @ explain how to unsubscribe and/or change your subscription settings.</p> |
︙ | ︙ | |||
2561 2562 2563 2564 2565 2566 2567 | } /* ** Compute a string that is appropriate for the EmailEvent.zPriors field ** for a particular forum post. ** ** This string is an encoded list of sender names and rids for all ancestors | | | 2540 2541 2542 2543 2544 2545 2546 2547 2548 2549 2550 2551 2552 2553 2554 | } /* ** Compute a string that is appropriate for the EmailEvent.zPriors field ** for a particular forum post. ** ** This string is an encoded list of sender names and rids for all ancestors ** of the fpid post - the post that fpid answers, the post that that parent ** post answers, and so forth back up to the root post. Duplicate sender ** names are omitted. ** ** The EmailEvent.zPriors field is used to screen events for people who ** only want to see replies to their own posts or to specific posts. */ static char *alert_compute_priors(int fpid){
︙ | ︙ | |||
2741 2742 2743 2744 2745 2746 2747 | zUuid = db_column_text(&q, 1); zTitle = db_column_text(&q, 3); if( p->needMod ){ blob_appendf(&p->hdr, "Subject: %s Pending Moderation: %s\r\n", zSub, zTitle); }else{ blob_appendf(&p->hdr, "Subject: %s %s\r\n", zSub, zTitle); | | | 2720 2721 2722 2723 2724 2725 2726 2727 2728 2729 2730 2731 2732 2733 2734 | zUuid = db_column_text(&q, 1); zTitle = db_column_text(&q, 3); if( p->needMod ){ blob_appendf(&p->hdr, "Subject: %s Pending Moderation: %s\r\n", zSub, zTitle); }else{ blob_appendf(&p->hdr, "Subject: %s %s\r\n", zSub, zTitle); blob_appendf(&p->hdr, "Message-Id: <%.32s@%s>\r\n", zUuid, alert_hostname(zFrom)); zIrt = db_column_text(&q, 4); if( zIrt && zIrt[0] ){ blob_appendf(&p->hdr, "In-Reply-To: <%.32s@%s>\r\n", zIrt, alert_hostname(zFrom)); } } |
︙ | ︙ | |||
3108 3109 3110 3111 3112 3113 3114 | " ssub," /* 2 */ " fullcap(user.cap)," /* 3 */ " suname" /* 4 */ " FROM subscriber LEFT JOIN user ON (login=suname)" " WHERE sverified" " AND NOT sdonotcall" " AND sdigest IS %s" | | < | | | 3087 3088 3089 3090 3091 3092 3093 3094 3095 3096 3097 3098 3099 3100 3101 3102 3103 3104 3105 3106 3107 3108 3109 3110 3111 3112 3113 3114 3115 3116 3117 | " ssub," /* 2 */ " fullcap(user.cap)," /* 3 */ " suname" /* 4 */ " FROM subscriber LEFT JOIN user ON (login=suname)" " WHERE sverified" " AND NOT sdonotcall" " AND sdigest IS %s" " AND coalesce(subscriber.lastContact,subscriber.mtime)>=%d", zDigest/*safe-for-%s*/, db_get_int("email-renew-cutoff",0) ); while( db_step(&q)==SQLITE_ROW ){ const char *zCode = db_column_text(&q, 0); const char *zSub = db_column_text(&q, 2); const char *zEmail = db_column_text(&q, 1); const char *zCap = db_column_text(&q, 3); int nHit = 0; for(p=pEvents; p; p=p->pNext){ if( strchr(zSub,p->type)==0 ){ if( p->type!='f' ) continue; if( strchr(zSub,'n')!=0 && (p->zPriors==0 || p->zPriors[0]==0) ){ /* New post: accepted */ }else if( strchr(zSub,'r')!=0 && alert_in_priors(db_column_text(&q,4), p->zPriors) ){ /* A follow-up to a post written by the user: accept */ }else{ continue; } } if( p->needMod ){ /* For events that require moderator approval, only send an alert |
︙ | ︙ | |||
3168 3169 3170 3171 3172 3173 3174 | if( blob_size(&p->hdr)>0 ){ /* This alert should be sent as a separate email */ Blob fhdr, fbody; blob_init(&fhdr, 0, 0); blob_appendf(&fhdr, "To: <%s>\r\n", zEmail); blob_append(&fhdr, blob_buffer(&p->hdr), blob_size(&p->hdr)); blob_init(&fbody, blob_buffer(&p->txt), blob_size(&p->txt)); | < < < < | 3146 3147 3148 3149 3150 3151 3152 3153 3154 3155 3156 3157 3158 3159 | if( blob_size(&p->hdr)>0 ){ /* This alert should be sent as a separate email */ Blob fhdr, fbody; blob_init(&fhdr, 0, 0); blob_appendf(&fhdr, "To: <%s>\r\n", zEmail); blob_append(&fhdr, blob_buffer(&p->hdr), blob_size(&p->hdr)); blob_init(&fbody, blob_buffer(&p->txt), blob_size(&p->txt)); blob_appendf(&fbody, "\n-- \nUnsubscribe: %s/unsubscribe/%s\n", zUrl, zCode); /* blob_appendf(&fbody, "Subscription settings: %s/alerts/%s\n", ** zUrl, zCode); */ alert_send(pSender,&fhdr,&fbody,p->zFromName); nSent++; blob_reset(&fhdr); |
︙ | ︙ | |||
3198 3199 3200 3201 3202 3203 3204 | } nHit++; blob_append(&body, "\n", 1); blob_append(&body, blob_buffer(&p->txt), blob_size(&p->txt)); } } if( nHit==0 ) continue; | | | 3172 3173 3174 3175 3176 3177 3178 3179 3180 3181 3182 3183 3184 3185 3186 | } nHit++; blob_append(&body, "\n", 1); blob_append(&body, blob_buffer(&p->txt), blob_size(&p->txt)); } } if( nHit==0 ) continue; blob_appendf(&hdr, "List-Unsubscribe: <%s/unsubscribe/%s>\r\n", zUrl, zCode); blob_appendf(&hdr, "List-Unsubscribe-Post: List-Unsubscribe=One-Click\r\n"); blob_appendf(&body,"\n-- \nSubscription info: %s/alerts/%s\n", zUrl, zCode); alert_send(pSender,&hdr,&body,0); nSent++; blob_truncate(&hdr, 0); |
︙ | ︙ | |||
3221 3222 3223 3224 3225 3226 3227 | ** alerts that have been completely sent. */ db_multi_exec("DELETE FROM pending_alert WHERE sentDigest AND sentSep;"); /* Send renewal messages to subscribers whose subscriptions are about ** to expire. Only do this if: ** | | | 3195 3196 3197 3198 3199 3200 3201 3202 3203 3204 3205 3206 3207 3208 3209 | ** alerts that have been completely sent. */ db_multi_exec("DELETE FROM pending_alert WHERE sentDigest AND sentSep;"); /* Send renewal messages to subscribers whose subscriptions are about ** to expire. Only do this if: ** ** (1) email-renew-interval is 14 or greater (or in other words if ** subscription expiration is enabled). ** ** (2) The SENDALERT_RENEWAL flag is set */ send_alert_expiration_warnings: if( (flags & SENDALERT_RENEWAL)!=0 && (iInterval = db_get_int("email-renew-interval",0))>=14 |
︙ | ︙ | |||
3250 3251 3252 3253 3254 3255 3256 | " AND length(sdigest)>0", iNewWarn, iOldWarn ); while( db_step(&q)==SQLITE_ROW ){ Blob hdr, body; blob_init(&hdr, 0, 0); blob_init(&body, 0, 0); | | | 3224 3225 3226 3227 3228 3229 3230 3231 3232 3233 3234 3235 3236 3237 3238 | " AND length(sdigest)>0", iNewWarn, iOldWarn ); while( db_step(&q)==SQLITE_ROW ){ Blob hdr, body; blob_init(&hdr, 0, 0); blob_init(&body, 0, 0); alert_renewal_msg(&hdr, &body, db_column_text(&q,0), db_column_int(&q,1), db_column_text(&q,2), db_column_text(&q,3), zRepoName, zUrl); alert_send(pSender,&hdr,&body,0); blob_reset(&hdr); |
︙ | ︙ | |||
3320 3321 3322 3323 3324 3325 3326 | style_set_current_feature("alerts"); if( zAdminEmail==0 || zAdminEmail[0]==0 ){ style_header("Outbound Email Disabled"); @ <p>Outbound email is disabled on this repository style_finish_page(); return; } | | | 3294 3295 3296 3297 3298 3299 3300 3301 3302 3303 3304 3305 3306 3307 3308 | style_set_current_feature("alerts"); if( zAdminEmail==0 || zAdminEmail[0]==0 ){ style_header("Outbound Email Disabled"); @ <p>Outbound email is disabled on this repository style_finish_page(); return; } if( P("submit")!=0 && P("subject")!=0 && P("msg")!=0 && P("from")!=0 && cgi_csrf_safe(2) && captcha_is_correct(0) ){ Blob hdr, body; |
︙ | ︙ |
Changes to src/allrepo.c.
︙ | ︙ | |||
29 30 31 32 33 34 35 | */ static void collect_argument(Blob *pExtra,const char *zArg,const char *zShort){ const char *z = find_option(zArg, zShort, 0); if( z!=0 ){ blob_appendf(pExtra, " %s", z); } } | | < < | | 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 | */ static void collect_argument(Blob *pExtra,const char *zArg,const char *zShort){ const char *z = find_option(zArg, zShort, 0); if( z!=0 ){ blob_appendf(pExtra, " %s", z); } } static void collect_argument_value(Blob *pExtra, const char *zArg){ const char *zValue = find_option(zArg, 0, 1); if( zValue ){ if( zValue[0] ){ blob_appendf(pExtra, " --%s %$", zArg, zValue); }else{ blob_appendf(pExtra, " --%s \"\"", zArg); } } |
︙ | ︙ | |||
106 107 108 109 110 111 112 | ** --verbose and --share-links options are supported. ** ** push Run a "push" on all repositories. Only the --verbose ** option is supported. ** ** rebuild Rebuild on all repositories. The command line options ** supported by the rebuild command itself, if any are | | | | 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 | ** --verbose and --share-links options are supported. ** ** push Run a "push" on all repositories. Only the --verbose ** option is supported. ** ** rebuild Rebuild on all repositories. The command line options ** supported by the rebuild command itself, if any are ** present, are passed along verbatim. The --force and ** --randomize options are not supported. ** ** remote Show remote hosts for all repositories. ** ** repack Look for extra compression in all repositories. ** ** sync Run a "sync" on all repositories. Only the --verbose ** and --unversioned and --share-links options are supported. |
︙ | ︙ | |||
132 133 134 135 136 137 138 | ** ** ui Run the "ui" command on all repositories. Like "server" ** but bind to the loopback TCP address only, enable ** the --localauth option and automatically launch a ** web-browser ** ** whatis Run the "whatis" command on all repositories. Only | | | 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 | ** ** ui Run the "ui" command on all repositories. Like "server" ** but bind to the loopback TCP address only, enable ** the --localauth option and automatically launch a ** web-browser ** ** whatis Run the "whatis" command on all repositories. Only ** show output for repositories that have a match. ** ** ** In addition, the following maintenance operations are supported: ** ** add Add all the repositories named to the set of repositories ** tracked by Fossil. Normally Fossil is able to keep up with ** this list by itself, but sometimes it can benefit from this |
︙ | ︙ | |||
210 211 212 213 214 215 216 | if( file_isdir(zDest, ExtFILE)!=1 ){ fossil_fatal("argument to \"fossil all backup\" must be a directory"); } blob_appendf(&extra, " %$", zDest); }else if( fossil_strcmp(zCmd, "clean")==0 ){ zCmd = "clean --chdir"; collect_argument(&extra, "allckouts",0); | | | | | | 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 | if( file_isdir(zDest, ExtFILE)!=1 ){ fossil_fatal("argument to \"fossil all backup\" must be a directory"); } blob_appendf(&extra, " %$", zDest); }else if( fossil_strcmp(zCmd, "clean")==0 ){ zCmd = "clean --chdir"; collect_argument(&extra, "allckouts",0); collect_argument_value(&extra, "case-sensitive"); collect_argument_value(&extra, "clean"); collect_argument(&extra, "dirsonly",0); collect_argument(&extra, "disable-undo",0); collect_argument(&extra, "dotfiles",0); collect_argument(&extra, "emptydirs",0); collect_argument(&extra, "force","f"); collect_argument_value(&extra, "ignore"); collect_argument_value(&extra, "keep"); collect_argument(&extra, "no-prompt",0); collect_argument(&extra, "temp",0); collect_argument(&extra, "verbose","v"); collect_argument(&extra, "whatif",0); useCheckouts = 1; }else if( fossil_strcmp(zCmd, "config")==0 ){ zCmd = "config -R"; |
︙ | ︙ | |||
247 248 249 250 251 252 253 | }else if( fossil_strcmp(zCmd, "extras")==0 ){ if( showFile ){ zCmd = "extras --chdir"; }else{ zCmd = "extras --header --chdir"; } collect_argument(&extra, "abs-paths",0); | | | | 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 | }else if( fossil_strcmp(zCmd, "extras")==0 ){ if( showFile ){ zCmd = "extras --chdir"; }else{ zCmd = "extras --header --chdir"; } collect_argument(&extra, "abs-paths",0); collect_argument_value(&extra, "case-sensitive"); collect_argument(&extra, "dotfiles",0); collect_argument_value(&extra, "ignore"); collect_argument(&extra, "rel-paths",0); useCheckouts = 1; stopOnError = 0; quiet = 1; }else if( fossil_strcmp(zCmd, "git")==0 ){ if( g.argc<4 ){ usage("git (export|status)"); |
︙ | ︙ | |||
276 277 278 279 280 281 282 | collect_argument(&extra, "verbose","v"); }else if( fossil_strcmp(zCmd, "pull")==0 ){ zCmd = "pull -autourl -R"; collect_argument(&extra, "verbose","v"); collect_argument(&extra, "share-links",0); }else if( fossil_strcmp(zCmd, "rebuild")==0 ){ zCmd = "rebuild"; | < | | | 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 | collect_argument(&extra, "verbose","v"); }else if( fossil_strcmp(zCmd, "pull")==0 ){ zCmd = "pull -autourl -R"; collect_argument(&extra, "verbose","v"); collect_argument(&extra, "share-links",0); }else if( fossil_strcmp(zCmd, "rebuild")==0 ){ zCmd = "rebuild"; collect_argument(&extra, "cluster",0); collect_argument(&extra, "compress",0); collect_argument(&extra, "compress-only",0); collect_argument(&extra, "noverify",0); collect_argument_value(&extra, "pagesize"); collect_argument(&extra, "vacuum",0); collect_argument(&extra, "deanalyze",0); collect_argument(&extra, "analyze",0); collect_argument(&extra, "wal",0); collect_argument(&extra, "stats",0); collect_argument(&extra, "index",0); collect_argument(&extra, "noindex",0); collect_argument(&extra, "ifneeded", 0); }else if( fossil_strcmp(zCmd, "remote")==0 ){ |
︙ | ︙ | |||
415 416 417 418 419 420 421 | }else if( fossil_strcmp(zCmd, "cache")==0 ){ zCmd = "cache -R"; showLabel = 1; collect_argv(&extra, 3); }else if( fossil_strcmp(zCmd, "whatis")==0 ){ zCmd = "whatis -q -R"; quiet = 1; | < < | 412 413 414 415 416 417 418 419 420 421 422 423 424 425 | }else if( fossil_strcmp(zCmd, "cache")==0 ){ zCmd = "cache -R"; showLabel = 1; collect_argv(&extra, 3); }else if( fossil_strcmp(zCmd, "whatis")==0 ){ zCmd = "whatis -q -R"; quiet = 1; collect_argv(&extra, 3); }else{ fossil_fatal("\"all\" subcommand should be one of: " "add cache changes clean dbstat extras fts-config git ignore " "info list ls pull push rebuild remote " "server setting sync ui unset whatis"); } |
︙ | ︙ |
Changes to src/attach.c.
︙ | ︙ | |||
748 749 750 751 752 753 754 | if( (pWiki = manifest_get(rid, CFTYPE_EVENT, 0))!=0 ){ zBody = pWiki->zWiki; } if( zBody==0 ){ fossil_fatal("technote [%s] not found",zETime); } zTarget = db_text(0, | | < | 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 | if( (pWiki = manifest_get(rid, CFTYPE_EVENT, 0))!=0 ){ zBody = pWiki->zWiki; } if( zBody==0 ){ fossil_fatal("technote [%s] not found",zETime); } zTarget = db_text(0, "SELECT substr(tagname,7) FROM tag WHERE tagid=(SELECT tagid FROM event WHERE objid='%d')", rid ); zFile = g.argv[3]; } blob_read_from_file(&content, zFile, ExtFILE); user_select(); attach_commit( |
︙ | ︙ |
Changes to src/backlink.c.
︙ | ︙ | |||
247 248 249 250 251 252 253 | void *opaque ){ Backlink *p = (Backlink*)opaque; char *zTarget = blob_buffer(target); int nTarget = blob_size(target); backlink_create(p, zTarget, nTarget); | | | 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 | void *opaque ){ Backlink *p = (Backlink*)opaque; char *zTarget = blob_buffer(target); int nTarget = blob_size(target); backlink_create(p, zTarget, nTarget); return 1; } /* No-op routines for the rendering callbacks that we do not need */ static void mkdn_noop_prolog(Blob *b, void *v){ return; } static void (*mkdn_noop_epilog)(Blob*, void*) = mkdn_noop_prolog; static void mkdn_noop_footnotes(Blob *b1, const Blob *b2, void *v){ return; } static void mkdn_noop_blockcode(Blob *b1, Blob *b2, void *v){ return; } |
︙ | ︙ |
Changes to src/backoffice.c.
︙ | ︙ | |||
313 314 315 316 317 318 319 | ** we cannot prove that the process is dead, return true. */ static int backofficeProcessExists(sqlite3_uint64 pid){ #if defined(_WIN32) return pid>0 && backofficeWin32ProcessExists((DWORD)pid)!=0; #else return pid>0 && kill((pid_t)pid, 0)==0; | | | | 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 | ** we cannot prove that the process is dead, return true. */ static int backofficeProcessExists(sqlite3_uint64 pid){ #if defined(_WIN32) return pid>0 && backofficeWin32ProcessExists((DWORD)pid)!=0; #else return pid>0 && kill((pid_t)pid, 0)==0; #endif } /* ** Check to see if the process identified by pid has finished. If ** we cannot prove that the process is still running, return true. */ static int backofficeProcessDone(sqlite3_uint64 pid){ #if defined(_WIN32) return pid<=0 || backofficeWin32ProcessExists((DWORD)pid)==0; #else return pid<=0 || kill((pid_t)pid, 0)!=0; #endif } /* ** Return a process id number for the current process */ static sqlite3_uint64 backofficeProcessId(void){ return (sqlite3_uint64)GETPID(); |
︙ | ︙ | |||
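The backofficeProcessExists()/backofficeProcessDone() pair above relies on kill() with signal 0, which performs permission and existence checking without delivering any signal. A POSIX-only sketch of the same liveness test (`process_exists` is an illustrative name, not a function in backoffice.c):

```c
#include <assert.h>
#include <signal.h>
#include <sys/types.h>
#include <unistd.h>

/* Return true if a process with the given id appears to exist.
** kill(pid,0) sends no signal; it returns 0 when pid names a live
** process we are permitted to signal, mirroring the unix branch of
** backofficeProcessExists(). */
static int process_exists(pid_t pid){
  return pid>0 && kill(pid, 0)==0;
}
```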
673 674 675 676 677 678 679 | ** This might be done by a cron job or similar to make sure backoffice ** processing happens periodically. Or, the --poll option can be used ** to run this command as a daemon that will periodically invoke backoffice ** on a collection of repositories. ** ** If only a single repository is named and --poll is omitted, then the ** backoffice work is done in-process. But if there are multiple repositories | | | 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 | ** This might be done by a cron job or similar to make sure backoffice ** processing happens periodically. Or, the --poll option can be used ** to run this command as a daemon that will periodically invoke backoffice ** on a collection of repositories. ** ** If only a single repository is named and --poll is omitted, then the ** backoffice work is done in-process. But if there are multiple repositories ** or if --poll is used, a separate sub-process is started for each poll of ** each repository. ** ** Standard options: ** ** --debug Show what this command is doing ** ** --logfile FILE Append a log of backoffice actions onto FILE |
︙ | ︙ |
Changes to src/bag.c.
︙ | ︙ | |||
72 73 74 75 76 77 78 | free(p->a); bag_init(p); } /* ** The hash function */ | | | 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 | free(p->a); bag_init(p); } /* ** The hash function */ #define bag_hash(i) (i*101) /* ** Change the size of the hash table on a bag so that ** it contains N slots ** ** Completely reconstruct the hash table from scratch. Deleted ** entries (indicated by a -1) are removed. When finished, it |
︙ | ︙ |
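The bag_hash() macro in the hunk above is a simple multiplicative hash: multiply the integer key by 101, then reduce modulo the table size when probing. A sketch of how a slot index would be derived (`bag_slot` is an illustrative helper, not a function in bag.c; the macro argument is parenthesized here for safety):

```c
#include <assert.h>

/* Multiplicative hash in the style of bag.c's bag_hash() macro. */
#define BAG_HASH(i) ((i)*101u)

/* Map a key to one of nSlot hash-table slots (nSlot must be > 0). */
static unsigned bag_slot(unsigned key, unsigned nSlot){
  return BAG_HASH(key) % nSlot;
}
```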
Changes to src/blob.c.
︙ | ︙ | |||
1549 1550 1551 1552 1553 1554 1555 | z[--j] = z[i]; } } } /* ** ASCII (for reference): | | | | | | | | | | | | 1549 1550 1551 1552 1553 1554 1555 1556 1557 1558 1559 1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 | z[--j] = z[i]; } } } /* ** ASCII (for reference): ** x0 x1 x2 x3 x4 x5 x6 x7 x8 x9 xa xb xc xd xe xf ** 0x ^` ^a ^b ^c ^d ^e ^f ^g \b \t \n () \f \r ^n ^o ** 1x ^p ^q ^r ^s ^t ^u ^v ^w ^x ^y ^z ^{ ^| ^} ^~ ^ ** 2x () ! " # $ % & ' ( ) * + , - . / ** 3x 0 1 2 3 4 5 6 7 8 9 : ; < = > ? ** 4x @ A B C D E F G H I J K L M N O ** 5x P Q R S T U V W X Y Z [ \ ] ^ _ ** 6x ` a b c d e f g h i j k l m n o ** 7x p q r s t u v w x y z { | } ~ ^_ */ /* ** Meanings for bytes in a filename: ** ** 0 Ordinary character. No encoding required ** 1 Needs to be escaped ** 2 Illegal character. Do not allow in a filename ** 3 First byte of a 2-byte UTF-8 ** 4 First byte of a 3-byte UTF-8 ** 5 First byte of a 4-byte UTF-8 */ static const char aSafeChar[256] = { #ifdef _WIN32 /* Windows ** Prohibit: all control characters, including tab, \r and \n ** Escape: (space) " # $ % & ' ( ) * ; < > ? [ ] ^ ` { | } */ /* x0 x1 x2 x3 x4 x5 x6 x7 x8 x9 xa xb xc xd xe xf */ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, /* 0x */ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, /* 1x */ 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, /* 2x */ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, /* 3x */ |
︙ | ︙ | |||
1663 1664 1665 1666 1667 1668 1669 | blob_token(pBlob, &bad); fossil_fatal("the [%s] argument to the \"%s\" command contains " "an illegal UTF-8 character", zIn, blob_str(&bad)); } i += x-2; } | | | 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677 | blob_token(pBlob, &bad); fossil_fatal("the [%s] argument to the \"%s\" command contains " "an illegal UTF-8 character", zIn, blob_str(&bad)); } i += x-2; } } } /* Separate from the previous argument by a space */ if( n>0 && !fossil_isspace(z[n-1]) ){ blob_append_char(pBlob, ' '); } |
︙ | ︙ | |||
1698 1699 1700 1701 1702 1703 1704 | blob_append_char(pBlob, '\\'); }else if( zIn[0]=='/' ){ blob_append_char(pBlob, '.'); } for(i=0; (c = (unsigned char)zIn[i])!=0; i++){ blob_append_char(pBlob, (char)c); if( c=='"' ) blob_append_char(pBlob, '"'); | < < | 1698 1699 1700 1701 1702 1703 1704 1705 1706 1707 1708 1709 1710 1711 | blob_append_char(pBlob, '\\'); }else if( zIn[0]=='/' ){ blob_append_char(pBlob, '.'); } for(i=0; (c = (unsigned char)zIn[i])!=0; i++){ blob_append_char(pBlob, (char)c); if( c=='"' ) blob_append_char(pBlob, '"'); } blob_append_char(pBlob, '"'); #else /* Quoting strategy for unix: ** If the name does not contain ', then surround the whole thing ** with '...'. If there is one or more ' characters within the ** name, then put \ before each special character. |
︙ | ︙ | |||
1795 1796 1797 1798 1799 1800 1801 | } #ifdef _WIN32 if( zBuf[0]=='-' && zArg[0]=='.' && zArg[1]=='\\' ) zArg += 2; #else if( zBuf[0]=='-' && zArg[0]=='.' && zArg[1]=='/' ) zArg += 2; #endif if( strcmp(zBuf, zArg)!=0 ){ | | 1793 1794 1795 1796 1797 1798 1799 1800 1801 1802 1803 1804 1805 1806 1807 | } #ifdef _WIN32 if( zBuf[0]=='-' && zArg[0]=='.' && zArg[1]=='\\' ) zArg += 2; #else if( zBuf[0]=='-' && zArg[0]=='.' && zArg[1]=='/' ) zArg += 2; #endif if( strcmp(zBuf, zArg)!=0 ){ fossil_fatal("arguments disagree: \"%s\" (%s) versus \"%s\"", zBuf, g.argv[i-1], zArg); } continue; }else if( fossil_strcmp(zArg, "--fuzz")==0 && i+1<g.argc ){ int n = atoi(g.argv[++i]); int j; for(j=0; j<n; j++){
︙ | ︙ |
Changes to src/branch.c.
︙ | ︙ | |||
304 305 306 307 308 309 310 | const char *zUser ){ Blob sql; blob_init(&sql, 0, 0); brlist_create_temp_table(); /* Ignore nLimitMRU if no chronological sort requested. */ if( (brFlags & BRL_ORDERBY_MTIME)==0 ) nLimitMRU = 0; | | > | | 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 | const char *zUser ){ Blob sql; blob_init(&sql, 0, 0); brlist_create_temp_table(); /* Ignore nLimitMRU if no chronological sort requested. */ if( (brFlags & BRL_ORDERBY_MTIME)==0 ) nLimitMRU = 0; /* Undocumented: invert negative values for nLimitMRU, so that command-line ** arguments similar to `head -5' with "option numbers" are possible. */ if( nLimitMRU<0 ) nLimitMRU = -nLimitMRU; /* OUTER QUERY */ blob_append_sql(&sql,"SELECT name, isprivate, mergeto,"); if( brFlags & BRL_LIST_USERS ){ blob_append_sql(&sql, " (SELECT group_concat(user) FROM (" " SELECT DISTINCT * FROM (" " SELECT coalesce(euser,user) AS user" |
︙ | ︙ | |||
338 339 340 341 342 343 344 | blob_append_sql(&sql, "SELECT name, isprivate, mtime, mergeto FROM tmp_brlist WHERE 1" ); break; } case BRL_OPEN_ONLY: { blob_append_sql(&sql, | | < | 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 | blob_append_sql(&sql, "SELECT name, isprivate, mtime, mergeto FROM tmp_brlist WHERE 1" ); break; } case BRL_OPEN_ONLY: { blob_append_sql(&sql, "SELECT name, isprivate, mtime, mergeto FROM tmp_brlist WHERE NOT isclosed" ); break; } } if( brFlags & BRL_PRIVATE ) blob_append_sql(&sql, " AND isprivate"); if( brFlags & BRL_MERGED ) blob_append_sql(&sql, " AND mergeto IS NOT NULL"); if( zBrNameGlob ) blob_append_sql(&sql, " AND (name GLOB %Q)", zBrNameGlob); |
︙ | ︙ | |||
771 772 773 774 775 776 777 | int isPriv = db_column_int(&q, 1)==1; const char *zMergeTo = db_column_text(&q, 2); int isCur = zCurrent!=0 && fossil_strcmp(zCurrent,zBr)==0; const char *zUsers = db_column_text(&q, 3); if( (brFlags & BRL_MERGED) && fossil_strcmp(zCurrent,zMergeTo)!=0 ){ continue; } | | | 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 | int isPriv = db_column_int(&q, 1)==1; const char *zMergeTo = db_column_text(&q, 2); int isCur = zCurrent!=0 && fossil_strcmp(zCurrent,zBr)==0; const char *zUsers = db_column_text(&q, 3); if( (brFlags & BRL_MERGED) && fossil_strcmp(zCurrent,zMergeTo)!=0 ){ continue; } if( (brFlags & BRL_UNMERGED) && (fossil_strcmp(zCurrent,zMergeTo)==0 || isCur) ){ continue; } blob_appendf(&txt, "%s%s%s", ( (brFlags & BRL_PRIVATE) ? " " : ( isPriv ? "#" : " ") ), (isCur ? "* " : " "), zBr); if( nUsers ){ |
︙ | ︙ | |||
804 805 806 807 808 809 810 | blob_reset(&txt); } db_finalize(&q); }else if( strncmp(zCmd,"new",n)==0 ){ branch_new(); }else if( strncmp(zCmd,"close",5)==0 ){ if(g.argc<4){ | | | | | | 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 | blob_reset(&txt); } db_finalize(&q); }else if( strncmp(zCmd,"new",n)==0 ){ branch_new(); }else if( strncmp(zCmd,"close",5)==0 ){ if(g.argc<4){ usage("branch close branch-name(s)..."); } branch_cmd_close(3, 1); }else if( strncmp(zCmd,"reopen",6)==0 ){ if(g.argc<4){ usage("branch reopen branch-name(s)..."); } branch_cmd_close(3, 0); }else if( strncmp(zCmd,"hide",4)==0 ){ if(g.argc<4){ usage("branch hide branch-name(s)..."); } branch_cmd_hide(3,1); }else if( strncmp(zCmd,"unhide",6)==0 ){ if(g.argc<4){ usage("branch unhide branch-name(s)..."); } branch_cmd_hide(3,0); }else{ fossil_fatal("branch subcommand should be one of: " "close current hide info list ls lsh new reopen unhide"); } } |
︙ | ︙ | |||
885 886 887 888 889 890 891 | } } if( zBgClr && zBgClr[0] && show_colors ){ @ <tr style="background-color:%s(zBgClr)"> }else{ @ <tr> } | | | 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 | } } if( zBgClr && zBgClr[0] && show_colors ){ @ <tr style="background-color:%s(zBgClr)"> }else{ @ <tr> } @ <td>%z(href("%R/timeline?r=%T",zBranch))%h(zBranch)</a><input @ type="checkbox" disabled="disabled"/></td> @ <td data-sortkey="%016llx(iMtime)">%s(zAge)</td> @ <td>%d(nCkin)</td> fossil_free(zAge); @ <td>%s(isClosed?"closed":"")</td> if( zMergeTo ){ @ <td>merged into |
︙ | ︙ |
Changes to src/browse.c.
︙ | ︙ | |||
356 357 358 359 360 361 362 | /* Generate a multi-column table listing the contents of zD[] ** directory. */ mxLen = db_int(12, "SELECT max(length(x)) FROM localfiles /*scan*/"); if( mxLen<12 ) mxLen = 12; mxLen += (mxLen+9)/10; | | | 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 | /* Generate a multi-column table listing the contents of zD[] ** directory. */ mxLen = db_int(12, "SELECT max(length(x)) FROM localfiles /*scan*/"); if( mxLen<12 ) mxLen = 12; mxLen += (mxLen+9)/10; db_prepare(&q, "SELECT x, u FROM localfiles ORDER BY x COLLATE uintnocase /*scan*/"); @ <div class="columns files" style="columns: %d(mxLen)ex auto"> @ <ul class="browser"> while( db_step(&q)==SQLITE_ROW ){ const char *zFN; zFN = db_column_text(&q, 0); if( zFN[0]=='/' ){ |
︙ | ︙ | |||
469 470 471 472 473 474 475 | FileTreeNode *pSibling; /* Next element in the same subdirectory */ FileTreeNode *pChild; /* List of child nodes */ FileTreeNode *pLastChild; /* Last child on the pChild list */ char *zName; /* Name of this entry. The "tail" */ char *zFullName; /* Full pathname of this entry */ char *zUuid; /* Artifact hash of this file. May be NULL. */ double mtime; /* Modification time for this entry */ | | < | 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 | FileTreeNode *pSibling; /* Next element in the same subdirectory */ FileTreeNode *pChild; /* List of child nodes */ FileTreeNode *pLastChild; /* Last child on the pChild list */ char *zName; /* Name of this entry. The "tail" */ char *zFullName; /* Full pathname of this entry */ char *zUuid; /* Artifact hash of this file. May be NULL. */ double mtime; /* Modification time for this entry */ double sortBy; /* Either mtime or size, depending on desired sort order */ int iSize; /* Size for this entry */ unsigned nFullName; /* Length of zFullName */ unsigned iLevel; /* Levels of parent directories */ }; /* ** A complete file hierarchy |
︙ | ︙ | |||
507 508 509 510 511 512 513 | const char *zUuid, /* Hash of the file. Might be NULL. */ double mtime, /* Modification time for this entry */ int size, /* Size for this entry */ int sortOrder /* 0: filename, 1: mtime, 2: size */ ){ int i; FileTreeNode *pParent; /* Parent (directory) of the next node to insert */ | | 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 | const char *zUuid, /* Hash of the file. Might be NULL. */ double mtime, /* Modification time for this entry */ int size, /* Size for this entry */ int sortOrder /* 0: filename, 1: mtime, 2: size */ ){ int i; FileTreeNode *pParent; /* Parent (directory) of the next node to insert */ /* Make pParent point to the most recent ancestor of zPath, or ** NULL if there are no prior entries that are a container for zPath. */ pParent = pTree->pLast; while( pParent!=0 && ( strncmp(pParent->zFullName, zPath, pParent->nFullName)!=0 || zPath[pParent->nFullName]!='/' )
︙ | ︙ |
Changes to src/builtin.c.
︙ | ︙ | |||
519 520 521 522 523 524 525 | builtinVtab_cursor *pCur = (builtinVtab_cursor*)cur; return pCur->iRowid>count(aBuiltinFiles); } /* ** This method is called to "rewind" the builtinVtab_cursor object back ** to the first row of output. This method is always called at least | | | | 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 | builtinVtab_cursor *pCur = (builtinVtab_cursor*)cur; return pCur->iRowid>count(aBuiltinFiles); } /* ** This method is called to "rewind" the builtinVtab_cursor object back ** to the first row of output. This method is always called at least ** once prior to any call to builtinVtabColumn() or builtinVtabRowid() or ** builtinVtabEof(). */ static int builtinVtabFilter( sqlite3_vtab_cursor *pVtabCursor, int idxNum, const char *idxStr, int argc, sqlite3_value **argv ){ builtinVtab_cursor *pCur = (builtinVtab_cursor *)pVtabCursor; pCur->iRowid = 1; return SQLITE_OK; } |
︙ | ︙ | |||
548 549 550 551 552 553 554 | ){ pIdxInfo->estimatedCost = (double)count(aBuiltinFiles); pIdxInfo->estimatedRows = count(aBuiltinFiles); return SQLITE_OK; } /* | | 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 | ){ pIdxInfo->estimatedCost = (double)count(aBuiltinFiles); pIdxInfo->estimatedRows = count(aBuiltinFiles); return SQLITE_OK; } /* ** The following structure defines all the methods for the ** virtual table. */ static sqlite3_module builtinVtabModule = { /* iVersion */ 0, /* xCreate */ 0, /* The builtin vtab is eponymous and read-only */ /* xConnect */ builtinVtabConnect, /* xBestIndex */ builtinVtabBestIndex,
︙ | ︙ | |||
575 576 577 578 579 580 581 | /* xCommit */ 0, /* xRollback */ 0, /* xFindMethod */ 0, /* xRename */ 0, /* xSavepoint */ 0, /* xRelease */ 0, /* xRollbackTo */ 0, | | < | 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 | /* xCommit */ 0, /* xRollback */ 0, /* xFindMethod */ 0, /* xRename */ 0, /* xSavepoint */ 0, /* xRelease */ 0, /* xRollbackTo */ 0, /* xShadowName */ 0 }; /* ** Register the builtin virtual table */ int builtin_vtab_register(sqlite3 *db){ |
︙ | ︙ | |||
814 815 816 817 818 819 820 | ** per-page basis. In this case, all arguments are ignored! ** ** This function has an internal mapping of the dependencies for each ** of the known fossil.XYZ.js modules and ensures that the ** dependencies also get queued (recursively) and that each module is ** queued only once. ** | | | 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 | ** per-page basis. In this case, all arguments are ignored! ** ** This function has an internal mapping of the dependencies for each ** of the known fossil.XYZ.js modules and ensures that the ** dependencies also get queued (recursively) and that each module is ** queued only once. ** ** If passed a name which is not a base fossil module name then it ** will fail fatally! ** ** DO NOT use this for loading fossil.page.*.js: use ** builtin_request_js() for those. ** ** If the current JS delivery mode is *not* JS_BUNDLED then this ** function queues up a request for each given module and its known |
︙ | ︙ |
Changes to src/bundle.c.
︙ | ︙ | |||
144 145 146 147 148 149 150 | db_finalize(&q); } } /* ** Implement the "fossil bundle append BUNDLE FILE..." command. Add ** the named files into the BUNDLE. Create the BUNDLE if it does not | | 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 | db_finalize(&q); } } /* ** Implement the "fossil bundle append BUNDLE FILE..." command. Add ** the named files into the BUNDLE. Create the BUNDLE if it does not ** already exist. */ static void bundle_append_cmd(void){ Blob content, hash; int i; Stmt q; verify_all_options();
︙ | ︙ | |||
537 538 539 540 541 542 543 | ** Write elements of a bundle on standard output */ static void bundle_cat_cmd(void){ int i; Blob x; verify_all_options(); if( g.argc<5 ) usage("cat BUNDLE HASH..."); | | | 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 | ** Write elements of a bundle on standard output */ static void bundle_cat_cmd(void){ int i; Blob x; verify_all_options(); if( g.argc<5 ) usage("cat BUNDLE HASH..."); bundle_attach_file(g.argv[3], "b1", 1); blob_zero(&x); for(i=4; i<g.argc; i++){ int blobid = db_int(0,"SELECT blobid FROM bblob WHERE uuid LIKE '%q%%'", g.argv[i]); if( blobid==0 ){ fossil_fatal("no such artifact in bundle: %s", g.argv[i]); } |
︙ | ︙ | |||
567 568 569 570 571 572 573 | */ static void bundle_import_cmd(void){ int forceFlag = find_option("force","f",0)!=0; int isPriv = find_option("publish",0,0)==0; char *zMissingDeltas; verify_all_options(); if ( g.argc!=4 ) usage("import BUNDLE ?OPTIONS?"); | | | 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 | */ static void bundle_import_cmd(void){ int forceFlag = find_option("force","f",0)!=0; int isPriv = find_option("publish",0,0)==0; char *zMissingDeltas; verify_all_options(); if ( g.argc!=4 ) usage("import BUNDLE ?OPTIONS?"); bundle_attach_file(g.argv[3], "b1", 1); /* Only import a bundle that was generated from a repo with the same ** project code, unless the --force flag is true */ if( !forceFlag ){ if( !db_exists("SELECT 1 FROM config, bconfig" " WHERE config.name='project-code'" " AND bconfig.bcname='project-code'" |
︙ | ︙ |
Changes to src/cache.c.
︙ | ︙ | |||
311 312 313 314 315 316 317 | sqlite3_exec(db, "DELETE FROM cache; DELETE FROM blob; VACUUM;",0,0,0); sqlite3_close(db); fossil_print("cache cleared\n"); }else{ fossil_print("nothing to clear; cache does not exist\n"); } }else if( strncmp(zCmd, "list", nCmd)==0 | | | 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 | sqlite3_exec(db, "DELETE FROM cache; DELETE FROM blob; VACUUM;",0,0,0); sqlite3_close(db); fossil_print("cache cleared\n"); }else{ fossil_print("nothing to clear; cache does not exist\n"); } }else if( strncmp(zCmd, "list", nCmd)==0 || strncmp(zCmd, "ls", nCmd)==0 || strncmp(zCmd, "status", nCmd)==0 ){ db = cacheOpen(0); if( db==0 ){ fossil_print("cache does not exist\n"); }else{ int nEntry = 0; |
︙ | ︙ | |||
430 431 432 433 434 435 436 | @ hit-count: %d(sqlite3_column_int(pStmt,2)) @ last-access: %s(sqlite3_column_text(pStmt,3)) \ if( zHash ){ @ %z(href("%R/timeline?c=%S",zHash))check-in</a>\ fossil_free(zHash); } @ </p></li> | | | 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 | @ hit-count: %d(sqlite3_column_int(pStmt,2)) @ last-access: %s(sqlite3_column_text(pStmt,3)) \ if( zHash ){ @ %z(href("%R/timeline?c=%S",zHash))check-in</a>\ fossil_free(zHash); } @ </p></li> } sqlite3_finalize(pStmt); @ </ol> } zDbName = cacheName(); bigSizeName(sizeof(zBuf), zBuf, file_size(zDbName, ExtFILE)); @ <p> |
︙ | ︙ |
Changes to src/capabilities.c.
︙ | ︙ | |||
399 400 401 402 403 404 405 | @ <th>Unversioned Content</th></tr> while( db_step(&q)==SQLITE_ROW ){ const char *zId = db_column_text(&q, 0); const char *zCap = db_column_text(&q, 1); int n = db_column_int(&q, 3); int eType; static const char *const azType[] = { "off", "read", "write" }; | | | 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 | @ <th>Unversioned Content</th></tr> while( db_step(&q)==SQLITE_ROW ){ const char *zId = db_column_text(&q, 0); const char *zCap = db_column_text(&q, 1); int n = db_column_int(&q, 3); int eType; static const char *const azType[] = { "off", "read", "write" }; static const char *const azClass[] = { "capsumOff", "capsumRead", "capsumWrite" }; if( n==0 ) continue; /* Code */ if( db_column_int(&q,2)<10 ){ @ <tr><th align="right"><tt>"%h(zId)"</tt></th> |
︙ | ︙ |
Changes to src/cgi.c.
︙ | ︙ | |||
35 36 37 38 39 40 41 | ** So, even though the name of this file implies that it only deals with ** CGI, in fact, the code in this file is used to interpret webpage requests ** received by a variety of means, and to generate well-formatted replies ** to those requests. ** ** The code in this file abstracts the web-request so that downstream ** modules that generate the body of the reply (based on the requested page) | | | 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 | ** So, even though the name of this file implies that it only deals with ** CGI, in fact, the code in this file is used to interpret webpage requests ** received by a variety of means, and to generate well-formatted replies ** to those requests. ** ** The code in this file abstracts the web-request so that downstream ** modules that generate the body of the reply (based on the requested page) ** do not need to know if the request is coming from CGI, direct HTTP, ** SCGI, or some other means. ** ** This module gathers information about web page request into a key/value ** store. Keys and values come from: ** ** * Query parameters ** * POST parameter |
︙ | ︙ | |||
479 480 481 482 483 484 485 | if( iReplyStatus<=0 ){ iReplyStatus = 200; zReplyStatus = "OK"; } if( g.fullHttpReply ){ if( rangeEnd>0 | | | 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 | if( iReplyStatus<=0 ){ iReplyStatus = 200; zReplyStatus = "OK"; } if( g.fullHttpReply ){ if( rangeEnd>0 && iReplyStatus==200 && fossil_strcmp(P("REQUEST_METHOD"),"GET")==0 ){ iReplyStatus = 206; zReplyStatus = "Partial Content"; } blob_appendf(&hdr, "HTTP/1.0 %d %s\r\n", iReplyStatus, zReplyStatus); blob_appendf(&hdr, "Date: %s\r\n", cgi_rfc822_datestamp(time(0))); |
︙ | ︙ | |||
560 561 562 563 564 565 566 | blob_appendf(&hdr, "Content-Encoding: gzip\r\n"); blob_appendf(&hdr, "Vary: Accept-Encoding\r\n"); } total_size = blob_size(&cgiContent[0]) + blob_size(&cgiContent[1]); if( iReplyStatus==206 ){ blob_appendf(&hdr, "Content-Range: bytes %d-%d/%d\r\n", rangeStart, rangeEnd-1, total_size); | | | 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 | blob_appendf(&hdr, "Content-Encoding: gzip\r\n"); blob_appendf(&hdr, "Vary: Accept-Encoding\r\n"); } total_size = blob_size(&cgiContent[0]) + blob_size(&cgiContent[1]); if( iReplyStatus==206 ){ blob_appendf(&hdr, "Content-Range: bytes %d-%d/%d\r\n", rangeStart, rangeEnd-1, total_size); total_size = rangeEnd - rangeStart; } blob_appendf(&hdr, "Content-Length: %d\r\n", total_size); }else{ total_size = 0; } blob_appendf(&hdr, "\r\n"); cgi_fwrite(blob_buffer(&hdr), blob_size(&hdr)); |
︙ | ︙ | |||
1236 1237 1238 1239 1240 1241 1242 | return; } } fputs(z, pLog); } /* Forward declaration */ | | | | 1236 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 | return; } } fputs(z, pLog); } /* Forward declaration */ static NORETURN void malformed_request(const char *zMsg); /* ** Checks the QUERY_STRING environment variable, sets it up ** via add_param_list() and, if found, applies its "skin" ** setting. Returns 0 if no QUERY_STRING is set, 1 if it is, ** and 2 if it sets the skin (in which case the cookie may ** still need flushing by the page, via cookie_render()). */ int cgi_setup_query_string(void){ int rc = 0; char * z = (char*)P("QUERY_STRING"); if( z ){ ++rc; z = fossil_strdup(z); add_param_list(z, '&'); z = (char*)P("skin"); if( z ){ char *zErr = skin_use_alternative(z, 2); ++rc; if( !zErr && P("once")==0 ){ cookie_write_parameter("skin","skin",z); /* Per /chat discussion, passing ?skin=... without "once" ** implies the "udc" argument, so we force that into the ** environment here. */ cgi_set_parameter_nocopy("udc", "1", 1); |
︙ | ︙ | |||
1309 1310 1311 1312 1313 1314 1315 | ** / \ ** https://fossil-scm.org/forum/info/12736b30c072551a?t=c ** \___/ \____________/\____/\____________________/ \_/ ** | | | | | ** | HTTP_HOST | PATH_INFO QUERY_STRING ** | | ** REQUEST_SCHEMA SCRIPT_NAME | | < | 1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 1325 1326 1327 1328 1329 1330 1331 1332 | ** / \ ** https://fossil-scm.org/forum/info/12736b30c072551a?t=c ** \___/ \____________/\____/\____________________/ \_/ ** | | | | | ** | HTTP_HOST | PATH_INFO QUERY_STRING ** | | ** REQUEST_SCHEMA SCRIPT_NAME ** */ void cgi_init(void){ char *z; const char *zType; char *zSemi; int len; const char *zRequestUri = cgi_parameter("REQUEST_URI",0); const char *zScriptName = cgi_parameter("SCRIPT_NAME",0); const char *zPathInfo = cgi_parameter("PATH_INFO",0); #ifdef _WIN32 const char *zServerSoftware = cgi_parameter("SERVER_SOFTWARE",0); #endif #ifdef FOSSIL_ENABLE_JSON const int noJson = P("no_json")!=0; #endif |
︙ | ︙ | |||
1347 1348 1349 1350 1351 1352 1353 | zScriptName = fossil_strndup(zRequestUri,(int)(z-zRequestUri)); cgi_set_parameter("SCRIPT_NAME", zScriptName); } #ifdef _WIN32 /* The Microsoft IIS web server does not define REQUEST_URI, instead it uses ** PATH_INFO for virtually the same purpose. Define REQUEST_URI the same as | | | 1346 1347 1348 1349 1350 1351 1352 1353 1354 1355 1356 1357 1358 1359 1360 | zScriptName = fossil_strndup(zRequestUri,(int)(z-zRequestUri)); cgi_set_parameter("SCRIPT_NAME", zScriptName); } #ifdef _WIN32 /* The Microsoft IIS web server does not define REQUEST_URI, instead it uses ** PATH_INFO for virtually the same purpose. Define REQUEST_URI the same as ** PATH_INFO and redefine PATH_INFO with SCRIPT_NAME removed from the ** beginning. */ if( zServerSoftware && strstr(zServerSoftware, "Microsoft-IIS") ){ int i, j; cgi_set_parameter("REQUEST_URI", zPathInfo); for(i=0; zPathInfo[i]==zScriptName[i] && zPathInfo[i]; i++){} for(j=i; zPathInfo[j] && zPathInfo[j]!='?'; j++){} zPathInfo = fossil_strndup(zPathInfo+i, j-i); |
︙ | ︙ | |||
1406 1407 1408 1409 1410 1411 1412 | #endif z = (char*)P("HTTP_COOKIE"); if( z ){ z = fossil_strdup(z); add_param_list(z, ';'); z = (char*)cookie_value("skin",0); if(z){ | | | < < < < < < < < | 1405 1406 1407 1408 1409 1410 1411 1412 1413 1414 1415 1416 1417 1418 1419 1420 1421 1422 1423 1424 1425 1426 1427 1428 1429 1430 | #endif z = (char*)P("HTTP_COOKIE"); if( z ){ z = fossil_strdup(z); add_param_list(z, ';'); z = (char*)cookie_value("skin",0); if(z){ skin_use_alternative(z, 2); } } cgi_setup_query_string(); z = (char*)P("REMOTE_ADDR"); if( z ){ g.zIpAddr = fossil_strdup(z); } len = atoi(PD("CONTENT_LENGTH", "0")); zType = P("CONTENT_TYPE"); zSemi = zType ? strchr(zType, ';') : 0; if( zSemi ){ g.zContentType = fossil_strndup(zType, (int)(zSemi-zType)); zType = g.zContentType; }else{ g.zContentType = zType; |
︙ | ︙ | |||
1774 1775 1776 1777 1778 1779 1780 | "REQUEST_URI", "SCRIPT_FILENAME", "SCRIPT_NAME", "SERVER_NAME", "SERVER_PROTOCOL", "HOME", "FOSSIL_HOME", "USERNAME", "USER", "FOSSIL_USER", "SQLITE_TMPDIR", "TMPDIR", "TEMP", "TMP", "FOSSIL_VFS", "FOSSIL_FORCE_TICKET_MODERATION", "FOSSIL_FORCE_WIKI_MODERATION", "FOSSIL_TCL_PATH", "TH1_DELETE_INTERP", "TH1_ENABLE_DOCS", "TH1_ENABLE_HOOKS", "TH1_ENABLE_TCL", "REMOTE_HOST", | < | 1765 1766 1767 1768 1769 1770 1771 1772 1773 1774 1775 1776 1777 1778 | "REQUEST_URI", "SCRIPT_FILENAME", "SCRIPT_NAME", "SERVER_NAME", "SERVER_PROTOCOL", "HOME", "FOSSIL_HOME", "USERNAME", "USER", "FOSSIL_USER", "SQLITE_TMPDIR", "TMPDIR", "TEMP", "TMP", "FOSSIL_VFS", "FOSSIL_FORCE_TICKET_MODERATION", "FOSSIL_FORCE_WIKI_MODERATION", "FOSSIL_TCL_PATH", "TH1_DELETE_INTERP", "TH1_ENABLE_DOCS", "TH1_ENABLE_HOOKS", "TH1_ENABLE_TCL", "REMOTE_HOST", }; int i; for(i=0; i<count(azCgiVars); i++) (void)P(azCgiVars[i]); } /* ** Print all query parameters on standard output. |
︙ | ︙ | |||
1819 1820 1821 1822 1823 1824 1825 | break; } case 2: { cgi_debug("%s = %s\n", zName, zValue); break; } case 3: { | | | 1809 1810 1811 1812 1813 1814 1815 1816 1817 1818 1819 1820 1821 1822 1823 | break; } case 2: { cgi_debug("%s = %s\n", zName, zValue); break; } case 3: { if( strlen(zValue)>100 ){ fprintf(out,"%s = %.100s...\n", zName, zValue); }else{ fprintf(out,"%s = %s\n", zName, zValue); } break; } } |
︙ | ︙ | |||
1919 1920 1921 1922 1923 1924 1925 | vxprintf(pContent,zFormat,ap); } /* ** Send a reply indicating that the HTTP request was malformed */ | | < < < < < | < < < < | < | < | | 1909 1910 1911 1912 1913 1914 1915 1916 1917 1918 1919 1920 1921 1922 1923 1924 1925 1926 1927 | vxprintf(pContent,zFormat,ap); } /* ** Send a reply indicating that the HTTP request was malformed */ static NORETURN void malformed_request(const char *zMsg){ cgi_set_status(501, "Not Implemented"); cgi_printf( "<html><body><p>Bad Request: %s</p></body></html>\n", zMsg ); cgi_reply(); fossil_exit(0); } /* ** Panic and die while processing a webpage. */ |
︙ | ︙ | |||
2051 2052 2053 2054 2055 2056 2057 | */ void cgi_handle_http_request(const char *zIpAddr){ char *z, *zToken; int i; const char *zScheme = "http"; char zLine[2000]; /* A single line of input. */ g.fullHttpReply = 1; | < | | < | | 2030 2031 2032 2033 2034 2035 2036 2037 2038 2039 2040 2041 2042 2043 2044 2045 2046 2047 2048 2049 2050 2051 2052 2053 2054 2055 2056 2057 2058 2059 2060 2061 2062 2063 | */ void cgi_handle_http_request(const char *zIpAddr){ char *z, *zToken; int i; const char *zScheme = "http"; char zLine[2000]; /* A single line of input. */ g.fullHttpReply = 1; if( cgi_fgets(zLine, sizeof(zLine))==0 ){ malformed_request("missing HTTP header"); } blob_append(&g.httpHeader, zLine, -1); cgi_trace(zLine); zToken = extract_token(zLine, &z); if( zToken==0 ){ malformed_request("malformed HTTP header"); } if( fossil_strcmp(zToken,"GET")!=0 && fossil_strcmp(zToken,"POST")!=0 && fossil_strcmp(zToken,"HEAD")!=0 ){ malformed_request("unsupported HTTP method"); } cgi_setenv("GATEWAY_INTERFACE","CGI/1.0"); cgi_setenv("REQUEST_METHOD",zToken); zToken = extract_token(z, &z); if( zToken==0 ){ malformed_request("malformed URL in HTTP header"); } cgi_setenv("REQUEST_URI", zToken); cgi_setenv("SCRIPT_NAME", ""); for(i=0; zToken[i] && zToken[i]!='?'; i++){} if( zToken[i] ) zToken[i++] = 0; cgi_setenv("PATH_INFO", zToken); cgi_setenv("QUERY_STRING", &zToken[i]); |
︙ | ︙ | |||
2184 2185 2186 2187 2188 2189 2190 | if( nCycles==0 ){ cgi_setenv("REMOTE_ADDR", zIpAddr); g.zIpAddr = fossil_strdup(zIpAddr); } }else{ fossil_fatal("missing SSH IP address"); } | < | 2161 2162 2163 2164 2165 2166 2167 2168 2169 2170 2171 2172 2173 2174 | if( nCycles==0 ){ cgi_setenv("REMOTE_ADDR", zIpAddr); g.zIpAddr = fossil_strdup(zIpAddr); } }else{ fossil_fatal("missing SSH IP address"); } if( fgets(zLine, sizeof(zLine),g.httpIn)==0 ){ malformed_request("missing HTTP header"); } cgi_trace(zLine); zToken = extract_token(zLine, &z); if( zToken==0 ){ malformed_request("malformed HTTP header"); |
︙ | ︙ | |||
2796 2797 2798 2799 2800 2801 2802 | ** implementation as possible, ideally just before it begins doing ** potentially CPU-intensive computations and after all query parameters ** have been consulted. */ void cgi_check_for_malice(void){ struct QParam * pParam; int i; | | | < < | < | 2772 2773 2774 2775 2776 2777 2778 2779 2780 2781 2782 2783 2784 2785 2786 | ** implementation as possible, ideally just before it begins doing ** potentially CPU-intensive computations and after all query parameters ** have been consulted. */ void cgi_check_for_malice(void){ struct QParam * pParam; int i; for(i = 0; i < nUsedQP; ++i){ pParam = &aParamQP[i]; if(0 == pParam->isFetched && fossil_islower(pParam->zName[0])){ cgi_value_spider_check(pParam->zValue, pParam->zName); } } } |
Changes to src/chat.c.
︙ | ︙ | |||
32 33 34 35 36 37 38 | ** * Chat content lives in a single repository. It is never synced. ** Content expires and is deleted after a set interval (a week or so). ** ** Notification is accomplished using the "hanging GET" or "long poll" design ** in which a GET request is issued but the server does not send a reply until ** new content arrives. Newer Web Sockets and Server Sent Event protocols are ** more elegant, but are not compatible with CGI, and would thus complicate | | | 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 | ** * Chat content lives in a single repository. It is never synced. ** Content expires and is deleted after a set interval (a week or so). ** ** Notification is accomplished using the "hanging GET" or "long poll" design ** in which a GET request is issued but the server does not send a reply until ** new content arrives. Newer Web Sockets and Server Sent Event protocols are ** more elegant, but are not compatible with CGI, and would thus complicate ** configuration. */ #include "config.h" #include <assert.h> #include "chat.h" /* ** Outputs JS code to initialize a list of chat alert audio files for |
︙ | ︙ | |||
317 318 319 320 321 322 323 | " ORDER BY msgid LIMIT 1"); if( rAge>mxDays ){ msgid = db_int(0, "SELECT msgid FROM chat" " ORDER BY msgid DESC LIMIT 1 OFFSET %d", mxCnt); if( msgid>0 ){ Stmt s; db_multi_exec("PRAGMA secure_delete=ON;"); | | | 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 | " ORDER BY msgid LIMIT 1"); if( rAge>mxDays ){ msgid = db_int(0, "SELECT msgid FROM chat" " ORDER BY msgid DESC LIMIT 1 OFFSET %d", mxCnt); if( msgid>0 ){ Stmt s; db_multi_exec("PRAGMA secure_delete=ON;"); db_prepare(&s, "DELETE FROM chat WHERE mtime<julianday('now')-:mxage" " AND msgid<%d", msgid); db_bind_double(&s, ":mxage", mxDays); db_step(&s); db_finalize(&s); } } |
︙ | ︙ | |||
691 692 693 694 695 696 697 | } sqlite3_sleep(iDelay); nDelay--; } } /* Exit by "break" */ db_finalize(&q1); blob_append(&json, "\n]}", 3); cgi_set_content(&json); | | | 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 | } sqlite3_sleep(iDelay); nDelay--; } } /* Exit by "break" */ db_finalize(&q1); blob_append(&json, "\n]}", 3); cgi_set_content(&json); return; } /* ** WEBPAGE: chat-fetch-one hidden loadavg-exempt ** ** /chat-fetch-one/N ** |
︙ | ︙ | |||
724 725 726 727 728 729 730 | if( !g.perm.Chat ) { chat_emit_permissions_error(0); return; } zChatUser = db_get("chat-timeline-user",0); chat_create_tables(); cgi_set_content_type("application/json"); | | | 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 | if( !g.perm.Chat ) { chat_emit_permissions_error(0); return; } zChatUser = db_get("chat-timeline-user",0); chat_create_tables(); cgi_set_content_type("application/json"); db_prepare(&q, "SELECT datetime(mtime), xfrom, xmsg, octet_length(file)," " fname, fmime, lmtime" " FROM chat WHERE msgid=%d AND mdel IS NULL", msgid); if(SQLITE_ROW==db_step(&q)){ const char *zDate = db_column_text(&q, 0); const char *zFrom = db_column_text(&q, 1); |
︙ | ︙ | |||
767 768 769 770 771 772 773 | fossil_free(zMsg); } if( nByte==0 ){ blob_appendf(&json, "\"fsize\":0"); }else{ blob_appendf(&json, "\"fsize\":%d,\"fname\":%!j,\"fmime\":%!j", nByte, zFName, zFMime); | | | 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 | fossil_free(zMsg); } if( nByte==0 ){ blob_appendf(&json, "\"fsize\":0"); }else{ blob_appendf(&json, "\"fsize\":%d,\"fname\":%!j,\"fmime\":%!j", nByte, zFName, zFMime); } blob_append(&json,"}",1); cgi_set_content(&json); }else{ ajax_route_error(404,"Chat message #%d not found.", msgid); } db_finalize(&q); } |
︙ | ︙ | |||
955 956 957 958 959 960 961 | sqlite3_value **argv ){ const char *zType = (const char*)sqlite3_value_text(argv[0]); int rid = sqlite3_value_int(argv[1]); const char *zUser = (const char*)sqlite3_value_text(argv[2]); const char *zMsg = (const char*)sqlite3_value_text(argv[3]); char *zRes = 0; | | | 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 | sqlite3_value **argv ){ const char *zType = (const char*)sqlite3_value_text(argv[0]); int rid = sqlite3_value_int(argv[1]); const char *zUser = (const char*)sqlite3_value_text(argv[2]); const char *zMsg = (const char*)sqlite3_value_text(argv[3]); char *zRes = 0; if( zType==0 || zUser==0 || zMsg==0 ) return; if( zType[0]=='c' ){ /* Check-ins */ char *zBranch; char *zUuid; zBranch = db_text(0, |
︙ | ︙
1215 1216 1217 1218 1219 1220 1221 | blob_appendf(&reqUri, "/chat-backup?msgid=%d", msgid); if( g.url.user && g.url.user[0] ){ zObs = obscure(g.url.user); blob_appendf(&reqUri, "&resid=%t", zObs); fossil_free(zObs); } zPw = g.url.passwd; | | < < < < < < < < | 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 | blob_appendf(&reqUri, "/chat-backup?msgid=%d", msgid); if( g.url.user && g.url.user[0] ){ zObs = obscure(g.url.user); blob_appendf(&reqUri, "&resid=%t", zObs); fossil_free(zObs); } zPw = g.url.passwd; if( zPw==0 && isDefaultUrl ) zPw = unobscure(db_get("last-sync-pw", 0)); if( zPw && zPw[0] ){ zObs = obscure(zPw); blob_appendf(&reqUri, "&token=%t", zObs); fossil_free(zObs); } g.url.path = blob_str(&reqUri); if( bDebug ){ |
︙ | ︙
Changes to src/checkin.c.
︙ | ︙
1363 1364 1365 1366 1367 1368 1369 | "#\n%.78c\n" "# The following diff is excluded from the commit message:\n#\n", '#' ); diff_options(&DCfg, 0, 1); DCfg.diffFlags |= DIFF_VERBOSE; if( g.aCommitFile ){ | < < < < < < < < | < < < < < < < > | 1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380 1381 1382 1383 | "#\n%.78c\n" "# The following diff is excluded from the commit message:\n#\n", '#' ); diff_options(&DCfg, 0, 1); DCfg.diffFlags |= DIFF_VERBOSE; if( g.aCommitFile ){ FileDirList *diffFiles; int i; for(i=0; g.aCommitFile[i]!=0; ++i){} diffFiles = fossil_malloc_zero((i+1) * sizeof(*diffFiles)); for(i=0; g.aCommitFile[i]!=0; ++i){ diffFiles[i].zName = db_text(0, "SELECT pathname FROM vfile WHERE id=%d", g.aCommitFile[i]); if( fossil_strcmp(diffFiles[i].zName, "." )==0 ){ diffFiles[0].zName[0] = '.'; diffFiles[0].zName[1] = 0; break; } diffFiles[i].nName = strlen(diffFiles[i].zName); diffFiles[i].nUsed = 0; |
︙ | ︙
2533 2534 2535 2536 2537 2538 2539 | "use --override-lock", g.ckinLockFail); }else{ fossil_fatal("Would fork. \"update\" first or use --branch or " "--allow-fork."); } } | | | | 2519 2520 2521 2522 2523 2524 2525 2526 2527 2528 2529 2530 2531 2532 2533 2534 2535 2536 2537 2538 2539 2540 2541 2542 2543 2544 2545 2546 2547 2548 2549 2550 2551 2552 2553 2554 | "use --override-lock", g.ckinLockFail); }else{ fossil_fatal("Would fork. \"update\" first or use --branch or " "--allow-fork."); } } /* ** Do not allow a commit against a closed leaf unless the commit ** ends up on a different branch. */ if( /* parent check-in has the "closed" tag... */ leaf_is_closed(vid) /* ... and the new check-in has no --branch option or the --branch ** option does not actually change the branch */ && (sCiInfo.zBranch==0 || db_exists("SELECT 1 FROM tagxref" " WHERE tagid=%d AND rid=%d AND tagtype>0" " AND value=%Q", TAG_BRANCH, vid, sCiInfo.zBranch)) ){ fossil_fatal("cannot commit against a closed leaf"); } /* Always exit the loop on the second pass */ if( bRecheck ) break; /* Get the check-in comment. This might involve prompting the ** user for the check-in comment, in which case we should resync ** to renew the check-in lock and repeat the checks for conflicts. */ if( zComment ){ blob_zero(&comment); blob_append(&comment, zComment, -1); |
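The closed-leaf guard in this hunk reduces to a small predicate: a commit against a closed leaf is refused unless a --branch option actually moves the new check-in to a different branch. A minimal Python sketch of that rule (function and argument names are hypothetical, not Fossil's):

```python
def commit_allowed(leaf_is_closed, branch_option, current_branch):
    """Mirror of the closed-leaf check: commits against a closed leaf
    are rejected unless --branch names a genuinely different branch."""
    if not leaf_is_closed:
        return True
    # --branch given, and it differs from the parent's branch: the new
    # check-in forks away from the closed leaf, so it is allowed.
    return branch_option is not None and branch_option != current_branch

# An open leaf accepts plain commits; a closed leaf only accepts
# commits that start a new branch.
assert commit_allowed(False, None, "trunk")
assert not commit_allowed(True, None, "trunk")
assert not commit_allowed(True, "trunk", "trunk")  # --branch trunk is a no-op
assert commit_allowed(True, "fix-1", "trunk")
```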
︙ | ︙
Changes to src/clone.c.
︙ | ︙
134 135 136 137 138 139 140 | ** --save-http-password Remember the HTTP password without asking ** -c|--ssh-command SSH Use SSH as the "ssh" command ** --ssl-identity FILENAME Use the SSL identity if requested by the server ** --transport-command CMD Use CMD to move messages to the server and back ** -u|--unversioned Also sync unversioned content ** -v|--verbose Show more statistics in output ** --workdir DIR Also open a check-out in DIR | < | 134 135 136 137 138 139 140 141 142 143 144 145 146 147 | ** --save-http-password Remember the HTTP password without asking ** -c|--ssh-command SSH Use SSH as the "ssh" command ** --ssl-identity FILENAME Use the SSL identity if requested by the server ** --transport-command CMD Use CMD to move messages to the server and back ** -u|--unversioned Also sync unversioned content ** -v|--verbose Show more statistics in output ** --workdir DIR Also open a check-out in DIR ** ** See also: [[init]], [[open]] */ void clone_cmd(void){ char *zPassword; const char *zDefaultUser; /* Optional name of the default user */ const char *zHttpAuth; /* HTTP Authorization user:pass information */ |
︙ | ︙
160 161 162 163 164 165 166 | if( find_option("private",0,0)!=0 ) syncFlags |= SYNC_PRIVATE; if( find_option("once",0,0)!=0) urlFlags &= ~URL_REMEMBER; if( find_option("save-http-password",0,0)!=0 ){ urlFlags &= ~URL_PROMPT_PW; urlFlags |= URL_REMEMBER_PW; } if( find_option("verbose","v",0)!=0) syncFlags |= SYNC_VERBOSE; | < | 159 160 161 162 163 164 165 166 167 168 169 170 171 172 | if( find_option("private",0,0)!=0 ) syncFlags |= SYNC_PRIVATE; if( find_option("once",0,0)!=0) urlFlags &= ~URL_REMEMBER; if( find_option("save-http-password",0,0)!=0 ){ urlFlags &= ~URL_PROMPT_PW; urlFlags |= URL_REMEMBER_PW; } if( find_option("verbose","v",0)!=0) syncFlags |= SYNC_VERBOSE; if( find_option("unversioned","u",0)!=0 ){ syncFlags |= SYNC_UNVERSIONED; if( syncFlags & SYNC_VERBOSE ){ syncFlags |= SYNC_UV_TRACE; } } zHttpAuth = find_option("httpauth","B",1); |
︙ | ︙
196 197 198 199 200 201 202 | g.argv[2]); } zRepo = mprintf("./%s.fossil", zBase); if( zWorkDir==0 ){ zWorkDir = mprintf("./%s", zBase); } fossil_free(zBase); | | | 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 | g.argv[2]); } zRepo = mprintf("./%s.fossil", zBase); if( zWorkDir==0 ){ zWorkDir = mprintf("./%s", zBase); } fossil_free(zBase); } if( -1 != file_size(zRepo, ExtFILE) ){ fossil_fatal("file already exists: %s", zRepo); } /* Fail before clone if open will fail because inside an open check-out */ if( zWorkDir!=0 && zWorkDir[0]!=0 && !noOpen ){ if( db_open_local_v2(0, allowNested) ){ fossil_fatal("there is already an open tree at %s", g.zLocalRoot); |
︙ | ︙
260 261 262 263 264 265 266 | "DELETE FROM config WHERE name='project-code';" ); db_protect_pop(); url_enable_proxy(0); clone_ssh_db_set_options(); url_get_password_if_needed(); g.xlinkClusterOnly = 1; | | < < < < < < < | < < < | 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 | "DELETE FROM config WHERE name='project-code';" ); db_protect_pop(); url_enable_proxy(0); clone_ssh_db_set_options(); url_get_password_if_needed(); g.xlinkClusterOnly = 1; nErr = client_sync(syncFlags,CONFIGSET_ALL,0,0); g.xlinkClusterOnly = 0; verify_cancel(); db_end_transaction(0); db_close(1); if( nErr ){ file_delete(zRepo); fossil_fatal("server returned an error - clone aborted"); } db_open_repository(zRepo); } db_begin_transaction(); if( db_exists("SELECT 1 FROM delta WHERE srcId IN phantom") ){ fossil_fatal("there are unresolved deltas -" " the clone is probably incomplete and unusable."); |
︙ | ︙
Changes to src/comformat.c.
︙ | ︙
272 273 274 275 276 277 278 | if( maxChars<useChars ){ zBuf[iBuf++] = ' '; break; } }else if( wordBreak && fossil_isspace(c) ){ int distUTF8; int nextIndex = comment_next_space(zLine, index, &distUTF8); | | | 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 | if( maxChars<useChars ){ zBuf[iBuf++] = ' '; break; } }else if( wordBreak && fossil_isspace(c) ){ int distUTF8; int nextIndex = comment_next_space(zLine, index, &distUTF8); if( nextIndex<=0 || distUTF8>maxChars ){ break; } charCnt++; }else{ charCnt++; } assert( c!='\n' || charCnt==0 ); |
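The loop in this hunk does greedy word wrapping: at each space it peeks at the distance to the next space and breaks the line early when the upcoming word would overflow the limit. A simplified Python sketch of the same idea (not Fossil's actual comment formatter, which also tracks UTF-8 widths):

```python
def wrap_line(text, max_chars):
    """Greedy word wrap: break before a word that would push the
    current line past max_chars."""
    lines, line = [], ""
    for word in text.split():
        if line and len(line) + 1 + len(word) > max_chars:
            lines.append(line)  # upcoming word will not fit; break here
            line = word
        else:
            line = f"{line} {word}" if line else word
    if line:
        lines.append(line)
    return lines

wrapped = wrap_line("the quick brown fox jumps over the lazy dog", 10)
assert all(len(l) <= 10 for l in wrapped)
assert " ".join(wrapped) == "the quick brown fox jumps over the lazy dog"
```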
︙ | ︙
Changes to src/configure.c.
︙ | ︙
91 92 93 94 95 96 97 | } aConfig[] = { { "css", CONFIGSET_CSS }, { "header", CONFIGSET_SKIN }, { "mainmenu", CONFIGSET_SKIN }, { "footer", CONFIGSET_SKIN }, { "details", CONFIGSET_SKIN }, { "js", CONFIGSET_SKIN }, | < | 91 92 93 94 95 96 97 98 99 100 101 102 103 104 | } aConfig[] = { { "css", CONFIGSET_CSS }, { "header", CONFIGSET_SKIN }, { "mainmenu", CONFIGSET_SKIN }, { "footer", CONFIGSET_SKIN }, { "details", CONFIGSET_SKIN }, { "js", CONFIGSET_SKIN }, { "logo-mimetype", CONFIGSET_SKIN }, { "logo-image", CONFIGSET_SKIN }, { "background-mimetype", CONFIGSET_SKIN }, { "background-image", CONFIGSET_SKIN }, { "icon-mimetype", CONFIGSET_SKIN }, { "icon-image", CONFIGSET_SKIN }, { "timeline-block-markup", CONFIGSET_SKIN }, |
︙ | ︙
869 870 871 872 873 874 875 | } url_parse(zServer, URL_PROMPT_PW|URL_USE_CONFIG); if( g.url.protocol==0 ) fossil_fatal("no server URL specified"); user_select(); url_enable_proxy("via proxy: "); if( overwriteFlag ) mask |= CONFIGSET_OVERWRITE; if( strncmp(zMethod, "push", n)==0 ){ | | | | | 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 | } url_parse(zServer, URL_PROMPT_PW|URL_USE_CONFIG); if( g.url.protocol==0 ) fossil_fatal("no server URL specified"); user_select(); url_enable_proxy("via proxy: "); if( overwriteFlag ) mask |= CONFIGSET_OVERWRITE; if( strncmp(zMethod, "push", n)==0 ){ client_sync(0,0,(unsigned)mask,0); }else if( strncmp(zMethod, "pull", n)==0 ){ if( overwriteFlag ) db_unprotect(PROTECT_USER); client_sync(0,(unsigned)mask,0,0); if( overwriteFlag ) db_protect_pop(); }else{ client_sync(0,(unsigned)mask,(unsigned)mask,0); } }else if( strncmp(zMethod, "reset", n)==0 ){ int mask, i; char *zBackup; if( g.argc!=4 ) usage("reset AREA"); mask = configure_name_to_mask(g.argv[3], 1); |
︙ | ︙
Changes to src/cookies.c.
︙ | ︙
211 212 213 214 215 216 217 | assert( zPName!=0 ); cookie_parse(); for(i=0; i<cookies.nParam && strcmp(zPName,cookies.aParam[i].zPName); i++){} return i<cookies.nParam ? cookies.aParam[i].zPValue : zDefault; } /* | < < < < < < | < < < < | < < | | < < | < < < | 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 | assert( zPName!=0 ); cookie_parse(); for(i=0; i<cookies.nParam && strcmp(zPName,cookies.aParam[i].zPName); i++){} return i<cookies.nParam ? cookies.aParam[i].zPValue : zDefault; } /* ** WEBPAGE: cookies ** ** Show the current display settings contained in the ** "fossil_display_settings" cookie. */ void cookie_page(void){ int i; int nCookie = 0; const char *zName = 0; const char *zValue = 0; int isQP = 0; cookie_parse(); style_header("Cookies"); @ <form method="POST"> @ <ol> for(i=0; cgi_param_info(i, &zName, &zValue, &isQP); i++){ char *zDel; if( isQP ) continue; if( fossil_isupper(zName[0]) ) continue; zDel = mprintf("del%s",zName); if( P(zDel)!=0 ){ cgi_set_cookie(zName, "", 0, -1); cgi_redirect("cookies"); } nCookie++; @ <li><p><b>%h(zName)</b>: %h(zValue) @ <input type="submit" name="%h(zDel)" value="Delete"> if( fossil_strcmp(zName, DISPLAY_SETTINGS_COOKIE)==0 && cookies.nParam>0 ){ int j; @ <ul> for(j=0; j<cookies.nParam; j++){ @ <li>%h(cookies.aParam[j].zPName): "%h(cookies.aParam[j].zPValue)" } @ </ul> } fossil_free(zDel); } @ </ol> @ </form> if( nCookie==0 ){ @ <p><i>No cookies for this website</i></p> } style_finish_page(); } |
Changes to src/db.c.
︙ | ︙
170 171 172 173 174 175 176 | void *pAuthArg; /* Argument to the authorizer */ const char *zAuthName; /* Name of the authorizer */ int bProtectTriggers; /* True if protection triggers already exist */ int nProtect; /* Slots of aProtect used */ unsigned aProtect[12]; /* Saved values of protectMask */ } db = { PROTECT_USER|PROTECT_CONFIG|PROTECT_BASELINE, /* protectMask */ | | | 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 | void *pAuthArg; /* Argument to the authorizer */ const char *zAuthName; /* Name of the authorizer */ int bProtectTriggers; /* True if protection triggers already exist */ int nProtect; /* Slots of aProtect used */ unsigned aProtect[12]; /* Saved values of protectMask */ } db = { PROTECT_USER|PROTECT_CONFIG|PROTECT_BASELINE, /* protectMask */ 0, 0, 0, 0, 0, 0, }; /* ** Arrange for the given file to be deleted on a failure. */ void db_delete_on_failure(const char *zFilename){ assert( db.nDeleteOnFail<count(db.azDeleteOnFail) ); if( zFilename==0 ) return; |
︙ | ︙
455 456 457 458 459 460 461 | ** be compromised by an attack. */ void db_protect_only(unsigned flags){ if( db.nProtect>=count(db.aProtect)-2 ){ fossil_panic("too many db_protect() calls"); } db.aProtect[db.nProtect++] = db.protectMask; | | | 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 | ** be compromised by an attack. */ void db_protect_only(unsigned flags){ if( db.nProtect>=count(db.aProtect)-2 ){ fossil_panic("too many db_protect() calls"); } db.aProtect[db.nProtect++] = db.protectMask; if( (flags & PROTECT_SENSITIVE)!=0 && db.bProtectTriggers==0 && g.repositoryOpen ){ /* Create the triggers needed to protect sensitive settings from ** being created or modified the first time that PROTECT_SENSITIVE ** is enabled. Deleting a sensitive setting is harmless, so there ** is not trigger to block deletes. After being created once, the |
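db_protect_only() in this hunk saves the current protection bitmask on a small fixed-size stack before replacing it, so a later db_protect_pop() can restore the caller's protection level. A toy Python model of that push/replace/pop pattern (class and constant names are illustrative, not Fossil's):

```python
class ProtectStack:
    """Toy model of db_protect_only()/db_protect_pop(): the current
    bitmask of protected categories is pushed before being replaced,
    so nested callers can restore the previous protection level."""
    PROTECT_USER, PROTECT_CONFIG, PROTECT_SENSITIVE = 0x01, 0x02, 0x04

    def __init__(self, initial_mask):
        self.mask = initial_mask
        self.stack = []

    def protect_only(self, flags):
        self.stack.append(self.mask)  # save the caller's mask
        self.mask = flags             # protect only the given categories

    def protect_pop(self):
        self.mask = self.stack.pop()  # restore the saved mask

db = ProtectStack(ProtectStack.PROTECT_USER | ProtectStack.PROTECT_CONFIG)
db.protect_only(ProtectStack.PROTECT_SENSITIVE)
assert db.mask == ProtectStack.PROTECT_SENSITIVE
db.protect_pop()
assert db.mask == ProtectStack.PROTECT_USER | ProtectStack.PROTECT_CONFIG
```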
︙ | ︙
1555 1556 1557 1558 1559 1560 1561 | sqlite3_create_function(db, "protected_setting", 1, SQLITE_UTF8, 0, db_protected_setting_func, 0, 0); sqlite3_create_function(db, "win_reserved", 1, SQLITE_UTF8, 0, db_win_reserved_func,0,0); sqlite3_create_function(db, "url_nouser", 1, SQLITE_UTF8, 0, url_nouser_func,0,0); sqlite3_create_function(db, "chat_msg_from_event", 4, | | | | 1555 1556 1557 1558 1559 1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 | sqlite3_create_function(db, "protected_setting", 1, SQLITE_UTF8, 0, db_protected_setting_func, 0, 0); sqlite3_create_function(db, "win_reserved", 1, SQLITE_UTF8, 0, db_win_reserved_func,0,0); sqlite3_create_function(db, "url_nouser", 1, SQLITE_UTF8, 0, url_nouser_func,0,0); sqlite3_create_function(db, "chat_msg_from_event", 4, SQLITE_UTF8 | SQLITE_INNOCUOUS, 0, chat_msg_from_event, 0, 0); } #if USE_SEE /* ** This is a pointer to the saved database encryption key string. */ static char *zSavedKey = 0; |
︙ | ︙
2487 2488 2489 2490 2491 2492 2493 | db_multi_exec("ALTER TABLE undo ADD COLUMN isLink BOOLEAN DEFAULT 0"); } if( db_local_table_exists_but_lacks_column("undo_vfile", "islink") ){ db_multi_exec("ALTER TABLE undo_vfile ADD COLUMN islink BOOL DEFAULT 0"); } } | | | 2487 2488 2489 2490 2491 2492 2493 2494 2495 2496 2497 2498 2499 2500 2501 | db_multi_exec("ALTER TABLE undo ADD COLUMN isLink BOOLEAN DEFAULT 0"); } if( db_local_table_exists_but_lacks_column("undo_vfile", "islink") ){ db_multi_exec("ALTER TABLE undo_vfile ADD COLUMN islink BOOL DEFAULT 0"); } } /* The design of the check-out database changed on 2019-01-19, adding the mhash ** column to vfile and vmerge and changing the UNIQUE index on vmerge into ** a PRIMARY KEY that includes the new mhash column. However, we must have ** the repository database at hand in order to do the migration, so that ** step is deferred. */ return 1; } |
︙ | ︙
2604 2605 2606 2607 2608 2609 2610 | sqlite3_stmt *pStmt = 0; sz = file_size(zDbName, ExtFILE); if( sz<16834 ) return 0; db = db_open(zDbName); if( !db ) return 0; if( !g.zVfsName && sz%512 ) return 0; | | | 2604 2605 2606 2607 2608 2609 2610 2611 2612 2613 2614 2615 2616 2617 2618 | sqlite3_stmt *pStmt = 0; sz = file_size(zDbName, ExtFILE); if( sz<16834 ) return 0; db = db_open(zDbName); if( !db ) return 0; if( !g.zVfsName && sz%512 ) return 0; rc = sqlite3_prepare_v2(db, "SELECT count(*) FROM sqlite_schema" " WHERE name COLLATE nocase IN" "('blob','delta','rcvfrom','user','config','mlink','plink');", -1, &pStmt, 0); if( rc ) goto is_repo_end; rc = sqlite3_step(pStmt); if( rc!=SQLITE_ROW ) goto is_repo_end; |
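The query in this hunk probes the schema for Fossil's core tables, case-insensitively, to decide whether a file is a repository. A minimal Python sketch of just that probe (real Fossil also checks the file size and page alignment first; `sqlite_master` is used here as the older, more portable alias of `sqlite_schema`):

```python
import sqlite3

def fossil_core_table_count(conn):
    """Count how many of Fossil's core tables exist in the database,
    comparing names case-insensitively as the C code does."""
    row = conn.execute(
        "SELECT count(*) FROM sqlite_master"
        " WHERE name COLLATE nocase IN"
        " ('blob','delta','rcvfrom','user','config','mlink','plink')"
    ).fetchone()
    return row[0]

conn = sqlite3.connect(":memory:")
assert fossil_core_table_count(conn) == 0
for t in ("blob", "delta", "rcvfrom", "user", "config", "mlink", "plink"):
    conn.execute(f'CREATE TABLE "{t}" (x)')
assert fossil_core_table_count(conn) == 7
```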
︙ | ︙
3714 3715 3716 3717 3718 3719 3720 | z = fossil_strdup(pSetting->def); }else{ z = fossil_strdup(zDefault); } } return z; } | | < | 3714 3715 3716 3717 3718 3719 3720 3721 3722 3723 3724 3725 3726 3727 3728 | z = fossil_strdup(pSetting->def); }else{ z = fossil_strdup(zDefault); } } return z; } char *db_get_mtime(const char *zName, const char *zFormat, const char *zDefault){ char *z = 0; if( g.repositoryOpen ){ z = db_text(0, "SELECT mtime FROM config WHERE name=%Q", zName); } if( z==0 ){ z = fossil_strdup(zDefault); }else if( zFormat!=0 ){ |
︙ | ︙
4020 4021 4022 4023 4024 4025 4026 | if( !g.localOpen ) return; zName = db_repository_filename(); } file_canonical_name(zName, &full, 0); (void)filename_collation(); /* Initialize before connection swap */ db_swap_connections(); zRepoSetting = mprintf("repo:%q", blob_str(&full)); | | | 4019 4020 4021 4022 4023 4024 4025 4026 4027 4028 4029 4030 4031 4032 4033 | if( !g.localOpen ) return; zName = db_repository_filename(); } file_canonical_name(zName, &full, 0); (void)filename_collation(); /* Initialize before connection swap */ db_swap_connections(); zRepoSetting = mprintf("repo:%q", blob_str(&full)); db_unprotect(PROTECT_CONFIG); db_multi_exec( "DELETE FROM global_config WHERE name %s = %Q;", filename_collation(), zRepoSetting ); db_multi_exec( "INSERT OR IGNORE INTO global_config(name,value)" |
︙ | ︙
4097 4098 4099 4100 4101 4102 4103 | ** "new-name.fossil". ** ** Options: ** --empty Initialize check-out as being empty, but still connected ** with the local repository. If you commit this check-out, ** it will become a new "initial" commit in the repository. ** -f|--force Continue with the open even if the working directory is | | > > | 4096 4097 4098 4099 4100 4101 4102 4103 4104 4105 4106 4107 4108 4109 4110 4111 4112 4113 4114 4115 4116 4117 4118 4119 4120 4121 4122 | ** "new-name.fossil". ** ** Options: ** --empty Initialize check-out as being empty, but still connected ** with the local repository. If you commit this check-out, ** it will become a new "initial" commit in the repository. ** -f|--force Continue with the open even if the working directory is ** not empty ** --force-missing Force opening a repository with missing content ** -k|--keep Only modify the manifest file(s) ** --nested Allow opening a repository inside an opened check-out ** --nosync Do not auto-sync the repository prior to opening even ** if the autosync setting is on. ** --repodir DIR If REPOSITORY is a URI that will be cloned, store ** the clone in DIR rather than in "." ** --setmtime Set timestamps of all files to match their SCM-side ** times (the timestamp of the last check-in which modified ** them). ** --sync Auto-sync prior to opening even if the autosync setting ** is off ** --verbose If passed a URI then this flag is passed on to the clone ** operation, otherwise it has no effect ** --workdir DIR Use DIR as the working directory instead of ".". The DIR ** directory is created if it does not exist. ** ** See also: [[close]], [[clone]] */ |
︙ | ︙
4184 4185 4186 4187 4188 4189 4190 | if( keepFlag==0 && bForce==0 && (nLocal = file_directory_size(".", 0, 1))>0 && (nLocal>1 || isUri || !file_in_cwd(zRepo)) ){ fossil_fatal("directory %s is not empty\n" "use the -f (--force) option to override\n" | | | 4185 4186 4187 4188 4189 4190 4191 4192 4193 4194 4195 4196 4197 4198 4199 | if( keepFlag==0 && bForce==0 && (nLocal = file_directory_size(".", 0, 1))>0 && (nLocal>1 || isUri || !file_in_cwd(zRepo)) ){ fossil_fatal("directory %s is not empty\n" "use the -f (--force) option to override\n" "or the -k (--keep) option to keep local files unchanged", file_getcwd(0,0)); } if( db_open_local_v2(0, allowNested) ){ fossil_fatal("there is already an open tree at %s", g.zLocalRoot); } |
︙ | ︙
4390 4391 4392 4393 4394 4395 4396 | ** ** When the admin-log setting is enabled, configuration changes are recorded ** in the "admin_log" table of the repository. */ /* ** SETTING: allow-symlinks boolean default=off sensitive ** | | | 4391 4392 4393 4394 4395 4396 4397 4398 4399 4400 4401 4402 4403 4404 4405 | ** ** When the admin-log setting is enabled, configuration changes are recorded ** in the "admin_log" table of the repository. */ /* ** SETTING: allow-symlinks boolean default=off sensitive ** ** When allow-symlinks is OFF, Fossil does not see symbolic links ** (a.k.a "symlinks") on disk as a separate class of object. Instead Fossil ** sees the object that the symlink points to. Fossil will only manage files ** and directories, not symlinks. When a symlink is added to a repository, ** the object that the symlink points to is added, not the symlink itself. ** ** When allow-symlinks is ON, Fossil sees symlinks on disk as a separate ** object class that is distinct from files and directories. When a symlink |
︙ | ︙
4448 4449 4450 4451 4452 4453 4454 | ** When the auto-hyperlink setting is 1, the javascript that runs to set ** the href= attributes of hyperlinks delays by this many milliseconds ** after the page load. Suggested values: 50 to 200. */ /* ** SETTING: auto-hyperlink-mouseover boolean default=off ** | | | 4449 4450 4451 4452 4453 4454 4455 4456 4457 4458 4459 4460 4461 4462 4463 | ** When the auto-hyperlink setting is 1, the javascript that runs to set ** the href= attributes of hyperlinks delays by this many milliseconds ** after the page load. Suggested values: 50 to 200. */ /* ** SETTING: auto-hyperlink-mouseover boolean default=off ** ** When the auto-hyperlink setting is 1 and this setting is on, the ** javascript that runs to set the href= attributes of hyperlinks waits ** until either a mousedown or mousemove event is seen. This helps ** to distinguish real users from robots. For maximum robot defense, ** the recommended setting is ON. */ /* ** SETTING: auto-shun boolean default=on |
︙ | ︙
4483 4484 4485 4486 4487 4488 4489 | ** off,commit=pullonly Do not autosync, except do a pull before each ** "commit", presumably to avoid undesirable ** forks. ** ** The syntax is a comma-separated list of VALUE and COMMAND=VALUE entries. ** A plain VALUE entry is the default that is used if no COMMAND matches. ** Otherwise, the VALUE of the matching command is used. | < < < | 4484 4485 4486 4487 4488 4489 4490 4491 4492 4493 4494 4495 4496 4497 | ** off,commit=pullonly Do not autosync, except do a pull before each ** "commit", presumably to avoid undesirable ** forks. ** ** The syntax is a comma-separated list of VALUE and COMMAND=VALUE entries. ** A plain VALUE entry is the default that is used if no COMMAND matches. ** Otherwise, the VALUE of the matching command is used. */ /* ** SETTING: autosync-tries width=16 default=1 ** If autosync is enabled setting this to a value greater ** than zero will cause autosync to try no more than this ** number of attempts if there is a sync failure. */ |
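The autosync setting documented in this hunk uses a comma-separated list of VALUE and COMMAND=VALUE entries: a plain VALUE is the default, and a COMMAND=VALUE entry overrides it for that command. A small Python sketch of that lookup, under the stated syntax (not Fossil's actual parser):

```python
def autosync_value(setting, command):
    """Resolve the autosync value for a command from a setting string
    such as "off,commit=pullonly": plain entries set the default,
    COMMAND=VALUE entries override it for the named command."""
    default = None
    for entry in setting.split(","):
        entry = entry.strip()
        if "=" in entry:
            cmd, _, val = entry.partition("=")
            if cmd == command:
                return val  # command-specific override wins
        else:
            default = entry
    return default

assert autosync_value("off,commit=pullonly", "commit") == "pullonly"
assert autosync_value("off,commit=pullonly", "update") == "off"
assert autosync_value("on", "commit") == "on"
```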
︙ | ︙
4672 4673 4674 4675 4676 4677 4678 | ** Note that /fileedit cannot edit binary files, so the list should not ** contain any globs for, e.g., images or PDFs. */ /* ** SETTING: forbid-delta-manifests boolean default=off ** If enabled on a client, new delta manifests are prohibited on ** commits. If enabled on a server, whenever a client attempts | | | 4670 4671 4672 4673 4674 4675 4676 4677 4678 4679 4680 4681 4682 4683 4684 | ** Note that /fileedit cannot edit binary files, so the list should not ** contain any globs for, e.g., images or PDFs. */ /* ** SETTING: forbid-delta-manifests boolean default=off ** If enabled on a client, new delta manifests are prohibited on ** commits. If enabled on a server, whenever a client attempts ** to obtain a check-in lock during auto-sync, the server will ** send the "pragma avoid-delta-manifests" statement in its reply, ** which will cause the client to avoid generating a delta ** manifest. */ /* ** SETTING: forum-close-policy boolean default=off ** If true, forum moderators may close/re-open forum posts, and reply |
︙ | ︙
5013 5014 5015 5016 5017 5018 5019 | ** Defaults to "start" on windows, "open" on Mac, ** and "firefox" on Unix. */ /* ** SETTING: large-file-size width=10 default=200000000 ** Fossil considers any file whose size is greater than this value ** to be a "large file". Fossil might issue warnings if you try to | | | 5011 5012 5013 5014 5015 5016 5017 5018 5019 5020 5021 5022 5023 5024 5025 | ** Defaults to "start" on windows, "open" on Mac, ** and "firefox" on Unix. */ /* ** SETTING: large-file-size width=10 default=200000000 ** Fossil considers any file whose size is greater than this value ** to be a "large file". Fossil might issue warnings if you try to ** "add" or "commit" a "large file". Set this value to 0 or less ** to disable all such warnings. */ /* ** Look up a control setting by its name. Return a pointer to the Setting ** object, or NULL if there is no such setting. ** |
︙ | ︙
5238 5239 5240 5241 5242 5243 5244 | ** optimization. FILENAME can also be the configuration database file ** (~/.fossil or ~/.config/fossil.db) or a local .fslckout or _FOSSIL_ file. ** ** The purpose of this command is for testing the WITHOUT ROWID capabilities ** of SQLite. There is no big advantage to using WITHOUT ROWID in Fossil. ** ** Options: | | | 5236 5237 5238 5239 5240 5241 5242 5243 5244 5245 5246 5247 5248 5249 5250 | ** optimization. FILENAME can also be the configuration database file ** (~/.fossil or ~/.config/fossil.db) or a local .fslckout or _FOSSIL_ file. ** ** The purpose of this command is for testing the WITHOUT ROWID capabilities ** of SQLite. There is no big advantage to using WITHOUT ROWID in Fossil. ** ** Options: ** -n|--dry-run No changes. Just print what would happen. */ void test_without_rowid(void){ int i, j; Stmt q; Blob allSql; int dryRun = find_option("dry-run", "n", 0)!=0; for(i=2; i<g.argc; i++){ |
︙ | ︙
Changes to src/default.css.
︙ | ︙
494 495 496 497 498 499 500 501 502 503 504 505 506 507 | padding: 0; width: 125px; text-align: center; border-collapse: collapse; border-spacing: 0; } table.report { border: 1px solid #999; margin: 1em 0 1em 0; cursor: pointer; } td.rpteditex { border-width: thin; border-color: #000000; | > | 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 | padding: 0; width: 125px; text-align: center; border-collapse: collapse; border-spacing: 0; } table.report { border-collapse:collapse; border: 1px solid #999; margin: 1em 0 1em 0; cursor: pointer; } td.rpteditex { border-width: thin; border-color: #000000; |
︙ | ︙
579 580 581 582 583 584 585 | line-height: 1.275/*for mobile: forum post e6f4ee7de98b55c0*/; text-size-adjust: none /* ^^^ attempt to keep mobile from inflating some text */; } table.diff pre > ins, table.diff pre > del { /* Fill platform-dependent color gaps caused by | | | 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 | line-height: 1.275/*for mobile: forum post e6f4ee7de98b55c0*/; text-size-adjust: none /* ^^^ attempt to keep mobile from inflating some text */; } table.diff pre > ins, table.diff pre > del { /* Fill platform-dependent color gaps caused by inflated line-height */; padding: 0.062em 0 0.062em 0; } table.diff pre > ins > *, table.diff pre > del > *{ /* Avoid odd-looking color swatches in conjunction with (table.diff pre > ins/del) padding */ padding: inherit; |
︙ | ︙
615 616 617 618 619 620 621 622 623 624 625 626 627 628 | } tr.diffskip.jchunk:hover { /*background-color: rgba(127,127,127,0.5); cursor: pointer;*/ } tr.diffskip > td.chunkctrl { text-align: left; } tr.diffskip > td.chunkctrl > div { display: flex; align-items: center; } tr.diffskip > td.chunkctrl > div > span.error { padding: 0.25em 0.5em; | > | 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 | } tr.diffskip.jchunk:hover { /*background-color: rgba(127,127,127,0.5); cursor: pointer;*/ } tr.diffskip > td.chunkctrl { text-align: left; font-family: monospace; } tr.diffskip > td.chunkctrl > div { display: flex; align-items: center; } tr.diffskip > td.chunkctrl > div > span.error { padding: 0.25em 0.5em; |
︙ | ︙
1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 1303 | margin: 0; } .flex-container.child-gap-small > * { margin: 0.25em; } #fossil-status-bar { display: block; border-width: 1px; border-style: inset; border-color: inherit; min-height: 1.5em; font-size: 1.2em; padding: 0.2em; margin: 0.25em 0; | > | 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 1303 1304 1305 1306 | margin: 0; } .flex-container.child-gap-small > * { margin: 0.25em; } #fossil-status-bar { display: block; font-family: monospace; border-width: 1px; border-style: inset; border-color: inherit; min-height: 1.5em; font-size: 1.2em; padding: 0.2em; margin: 0.25em 0; |
︙ | ︙
1380 1381 1382 1383 1384 1385 1386 1387 1388 1389 1390 1391 1392 1393 | table.numbered-lines { width: 100%; table-layout: fixed /* required to keep ultra-wide code from exceeding window width, and instead force a scrollbar on them. */; } table.numbered-lines > tbody > tr { line-height: 1.35; white-space: pre; } table.numbered-lines > tbody > tr > td { font-family: inherit; font-size: inherit; line-height: inherit; | > | 1383 1384 1385 1386 1387 1388 1389 1390 1391 1392 1393 1394 1395 1396 1397 | table.numbered-lines { width: 100%; table-layout: fixed /* required to keep ultra-wide code from exceeding window width, and instead force a scrollbar on them. */; } table.numbered-lines > tbody > tr { font-family: monospace; line-height: 1.35; white-space: pre; } table.numbered-lines > tbody > tr > td { font-family: inherit; font-size: inherit; line-height: inherit; |
︙ | ︙
1530 1531 1532 1533 1534 1535 1536 | } blockquote.file-content { /* file content block in the /file page */ margin: 0 1em; } | < < < < < < < < < < < < < < < < < < < < < | 1534 1535 1536 1537 1538 1539 1540 1541 1542 1543 1544 1545 1546 1547 | } blockquote.file-content { /* file content block in the /file page */ margin: 0 1em; } /** Circular "help" buttons intended to be placed to the right of another element and hold text text for it. These typically get initialized automatically at page startup via fossil.popupwidget.js, and can be manually initialized/created using window.fossil.helpButtonlets.setup/create(). All of their |
︙ | ︙
1776 1777 1778 1779 1780 1781 1782 | body.branch .submenu > a.timeline-link { display: none; } body.branch .submenu > a.timeline-link.selected { display: inline; } | < < < < < < < | < < < < | > | 1759 1760 1761 1762 1763 1764 1765 1766 1767 1768 1769 1770 1771 1772 1773 1774 1775 1776 1777 1778 1779 1780 1781 1782 1783 1784 1785 | body.branch .submenu > a.timeline-link { display: none; } body.branch .submenu > a.timeline-link.selected { display: inline; } .monospace { font-family: monospace; } div.markdown > ol.footnotes { font-size: 90%; } div.markdown > ol.footnotes > li { margin-bottom: 0.5em; } div.markdown ol.footnotes > li.fn-joined > sup.fn-joined { color: gray; font-family: monospace; } div.markdown ol.footnotes > li.fn-joined > sup.fn-joined::after { content: "(joined from multiple locations) "; } div.markdown ol.footnotes > li.fn-misreference { margin-top: 0.75em; margin-bottom: 0.75em; |
︙ | ︙
1872 1873 1874 1875 1876 1877 1878 | /* Objects in the "desktoponly" class are invisible on mobile */ @media screen and (max-width: 600px) { .desktoponly { display: none; } } | < < < < < < < < | 1845 1846 1847 1848 1849 1850 1851 1852 1853 1854 1855 1856 1857 | /* Objects in the "desktoponly" class are invisible on mobile */ @media screen and (max-width: 600px) { .desktoponly { display: none; } } /* Objects in the "wideonly" class are invisible only on wide-screen desktops */ @media screen and (max-width: 1200px) { .wideonly { display: none; } } |
Changes to src/deltafunc.c.
︙ | ︙
484 485 486 487 488 489 490 | /* xCommit */ 0, /* xRollback */ 0, /* xFindMethod */ 0, /* xRename */ 0, /* xSavepoint */ 0, /* xRelease */ 0, /* xRollbackTo */ 0, | | < | 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 | /* xCommit */ 0, /* xRollback */ 0, /* xFindMethod */ 0, /* xRename */ 0, /* xSavepoint */ 0, /* xRelease */ 0, /* xRollbackTo */ 0, /* xShadowName */ 0 }; /* ** Invoke this routine to register the various delta functions. */ int deltafunc_init(sqlite3 *db){ int rc = SQLITE_OK; |
︙ | ︙
Changes to src/diff.c.
︙ | ︙
17 18 19 20 21 22 23 | ** ** This file contains code used to compute a "diff" between two ** text files. */ #include "config.h" #include "diff.h" #include <assert.h> | < | 17 18 19 20 21 22 23 24 25 26 27 28 29 30 | ** ** This file contains code used to compute a "diff" between two ** text files. */ #include "config.h" #include "diff.h" #include <assert.h> #if INTERFACE /* ** Flag parameters to the text_diff() routine used to control the formatting ** of the diff output. */ |
︙ | ︙ | |||
47 48 49 50 51 52 53 | #define DIFF_BROWSER 0x00008000 /* The --browser option */ #define DIFF_JSON 0x00010000 /* JSON output */ #define DIFF_DEBUG 0x00020000 /* Debugging diff output */ #define DIFF_RAW 0x00040000 /* Raw triples - for debugging */ #define DIFF_TCL 0x00080000 /* For the --tk option */ #define DIFF_INCBINARY 0x00100000 /* The --diff-binary option */ #define DIFF_SHOW_VERS 0x00200000 /* Show compared versions */ | < < < < < < < < | 46 47 48 49 50 51 52 53 54 55 56 57 58 59 | #define DIFF_BROWSER 0x00008000 /* The --browser option */ #define DIFF_JSON 0x00010000 /* JSON output */ #define DIFF_DEBUG 0x00020000 /* Debugging diff output */ #define DIFF_RAW 0x00040000 /* Raw triples - for debugging */ #define DIFF_TCL 0x00080000 /* For the --tk option */ #define DIFF_INCBINARY 0x00100000 /* The --diff-binary option */ #define DIFF_SHOW_VERS 0x00200000 /* Show compared versions */ /* ** These error messages are shared in multiple locations. They are defined ** here for consistency. */ #define DIFF_CANNOT_COMPUTE_BINARY \ "cannot compute difference between binary files\n" |
︙ | ︙ | |||
89 90 91 92 93 94 95 | ** Conceptually, this object is as an encoding of the command-line options ** for the "fossil diff" command. That is not a precise description, though, ** because not all diff operations are started from the command-line. But ** the idea is sound. ** ** Information encoded by this object includes but is not limited to: ** | | | | 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 | ** Conceptually, this object is as an encoding of the command-line options ** for the "fossil diff" command. That is not a precise description, though, ** because not all diff operations are started from the command-line. But ** the idea is sound. ** ** Information encoded by this object includes but is not limited to: ** ** * The desired output format (unified vs. side-by-side, ** TCL, JSON, HTML vs. plain-text). ** ** * Number of lines of context surrounding each difference block ** ** * Width of output columns for text side-by-side diffop */ struct DiffConfig { u64 diffFlags; /* Diff flags */ int nContext; /* Number of lines of context */ int wColumn; /* Column width in -y mode */ u32 nFile; /* Number of files diffed so far */ const char *zDiffCmd; /* External diff command to use instead of builtin */ |
︙ | ︙ | |||
429 430 431 432 433 434 435 | A = p->aFrom; B = p->aTo; R = p->aEdit; mxr = p->nEdit; while( mxr>2 && R[mxr-1]==0 && R[mxr-2]==0 ){ mxr -= 3; } for(r=0; r<mxr; r += 3*nr){ /* Figure out how many triples to show in a single block */ | | | 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 | A = p->aFrom; B = p->aTo; R = p->aEdit; mxr = p->nEdit; while( mxr>2 && R[mxr-1]==0 && R[mxr-2]==0 ){ mxr -= 3; } for(r=0; r<mxr; r += 3*nr){ /* Figure out how many triples to show in a single block */ for(nr=1; R[r+nr*3]>0 && R[r+nr*3]<nContext*2; nr++){} /* printf("r=%d nr=%d\n", r, nr); */ /* For the current block comprising nr triples, figure out ** how many lines of A and B are to be displayed */ if( R[r]>nContext ){ na = nb = nContext; |
︙ | ︙ | |||
914 915 916 917 918 919 920 | /* ** This is an abstract superclass for an object that accepts difference ** lines and formats them for display. Subclasses of this object format ** the diff output in different ways. ** ** To subclass, create an instance of the DiffBuilder object and fill ** in appropriate method implementations. | | | 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 | /* ** This is an abstract superclass for an object that accepts difference ** lines and formats them for display. Subclasses of this object format ** the diff output in different ways. ** ** To subclass, create an instance of the DiffBuilder object and fill ** in appropriate method implementations. */ typedef struct DiffBuilder DiffBuilder; struct DiffBuilder { void (*xSkip)(DiffBuilder*, unsigned int, int); void (*xCommon)(DiffBuilder*,const DLine*); void (*xInsert)(DiffBuilder*,const DLine*); void (*xDelete)(DiffBuilder*,const DLine*); void (*xReplace)(DiffBuilder*,const DLine*,const DLine*); |
︙ | ︙ | |||
1099 1100 1101 1102 1103 1104 1105 | blob_append_char(p->pOut, ' '); blob_append_tcl_literal(p->pOut, pX->z + x, chng.a[i].iStart1 - x); x = chng.a[i].iStart1; blob_append_char(p->pOut, ' '); blob_append_tcl_literal(p->pOut, pX->z + x, chng.a[i].iLen1); x += chng.a[i].iLen1; blob_append_char(p->pOut, ' '); | | | 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 | blob_append_char(p->pOut, ' '); blob_append_tcl_literal(p->pOut, pX->z + x, chng.a[i].iStart1 - x); x = chng.a[i].iStart1; blob_append_char(p->pOut, ' '); blob_append_tcl_literal(p->pOut, pX->z + x, chng.a[i].iLen1); x += chng.a[i].iLen1; blob_append_char(p->pOut, ' '); blob_append_tcl_literal(p->pOut, pY->z + chng.a[i].iStart2, chng.a[i].iLen2); } if( x<pX->n ){ blob_append_char(p->pOut, ' '); blob_append_tcl_literal(p->pOut, pX->z + x, pX->n - x); } blob_append_char(p->pOut, '\n'); |
︙ | ︙ | |||
1185 1186 1187 1188 1189 1190 1191 | } blob_append_json_literal(p->pOut, pX->z + x, chng.a[i].iStart1 - x); x = chng.a[i].iStart1; blob_append_char(p->pOut, ','); blob_append_json_literal(p->pOut, pX->z + x, chng.a[i].iLen1); x += chng.a[i].iLen1; blob_append_char(p->pOut, ','); | | | 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 1189 1190 | } blob_append_json_literal(p->pOut, pX->z + x, chng.a[i].iStart1 - x); x = chng.a[i].iStart1; blob_append_char(p->pOut, ','); blob_append_json_literal(p->pOut, pX->z + x, chng.a[i].iLen1); x += chng.a[i].iLen1; blob_append_char(p->pOut, ','); blob_append_json_literal(p->pOut, pY->z + chng.a[i].iStart2, chng.a[i].iLen2); } blob_append_char(p->pOut, ','); blob_append_json_literal(p->pOut, pX->z + x, pX->n - x); blob_append(p->pOut, "],\n",3); } static void dfjsonEnd(DiffBuilder *p){ |
︙ | ︙ | |||
1267 1268 1269 1270 1271 1272 1273 | /* "+" marks for the separator on inserted lines */ for(i=0; i<p->nPending; i++) blob_append(&p->aCol[1], "+\n", 2); /* Text of the inserted lines */ blob_append(&p->aCol[2], "<ins>", 5); blob_append_xfer(&p->aCol[2], &p->aCol[4]); blob_append(&p->aCol[2], "</ins>", 6); | | | 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 | /* "+" marks for the separator on inserted lines */ for(i=0; i<p->nPending; i++) blob_append(&p->aCol[1], "+\n", 2); /* Text of the inserted lines */ blob_append(&p->aCol[2], "<ins>", 5); blob_append_xfer(&p->aCol[2], &p->aCol[4]); blob_append(&p->aCol[2], "</ins>", 6); p->nPending = 0; } static void dfunifiedFinishRow(DiffBuilder *p){ dfunifiedFinishDelete(p); dfunifiedFinishInsert(p); if( blob_size(&p->aCol[0])==0 ) return; blob_append(p->pOut, "</pre></td><td class=\"diffln difflnr\"><pre>\n", -1); |
︙ | ︙ | |||
2004 2005 2006 2007 2008 2009 2010 | aBig = aRight; nBig = nRight; } iDivBig = nBig/2; iDivSmall = nSmall/2; if( pCfg->diffFlags & DIFF_DEBUG ){ | | | 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 | aBig = aRight; nBig = nRight; } iDivBig = nBig/2; iDivSmall = nSmall/2; if( pCfg->diffFlags & DIFF_DEBUG ){ fossil_print(" Divide at [%.*s]\n", aBig[iDivBig].n, aBig[iDivBig].z); } bestScore = 10000; for(i=0; i<nSmall; i++){ score = match_dline(aBig+iDivBig, aSmall+i) + abs(i-nSmall/2)*2; if( score<bestScore ){ |
︙ | ︙ | |||
2230 2231 2232 2233 2234 2235 2236 | B = p->aTo; R = p->aEdit; mxr = p->nEdit; while( mxr>2 && R[mxr-1]==0 && R[mxr-2]==0 ){ mxr -= 3; } for(r=0; r<mxr; r += 3*nr){ /* Figure out how many triples to show in a single block */ | | | 2221 2222 2223 2224 2225 2226 2227 2228 2229 2230 2231 2232 2233 2234 2235 | B = p->aTo; R = p->aEdit; mxr = p->nEdit; while( mxr>2 && R[mxr-1]==0 && R[mxr-2]==0 ){ mxr -= 3; } for(r=0; r<mxr; r += 3*nr){ /* Figure out how many triples to show in a single block */ for(nr=1; R[r+nr*3]>0 && R[r+nr*3]<(int)nContext*2; nr++){} /* If there is a regex, skip this block (generate no diff output) ** if the regex matches or does not match both insert and delete. ** Only display the block if one side matches but the other side does ** not. */ if( pCfg->pRe ){ |
︙ | ︙ | |||
3156 3157 3158 3159 3160 3161 3162 | } /* Undocumented and unsupported flags used for development ** debugging and analysis: */ if( find_option("debug",0,0)!=0 ) diffFlags |= DIFF_DEBUG; if( find_option("raw",0,0)!=0 ) diffFlags |= DIFF_RAW; } | | < < < | | < < | 3147 3148 3149 3150 3151 3152 3153 3154 3155 3156 3157 3158 3159 3160 3161 3162 3163 3164 3165 3166 3167 3168 3169 3170 3171 | } /* Undocumented and unsupported flags used for development ** debugging and analysis: */ if( find_option("debug",0,0)!=0 ) diffFlags |= DIFF_DEBUG; if( find_option("raw",0,0)!=0 ) diffFlags |= DIFF_RAW; } if( (z = find_option("context","c",1))!=0 && (f = atoi(z))!=0 ){ pCfg->nContext = f; diffFlags |= DIFF_CONTEXT_EX; } if( (z = find_option("width","W",1))!=0 && (f = atoi(z))>0 ){ pCfg->wColumn = f; } if( find_option("linenum","n",0)!=0 ) diffFlags |= DIFF_LINENO; if( find_option("noopt",0,0)!=0 ) diffFlags |= DIFF_NOOPT; if( find_option("numstat",0,0)!=0 ) diffFlags |= DIFF_NUMSTAT; if( find_option("versions","h",0)!=0 ) diffFlags |= DIFF_SHOW_VERS; if( find_option("invert",0,0)!=0 ) diffFlags |= DIFF_INVERT; if( find_option("brief",0,0)!=0 ) diffFlags |= DIFF_BRIEF; if( find_option("internal","i",0)==0 && (diffFlags & (DIFF_HTML|DIFF_TCL|DIFF_DEBUG|DIFF_JSON))==0 ){ pCfg->zDiffCmd = find_option("command", 0, 1); if( pCfg->zDiffCmd==0 ) pCfg->zDiffCmd = diff_command_external(isGDiff); |
︙ | ︙ | |||
3496 3497 3498 3499 3500 3501 3502 | } p->nVers++; cnt++; } if( p->nVers==0 ){ if( zRevision ){ | | < | 3482 3483 3484 3485 3486 3487 3488 3489 3490 3491 3492 3493 3494 3495 3496 | } p->nVers++; cnt++; } if( p->nVers==0 ){ if( zRevision ){ fossil_fatal("file %s does not exist in check-in %s", zFilename, zRevision); }else{ fossil_fatal("no history for file: %s", zFilename); } } db_finalize(&q); db_end_transaction(0); |
︙ | ︙ |
Changes to src/diff.tcl.
1 2 3 4 5 6 7 8 9 10 11 12 | # The "diff --tk" command prepends a "set fossilcmd {...}" line # to this file, then runs this file using "tclsh" in order to display the # graphical diff in a separate window. A typical "set fossilcmd" line # looks like this: # # set fossilcmd {| "./fossil" diff --html -y -i -v} # # This header comment is stripped off by the "mkbuiltin.c" program. # set prog { package require Tk | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 | # The "diff --tk" command prepends a "set fossilcmd {...}" line # to this file, then runs this file using "tclsh" in order to display the # graphical diff in a separate window. A typical "set fossilcmd" line # looks like this: # # set fossilcmd {| "./fossil" diff --html -y -i -v} # # This header comment is stripped off by the "mkbuiltin.c" program. # set prog { package require Tk array set CFG { TITLE {Fossil Diff} LN_COL_BG #dddddd LN_COL_FG #444444 TXT_COL_BG #ffffff TXT_COL_FG #000000 MKR_COL_BG #444444 MKR_COL_FG #dddddd
︙ | ︙ | |||
30 31 32 33 34 35 36 | ERR_FG #ee0000 PADX 5 WIDTH 80 HEIGHT 45 LB_HEIGHT 25 } | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | 30 31 32 33 34 35 36 37 38 39 40 41 42 43 | ERR_FG #ee0000 PADX 5 WIDTH 80 HEIGHT 45 LB_HEIGHT 25 } if {![namespace exists ttk]} { interp alias {} ::ttk::scrollbar {} ::scrollbar interp alias {} ::ttk::menubutton {} ::menubutton } proc dehtml {x} { set x [regsub -all {<[^>]*>} $x {}] |
︙ | ︙ |
Changes to src/diffcmd.c.
︙ | ︙ | |||
111 112 113 114 115 116 117 | } return 0; } /* ** Print details about the compared versions - possibly the working directory ** or the undo buffer. For check-ins, show hash and commit time. | | | 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 | } return 0; } /* ** Print details about the compared versions - possibly the working directory ** or the undo buffer. For check-ins, show hash and commit time. ** ** This is intended primarily to go into the "header garbage" that is ignored ** by patch(1). ** ** zFrom and zTo are interpreted as symbolic version names, unless they ** start with '(', in which case they are printed directly. */ void diff_print_versions(const char *zFrom, const char *zTo, DiffConfig *pCfg){ |
︙ | ︙ | |||
159 160 161 162 163 164 165 | void diff_print_filenames( const char *zLeft, /* Name of the left file */ const char *zRight, /* Name of the right file */ DiffConfig *pCfg, /* Diff configuration */ Blob *pOut /* Write to this blob, or stdout of this is NULL */ ){ u64 diffFlags = pCfg->diffFlags; | < < < | 159 160 161 162 163 164 165 166 167 168 169 170 171 172 | void diff_print_filenames( const char *zLeft, /* Name of the left file */ const char *zRight, /* Name of the right file */ DiffConfig *pCfg, /* Diff configuration */ Blob *pOut /* Write to this blob, or stdout of this is NULL */ ){ u64 diffFlags = pCfg->diffFlags; if( diffFlags & (DIFF_BRIEF|DIFF_RAW) ){ /* no-op */ }else if( diffFlags & DIFF_DEBUG ){ blob_appendf(pOut, "FILE-LEFT %s\nFILE-RIGHT %s\n", zLeft, zRight); }else if( diffFlags & DIFF_WEBPAGE ){ if( fossil_strcmp(zLeft,zRight)==0 ){ blob_appendf(pOut,"<h1>%h</h1>\n", zLeft); |
︙ | ︙ | |||
214 215 216 217 218 219 220 | }else{ blob_appendf(pOut, "--- %s\n+++ %s\n", zLeft, zRight); } } /* | | | | 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 | }else{ blob_appendf(pOut, "--- %s\n+++ %s\n", zLeft, zRight); } } /* ** Default header text for diff with --webpage */ static const char zWebpageHdr[] = @ <!DOCTYPE html> @ <html> @ <head> @ <meta charset="UTF-8"> @ <style> @ body { @ background-color: white; |
︙ | ︙ | |||
311 312 313 314 315 316 317 | @ font-weight: bold; @ } @ td.difftxt ins > ins.edit { @ background-color: #c0c0ff; @ text-decoration: none; @ font-weight: bold; @ } | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | | 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 | @ font-weight: bold; @ } @ td.difftxt ins > ins.edit { @ background-color: #c0c0ff; @ text-decoration: none; @ font-weight: bold; @ } @ @ </style> @ </head> @ <body> ; const char zWebpageEnd[] = @ </body> @ </html> ; /* ** State variables used by the --browser option for diff. These must ** be static variables, not elements of DiffConfig, since they are |
︙ | ︙ | |||
514 515 516 517 518 519 520 | #ifndef _WIN32 signal(SIGINT, diff_www_interrupt); #else SetConsoleCtrlHandler(diff_console_ctrl_handler, TRUE); #endif } if( (pCfg->diffFlags & DIFF_WEBPAGE)!=0 ){ | < | | | 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 | #ifndef _WIN32 signal(SIGINT, diff_www_interrupt); #else SetConsoleCtrlHandler(diff_console_ctrl_handler, TRUE); #endif } if( (pCfg->diffFlags & DIFF_WEBPAGE)!=0 ){ fossil_print("%s",zWebpageHdr); fflush(stdout); } } /* Do any final output required by a diff and complete the diff ** process. ** ** For --browser and --webpage, output any javascript required by ** the diff. (Currently JS is only needed for side-by-side diffs). ** ** For --browser, close the connection to the temporary file, then ** launch a web browser to view the file. After a delay ** of FOSSIL_BROWSER_DIFF_DELAY milliseconds, delete the temp file. */ void diff_end(DiffConfig *pCfg, int nErr){ |
︙ | ︙ | |||
575 576 577 578 579 580 581 | if( pCfg->zDiffCmd==0 ){ Blob out; /* Diff output text */ Blob file2; /* Content of zFile2 */ const char *zName2; /* Name of zFile2 for display */ /* Read content of zFile2 into memory */ blob_zero(&file2); | | | 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 | if( pCfg->zDiffCmd==0 ){ Blob out; /* Diff output text */ Blob file2; /* Content of zFile2 */ const char *zName2; /* Name of zFile2 for display */ /* Read content of zFile2 into memory */ blob_zero(&file2); if( file_size(zFile2, ExtFILE)<0 ){ zName2 = NULL_DEVICE; }else{ blob_read_from_file(&file2, zFile2, ExtFILE); zName2 = zName; } /* Compute and output the differences */ |
︙ | ︙ | |||
606 607 608 609 610 611 612 | } /* Release memory resources */ blob_reset(&file2); }else{ Blob nameFile1; /* Name of temporary file to old pFile1 content */ Blob cmd; /* Text of command to run */ | < | 467 468 469 470 471 472 473 474 475 476 477 478 479 480 | } /* Release memory resources */ blob_reset(&file2); }else{ Blob nameFile1; /* Name of temporary file to old pFile1 content */ Blob cmd; /* Text of command to run */ if( (pCfg->diffFlags & DIFF_INCBINARY)==0 ){ Blob file2; if( looks_like_binary(pFile1) ){ fossil_print("%s",DIFF_CANNOT_COMPUTE_BINARY); return; } |
︙ | ︙ | |||
638 639 640 641 642 643 644 | } blob_reset(&file2); } /* Construct a temporary file to hold pFile1 based on the name of ** zFile2 */ file_tempname(&nameFile1, zFile2, "orig"); | < < < < < < < < < | | | 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 | } blob_reset(&file2); } /* Construct a temporary file to hold pFile1 based on the name of ** zFile2 */ file_tempname(&nameFile1, zFile2, "orig"); blob_write_to_file(pFile1, blob_str(&nameFile1)); /* Construct the external diff command */ blob_zero(&cmd); blob_append(&cmd, pCfg->zDiffCmd, -1); if( pCfg->diffFlags & DIFF_INVERT ){ blob_append_escaped_arg(&cmd, zFile2, 1); blob_append_escaped_arg(&cmd, blob_str(&nameFile1), 1); }else{ blob_append_escaped_arg(&cmd, blob_str(&nameFile1), 1); blob_append_escaped_arg(&cmd, zFile2, 1); } /* Run the external diff command */ fossil_system(blob_str(&cmd)); /* Delete the temporary file and clean up memory used */ file_delete(blob_str(&nameFile1)); blob_reset(&nameFile1); blob_reset(&cmd); } } /* ** Show the difference between two files, both in memory. |
︙ | ︙ | |||
708 709 710 711 712 713 714 | /* Release memory resources */ blob_reset(&out); }else{ Blob cmd; Blob temp1; Blob temp2; | < < | < < < < < < < < < < | | | | | 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 | /* Release memory resources */ blob_reset(&out); }else{ Blob cmd; Blob temp1; Blob temp2; if( (pCfg->diffFlags & DIFF_INCBINARY)==0 ){ if( looks_like_binary(pFile1) || looks_like_binary(pFile2) ){ fossil_print("%s",DIFF_CANNOT_COMPUTE_BINARY); return; } if( pCfg->zBinGlob ){ Glob *pBinary = glob_create(pCfg->zBinGlob); if( glob_match(pBinary, zName) ){ fossil_print("%s",DIFF_CANNOT_COMPUTE_BINARY); glob_free(pBinary); return; } glob_free(pBinary); } } /* Construct a temporary file names */ file_tempname(&temp1, zName, "before"); file_tempname(&temp2, zName, "after"); blob_write_to_file(pFile1, blob_str(&temp1)); blob_write_to_file(pFile2, blob_str(&temp2)); /* Construct the external diff command */ blob_zero(&cmd); blob_append(&cmd, pCfg->zDiffCmd, -1); blob_append_escaped_arg(&cmd, blob_str(&temp1), 1); blob_append_escaped_arg(&cmd, blob_str(&temp2), 1); /* Run the external diff command */ fossil_system(blob_str(&cmd)); /* Delete the temporary file and clean up memory used */ file_delete(blob_str(&temp1)); file_delete(blob_str(&temp2)); blob_reset(&temp1); blob_reset(&temp2); blob_reset(&cmd); } } |
︙ | ︙ | |||
873 874 875 876 877 878 879 | blob_zero(&fname); file_relative_name(zPathname, &fname, 1); }else{ blob_set(&fname, g.zLocalRoot); blob_append(&fname, zPathname, -1); } zFullName = blob_str(&fname); | < < < < < < < | < | 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 | blob_zero(&fname); file_relative_name(zPathname, &fname, 1); }else{ blob_set(&fname, g.zLocalRoot); blob_append(&fname, zPathname, -1); } zFullName = blob_str(&fname); if( isDeleted ){ if( !isNumStat ){ fossil_print("DELETED %s\n", zPathname); } if( !asNewFile ){ showDiff = 0; zFullName = NULL_DEVICE; } }else if( file_access(zFullName, F_OK) ){ if( !isNumStat ){ fossil_print("MISSING %s\n", zPathname); } if( !asNewFile ){ showDiff = 0; } }else if( isNew ){ if( !isNumStat ){ fossil_print("ADDED %s\n", zPathname); } srcid = 0; if( !asNewFile ){ showDiff = 0; } }else if( isChnged==3 ){ if( !isNumStat ){ fossil_print("ADDED_BY_MERGE %s\n", zPathname); } srcid = 0; if( !asNewFile ){ showDiff = 0; } }else if( isChnged==5 ){ if( !isNumStat ){ fossil_print("ADDED_BY_INTEGRATE %s\n", zPathname); } srcid = 0; if( !asNewFile ){ showDiff = 0; } } if( showDiff ){ Blob content; if( !isLink != !file_islink(zFullName) ){ diff_print_index(zPathname, pCfg, 0); diff_print_filenames(zPathname, zPathname, pCfg, 0); fossil_print("%s",DIFF_CANNOT_COMPUTE_SYMLINK); continue; } if( srcid>0 ){ content_get(srcid, &content); }else{ blob_zero(&content); } if( isChnged==0 || !file_same_as_blob(&content, zFullName) ){ diff_print_index(zPathname, pCfg, pOut); diff_file(&content, zFullName, zPathname, pCfg, pOut); } blob_reset(&content); } blob_reset(&fname); } |
︙ | ︙ | |||
945 946 947 948 949 950 951 | ){ Stmt q; Blob content; db_prepare(&q, "SELECT pathname, content FROM undo"); blob_init(&content, 0, 0); if( (pCfg->diffFlags & DIFF_SHOW_VERS)!=0 ){ diff_print_versions("(undo)", "(workdir)", pCfg); | | | 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 | ){ Stmt q; Blob content; db_prepare(&q, "SELECT pathname, content FROM undo"); blob_init(&content, 0, 0); if( (pCfg->diffFlags & DIFF_SHOW_VERS)!=0 ){ diff_print_versions("(undo)", "(workdir)", pCfg); } while( db_step(&q)==SQLITE_ROW ){ char *zFullName; const char *zFile = (const char*)db_column_text(&q, 0); if( !file_dir_match(pFileDir, zFile) ) continue; zFullName = mprintf("%s%s", g.zLocalRoot, zFile); db_column_blob(&q, 1, &content); diff_file(&content, zFullName, zFile, pCfg, 0); |
︙ | ︙ | |||
1032 1033 1034 1035 1036 1037 1038 | manifest_file_rewind(pFrom); pFromFile = manifest_file_next(pFrom,0); pTo = manifest_get_by_name(zTo, 0); manifest_file_rewind(pTo); pToFile = manifest_file_next(pTo,0); if( (pCfg->diffFlags & DIFF_SHOW_VERS)!=0 ){ diff_print_versions(zFrom, zTo, pCfg); | | < < < | 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 | manifest_file_rewind(pFrom); pFromFile = manifest_file_next(pFrom,0); pTo = manifest_get_by_name(zTo, 0); manifest_file_rewind(pTo); pToFile = manifest_file_next(pTo,0); if( (pCfg->diffFlags & DIFF_SHOW_VERS)!=0 ){ diff_print_versions(zFrom, zTo, pCfg); } while( pFromFile || pToFile ){ int cmp; if( pFromFile==0 ){ cmp = +1; }else if( pToFile==0 ){ cmp = -1; }else{ cmp = fossil_strcmp(pFromFile->zName, pToFile->zName); } if( cmp<0 ){ if( file_dir_match(pFileDir, pFromFile->zName) ){ if( (pCfg->diffFlags & (DIFF_NUMSTAT|DIFF_HTML))==0 ){ fossil_print("DELETED %s\n", pFromFile->zName); } if( asNewFlag ){ diff_manifest_entry(pFromFile, 0, pCfg); } } pFromFile = manifest_file_next(pFrom,0); }else if( cmp>0 ){ if( file_dir_match(pFileDir, pToFile->zName) ){ if( (pCfg->diffFlags & (DIFF_NUMSTAT|DIFF_HTML|DIFF_TCL|DIFF_JSON))==0 ){ fossil_print("ADDED %s\n", pToFile->zName); } if( asNewFlag ){ diff_manifest_entry(0, pToFile, pCfg); } } pToFile = manifest_file_next(pTo,0); }else if( fossil_strcmp(pFromFile->zUuid, pToFile->zUuid)==0 ){ /* No changes */ |
︙ | ︙ | |||
1126 1127 1128 1129 1130 1131 1132 | */ void diff_tk(const char *zSubCmd, int firstArg){ int i; Blob script; const char *zTempFile = 0; char *zCmd; const char *zTclsh; | < | 954 955 956 957 958 959 960 961 962 963 964 965 966 967 | */ void diff_tk(const char *zSubCmd, int firstArg){ int i; Blob script; const char *zTempFile = 0; char *zCmd; const char *zTclsh; blob_zero(&script); blob_appendf(&script, "set fossilcmd {| \"%/\" %s -tcl -i -v", g.nameOfExe, zSubCmd); find_option("tcl",0,0); find_option("html",0,0); find_option("side-by-side","y",0); find_option("internal","i",0); |
︙ | ︙ | |||
1153 1154 1155 1156 1157 1158 1159 | blob_appendf(&script, " {%/}", z); }else{ int j; blob_append(&script, " ", 1); for(j=0; z[j]; j++) blob_appendf(&script, "\\%03o", (unsigned char)z[j]); } } | < | | 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 | blob_appendf(&script, " {%/}", z); }else{ int j; blob_append(&script, " ", 1); for(j=0; z[j]; j++) blob_appendf(&script, "\\%03o", (unsigned char)z[j]); } } blob_appendf(&script, "}\n%s", builtin_file("diff.tcl", 0)); if( zTempFile ){ blob_write_to_file(&script, zTempFile); fossil_print("To see diff, run: %s \"%s\"\n", zTclsh, zTempFile); }else{ #if defined(FOSSIL_ENABLE_TCL) Th_FossilInit(TH_INIT_DEFAULT); if( evaluateTclWithEvents(g.interp, &g.tcl, blob_str(&script), |
︙ | ︙ | |||
1207 1208 1209 1210 1211 1212 1213 | ** out. Or if the FILE arguments are omitted, show all unsaved changes ** currently in the working check-out. ** ** The default output format is a "unified patch" (the same as the ** output of "diff -u" on most unix systems). Many alternative formats ** are available. A few of the more useful alternatives: ** | | | 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 | ** out. Or if the FILE arguments are omitted, show all unsaved changes ** currently in the working check-out. ** ** The default output format is a "unified patch" (the same as the ** output of "diff -u" on most unix systems). Many alternative formats ** are available. A few of the more useful alternatives: ** ** --tk Pop up a TCL/TK-based GUI to show the diff ** --by Show a side-by-side diff in the default web browser ** -b Show a linear diff in the default web browser ** -y Show a text side-by-side diff ** --webpage Format output as HTML ** --webpage -y HTML output in the side-by-side format ** ** The "--from VERSION" option is used to specify the source check-in |
︙ | ︙ | |||
1248 1249 1250 1251 1252 1253 1254 | ** as binary ** --branch BRANCH Show diff of all changes on BRANCH ** --brief Show filenames only ** -b|--browser Show the diff output in a web-browser ** --by Shorthand for "--browser -y" ** -ci|--checkin VERSION Show diff of all changes in VERSION ** --command PROG External diff program. Overrides "diff-command" | | | < < < | | | 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 | ** as binary ** --branch BRANCH Show diff of all changes on BRANCH ** --brief Show filenames only ** -b|--browser Show the diff output in a web-browser ** --by Shorthand for "--browser -y" ** -ci|--checkin VERSION Show diff of all changes in VERSION ** --command PROG External diff program. Overrides "diff-command" ** -c|--context N Show N lines of context around each change, with ** negative N meaning show all content ** --diff-binary BOOL Include binary files with external commands ** --exec-abs-paths Force absolute path names on external commands ** --exec-rel-paths Force relative path names on external commands ** -r|--from VERSION Select VERSION as source for the diff ** -w|--ignore-all-space Ignore white space when comparing lines ** -i|--internal Use internal diff logic ** --json Output formatted as JSON ** -N|--new-file Alias for --verbose ** --numstat Show only the number of added and deleted lines ** -y|--side-by-side Side-by-side diff ** --strip-trailing-cr Strip trailing CR ** --tcl TCL-formatted output used internally by --tk ** --tclsh PATH TCL/TK used for --tk (default: "tclsh") ** --tk Launch a Tcl/Tk GUI for display ** --to VERSION Select VERSION as target for the diff ** --undo Diff against the "undo" buffer ** --unified Unified diff ** -v|--verbose Output complete text of added or deleted files ** -h|--versions Show compared versions in the diff header ** --webpage Format output as a stand-alone HTML webpage
︙ | ︙ | |||
1381 1382 1383 1384 1385 1386 1387 | } fossil_free(pFileDir[i].zName); } fossil_free(pFileDir); } diff_end(&DCfg, 0); if ( DCfg.diffFlags & DIFF_NUMSTAT ){ | | | 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 | } fossil_free(pFileDir[i].zName); } fossil_free(pFileDir); } diff_end(&DCfg, 0); if ( DCfg.diffFlags & DIFF_NUMSTAT ){ fossil_print("%10d %10d TOTAL over %d changed files\n", g.diffCnt[1], g.diffCnt[2], g.diffCnt[0]); } } /* ** WEBPAGE: vpatch ** URL: /vpatch?from=FROM&to=TO |
︙ | ︙ |
Changes to src/dispatch.c.
︙ | ︙ | |||
451 452 453 454 455 456 457 | aIndent[iLevel] = nIndent; azEnd[iLevel] = zEndUL; if( wantP ){ blob_append(pHtml,"<p>", 3); wantP = 0; } blob_append(pHtml, "<ul>\n", 5); | | | 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 | aIndent[iLevel] = nIndent; azEnd[iLevel] = zEndUL; if( wantP ){ blob_append(pHtml,"<p>", 3); wantP = 0; } blob_append(pHtml, "<ul>\n", 5); }else if( isDT || zHelp[nIndent]=='-' || hasGap(zHelp+nIndent,i-nIndent) ){ iLevel++; aIndent[iLevel] = nIndent; azEnd[iLevel] = zEndDL; wantP = 0; blob_append(pHtml, "<blockquote><dl>\n", -1); |
︙ | ︙ | |||
545 546 547 548 549 550 551 | if( c=='[' && (x = help_is_link(zHelp+i, 100000))!=0 ){ if( i>0 ) blob_append(pText, zHelp, i); zHelp += i+2; blob_append(pText, zHelp, x-3); zHelp += x-1; i = -1; continue; | | | | 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 | if( c=='[' && (x = help_is_link(zHelp+i, 100000))!=0 ){ if( i>0 ) blob_append(pText, zHelp, i); zHelp += i+2; blob_append(pText, zHelp, x-3); zHelp += x-1; i = -1; continue; } } if( i>0 ){ blob_append(pText, zHelp, i); } } /* ** Display help for all commands based on provided flags. */ static void display_all_help(int mask, int useHtml, int rawOut){ int i; |
︙ | ︙ | |||
633 634 635 636 637 638 639 | ** ** Show help text for commands and pages. Useful for proof-reading. ** Defaults to just the CLI commands. Specify --www to see only the ** web pages, or --everything to see both commands and pages. ** ** Options: ** -a|--aliases Show aliases | | | 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 | ** ** Show help text for commands and pages. Useful for proof-reading. ** Defaults to just the CLI commands. Specify --www to see only the ** web pages, or --everything to see both commands and pages. ** ** Options: ** -a|--aliases Show aliases ** -e|--everything Show all commands and pages. Omit aliases to ** avoid duplicates. ** -h|--html Transform output to HTML ** -o|--options Show global options ** -r|--raw No output formatting ** -s|--settings Show settings ** -t|--test Include test- commands ** -w|--www Show WWW pages |
︙ | ︙ | |||
659 660 661 662 663 664 665 | CMDFLAG_ALIAS | CMDFLAG_SETTING | CMDFLAG_TEST; } if( find_option("settings","s",0) ){ mask = CMDFLAG_SETTING; } if( find_option("aliases","a",0) ){ mask = CMDFLAG_ALIAS; | | | 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 | CMDFLAG_ALIAS | CMDFLAG_SETTING | CMDFLAG_TEST; } if( find_option("settings","s",0) ){ mask = CMDFLAG_SETTING; } if( find_option("aliases","a",0) ){ mask = CMDFLAG_ALIAS; } if( find_option("test","t",0) ){ mask |= CMDFLAG_TEST; } display_all_help(mask, useHtml, rawOut); } /* |
︙ | ︙ | |||
766 767 768 769 770 771 772 | iLast = FOSSIL_FIRST_CMD-1; }else{ iFirst = FOSSIL_FIRST_CMD; iLast = MX_COMMAND-1; } while( n<nArray ){ | | | 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 | iLast = FOSSIL_FIRST_CMD-1; }else{ iFirst = FOSSIL_FIRST_CMD; iLast = MX_COMMAND-1; } while( n<nArray ){ bestScore = mxScore; for(i=iFirst; i<=iLast; i++){ m = edit_distance(zIn, aCommand[i].zName); if( m<mnScore ) continue; if( m==mnScore ){ azArray[n++] = aCommand[i].zName; if( n>=nArray ) return n; }else if( m<bestScore ){ |
︙ | ︙ | |||
895 896 897 898 899 900 901 | @ <li><a href="%R/help?cmd=%s(z)">%s(zBoldOn)%s(z)%s(zBoldOff)</a> /* Output aliases */ if( occHelp[aCommand[i].iHelp] > 1 ){ int j; int aliases[MX_HELP_DUP], nAliases=0; for(j=0; j<occHelp[aCommand[i].iHelp]; j++){ if( bktHelp[aCommand[i].iHelp][j] != i ){ | | < | 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 | @ <li><a href="%R/help?cmd=%s(z)">%s(zBoldOn)%s(z)%s(zBoldOff)</a> /* Output aliases */ if( occHelp[aCommand[i].iHelp] > 1 ){ int j; int aliases[MX_HELP_DUP], nAliases=0; for(j=0; j<occHelp[aCommand[i].iHelp]; j++){ if( bktHelp[aCommand[i].iHelp][j] != i ){ if( aCommand[bktHelp[aCommand[i].iHelp][j]].eCmdFlags & CMDFLAG_ALIAS ){ aliases[nAliases++] = bktHelp[aCommand[i].iHelp][j]; } } } if( nAliases>0 ){ int k; @(\ |
︙ | ︙ | |||
986 987 988 989 990 991 992 | style_set_current_feature("test"); style_header("All Help Text"); @ <dl> /* Fill in help string buckets */ for(i=0; i<MX_COMMAND; i++){ if(aCommand[i].eCmdFlags & CMDFLAG_HIDDEN) continue; bktHelp[aCommand[i].iHelp][occHelp[aCommand[i].iHelp]++] = i; | | | 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 | style_set_current_feature("test"); style_header("All Help Text"); @ <dl> /* Fill in help string buckets */ for(i=0; i<MX_COMMAND; i++){ if(aCommand[i].eCmdFlags & CMDFLAG_HIDDEN) continue; bktHelp[aCommand[i].iHelp][occHelp[aCommand[i].iHelp]++] = i; } for(i=0; i<MX_COMMAND; i++){ const char *zDesc; unsigned int e = aCommand[i].eCmdFlags; if( e & CMDFLAG_1ST_TIER ){ zDesc = "1st tier command"; }else if( e & CMDFLAG_2ND_TIER ){ zDesc = "2nd tier command"; |
︙ | ︙ | |||
1038 1039 1040 1041 1042 1043 1044 | }else if( e & CMDFLAG_WEBPAGE ){ if( e & CMDFLAG_RAWCONTENT ){ zDesc = "raw-content web page"; }else{ zDesc = "web page"; } } | | | 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 | }else if( e & CMDFLAG_WEBPAGE ){ if( e & CMDFLAG_RAWCONTENT ){ zDesc = "raw-content web page"; }else{ zDesc = "web page"; } } @ <dt><big><b>%s(aCommand[bktHelp[aCommand[i].iHelp][j]].zName)</b> @</big> (%s(zDesc))</dt> } @ <p><dd> help_to_html(aCommand[i].zHelp, cgi_output_blob()); @ </dd><p> occHelp[aCommand[i].iHelp] = 0; |
︙ | ︙ | |||
1117 1118 1119 1120 1121 1122 1123 | /* ** Documentation on universal command-line options. */ /* @-comment: # */ static const char zOptions[] = @ Command-line options common to all commands: | | | | | 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 | /* ** Documentation on universal command-line options. */ /* @-comment: # */ static const char zOptions[] = @ Command-line options common to all commands: @ @ --args FILENAME Read additional arguments and options from FILENAME @ --case-sensitive BOOL Set case sensitivity for file names @ --cgitrace Active CGI tracing @ --chdir PATH Change to PATH before performing any operations @ --comfmtflags VALUE Set comment formatting flags to VALUE @ --comment-format VALUE Alias for --comfmtflags @ --errorlog FILENAME Log errors to FILENAME @ --help Show help on the command rather than running it @ --httptrace Trace outbound HTTP requests @ --localtime Display times using the local timezone @ --nocgi Do not act as CGI @ --no-th-hook Do not run TH1 hooks @ --quiet Reduce the amount of output @ --sqlstats Show SQL usage statistics when done |
︙ | ︙ | |||
1486 1487 1488 1489 1490 1491 1492 | helptextVtab_cursor *pCur = (helptextVtab_cursor*)cur; return pCur->iRowid>=MX_COMMAND; } /* ** This method is called to "rewind" the helptextVtab_cursor object back ** to the first row of output. This method is always called at least | | | | 1485 1486 1487 1488 1489 1490 1491 1492 1493 1494 1495 1496 1497 1498 1499 1500 1501 1502 1503 | helptextVtab_cursor *pCur = (helptextVtab_cursor*)cur; return pCur->iRowid>=MX_COMMAND; } /* ** This method is called to "rewind" the helptextVtab_cursor object back ** to the first row of output. This method is always called at least ** once prior to any call to helptextVtabColumn() or helptextVtabRowid() or ** helptextVtabEof(). */ static int helptextVtabFilter( sqlite3_vtab_cursor *pVtabCursor, int idxNum, const char *idxStr, int argc, sqlite3_value **argv ){ helptextVtab_cursor *pCur = (helptextVtab_cursor *)pVtabCursor; pCur->iRowid = 1; return SQLITE_OK; } |
︙ | ︙ | |||
1515 1516 1517 1518 1519 1520 1521 | ){ pIdxInfo->estimatedCost = (double)MX_COMMAND; pIdxInfo->estimatedRows = MX_COMMAND; return SQLITE_OK; } /* | | | 1514 1515 1516 1517 1518 1519 1520 1521 1522 1523 1524 1525 1526 1527 1528 | ){ pIdxInfo->estimatedCost = (double)MX_COMMAND; pIdxInfo->estimatedRows = MX_COMMAND; return SQLITE_OK; } /* ** This following structure defines all the methods for the ** virtual table. */ static sqlite3_module helptextVtabModule = { /* iVersion */ 0, /* xCreate */ 0, /* Helptext is eponymous and read-only */ /* xConnect */ helptextVtabConnect, /* xBestIndex */ helptextVtabBestIndex, |
︙ | ︙ | |||
1542 1543 1544 1545 1546 1547 1548 | /* xCommit */ 0, /* xRollback */ 0, /* xFindMethod */ 0, /* xRename */ 0, /* xSavepoint */ 0, /* xRelease */ 0, /* xRollbackTo */ 0, | | < | 1541 1542 1543 1544 1545 1546 1547 1548 1549 1550 1551 1552 1553 1554 1555 1556 1557 1558 1559 1560 | /* xCommit */ 0, /* xRollback */ 0, /* xFindMethod */ 0, /* xRename */ 0, /* xSavepoint */ 0, /* xRelease */ 0, /* xRollbackTo */ 0, /* xShadowName */ 0 }; /* ** Register the helptext virtual table */ int helptext_vtab_register(sqlite3 *db){ int rc = sqlite3_create_module(db, "helptext", &helptextVtabModule, 0); return rc; } /* End of the helptext virtual table ******************************************************************************/ |
Changes to src/doc.c.
︙ | ︙ | |||
339 340 341 342 343 344 345 | static char * zList = 0; static char const * zEnd = 0; static int once = 0; char * z; int tokenizerState /* 0=expecting a key, 1=skip next token, ** 2=accept next token */; if(once==0){ | | | 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 | static char * zList = 0; static char const * zEnd = 0; static int once = 0; char * z; int tokenizerState /* 0=expecting a key, 1=skip next token, ** 2=accept next token */; if(once==0){ once = 1; zList = db_get("mimetypes",0); if(zList==0){ return 0; } /* Transform zList to simplify the main loop: replace non-newline spaces with NUL bytes. */ zEnd = zList + strlen(zList); |
︙ | ︙ | |||
727 728 729 730 731 732 733 | ** Transfer content to the output. During the transfer, when text of ** the following form is seen: ** ** href="$ROOT/..." ** action="$ROOT/..." ** href=".../doc/$CURRENT/..." ** | | | | 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 | ** Transfer content to the output. During the transfer, when text of ** the following form is seen: ** ** href="$ROOT/..." ** action="$ROOT/..." ** href=".../doc/$CURRENT/..." ** ** Convert $ROOT to the root URI of the repository, and $CURRENT to the ** version number of the /doc/ document currently being displayed (if any). ** Allow ' in place of " and any case for href or action. ** ** Efforts are made to limit this translation to cases where the text is ** fully contained with an HTML markup element. */ void convert_href_and_output(Blob *pIn){ int i, base; int n = blob_size(pIn); |
︙ | ︙ | |||
828 829 830 831 832 833 834 | convert_href_and_output(pBody); if( !isPopup ){ document_emit_js(); style_finish_page(); } }else if( fossil_strcmp(zMime, "text/x-pikchr")==0 ){ style_adunit_config(ADUNIT_RIGHT_OK); | | | | 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 | convert_href_and_output(pBody); if( !isPopup ){ document_emit_js(); style_finish_page(); } }else if( fossil_strcmp(zMime, "text/x-pikchr")==0 ){ style_adunit_config(ADUNIT_RIGHT_OK); style_header("%s", zDefaultTitle); wiki_render_by_mimetype(pBody, zMime); style_finish_page(); #ifdef FOSSIL_ENABLE_TH1_DOCS }else if( Th_AreDocsEnabled() && fossil_strcmp(zMime, "application/x-th1")==0 ){ int raw = P("raw")!=0; if( !raw ){ Blob tail; blob_zero(&tail); |
︙ | ︙ | |||
1209 1210 1211 1212 1213 1214 1215 | ** ** The intended use case here is to supply an icon for the "fossil ui" ** command. For a permanent website, the recommended process is for ** the admin to set up a project-specific icon and reference that icon ** in the HTML header using a line like: ** ** <link rel="icon" href="URL-FOR-YOUR-ICON" type="MIMETYPE"/> | | | 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 | ** ** The intended use case here is to supply an icon for the "fossil ui" ** command. For a permanent website, the recommended process is for ** the admin to set up a project-specific icon and reference that icon ** in the HTML header using a line like: ** ** <link rel="icon" href="URL-FOR-YOUR-ICON" type="MIMETYPE"/> ** */ void favicon_page(void){ Blob icon; char *zMime; etag_check(ETAG_CONFIG, 0); zMime = db_get("icon-mimetype", "image/gif"); |
︙ | ︙ |
Changes to src/etag.c.
︙ | ︙ | |||
98 99 100 101 102 103 104 | char zBuf[50]; assert( zETag[0]==0 ); /* Only call this routine once! */ if( etagCancelled ) return; /* By default, ETagged URLs never expire since the ETag will change * when the content changes. Approximate this policy as 10 years. */ | | | | 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 | char zBuf[50]; assert( zETag[0]==0 ); /* Only call this routine once! */ if( etagCancelled ) return; /* By default, ETagged URLs never expire since the ETag will change * when the content changes. Approximate this policy as 10 years. */ iMaxAge = 10 * 365 * 24 * 60 * 60; md5sum_init(); /* Always include the executable ID as part of the hash */ md5sum_step_text("exe-id: ", -1); md5sum_step_text(fossil_exe_id(), -1); md5sum_step_text("\n", 1); if( (eFlags & ETAG_HASH)!=0 && zHash ){ md5sum_step_text("hash: ", -1); md5sum_step_text(zHash, -1); md5sum_step_text("\n", 1); iMaxAge = 0; } if( eFlags & ETAG_DATA ){ |
︙ | ︙ | |||
208 209 210 211 212 213 214 | /* Check to see the If-Modified-Since constraint is satisfied */ zIfModifiedSince = P("HTTP_IF_MODIFIED_SINCE"); if( zIfModifiedSince==0 ) return; x = cgi_rfc822_parsedate(zIfModifiedSince); if( x<mtime ) return; | | | 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 | /* Check to see the If-Modified-Since constraint is satisfied */ zIfModifiedSince = P("HTTP_IF_MODIFIED_SINCE"); if( zIfModifiedSince==0 ) return; x = cgi_rfc822_parsedate(zIfModifiedSince); if( x<mtime ) return; #if 0 /* If the Fossil executable is more recent than If-Modified-Since, ** go ahead and regenerate the resource. */ if( file_mtime(g.nameOfExe, ExtFILE)>x ) return; #endif /* If we reach this point, it means that the resource has not changed ** and that we should generate a 304 Not Modified reply */ |
︙ | ︙ | |||
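The hunk above implements a conditional GET: if the client's If-Modified-Since timestamp is older than the resource's mtime the resource is regenerated, otherwise a "304 Not Modified" reply is appropriate. The decision reduces to a comparison of two Unix timestamps; this small sketch (a hypothetical helper, not a Fossil function) captures it:

```c
/* Sketch of the 304 decision shown above.  Both arguments are seconds
** since 1970; ifModSince==0 means the client sent no If-Modified-Since
** header.  Returns 1 when "304 Not Modified" may be sent. */
static int should_send_304(long long mtime, long long ifModSince){
  if( ifModSince==0 ) return 0;    /* no conditional header sent */
  if( ifModSince<mtime ) return 0; /* resource changed since then */
  return 1;                        /* unchanged: reply 304 */
}
```

Note that the real code also parses the RFC 822 date header first (`cgi_rfc822_parsedate()`), and a disabled branch would additionally force regeneration when the Fossil executable itself is newer than the header date.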
242 243 244 245 246 247 248 | /* Return the last-modified time in seconds since 1970. Or return 0 if ** there is no last-modified time. */ sqlite3_int64 etag_mtime(void){ return iEtagMtime; } | | | 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 | /* Return the last-modified time in seconds since 1970. Or return 0 if ** there is no last-modified time. */ sqlite3_int64 etag_mtime(void){ return iEtagMtime; } /* ** COMMAND: test-etag ** ** Usage: fossil test-etag -key KEY-NUMBER -hash HASH ** ** Generate an etag given a KEY-NUMBER and/or a HASH. ** ** KEY-NUMBER is some combination of: |
︙ | ︙ |
Changes to src/export.c.
︙ | ︙ | |||
447 448 449 450 451 452 453 | }while( (rid = bag_next(vers, rid))!=0 ); } } } /* This is the original header command (and hence documentation) for ** the "fossil export" command: | | | 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 | }while( (rid = bag_next(vers, rid))!=0 ); } } } /* This is the original header command (and hence documentation) for ** the "fossil export" command: ** ** Usage: %fossil export --git ?OPTIONS? ?REPOSITORY? ** ** Write an export of all check-ins to standard output. The export is ** written in the git-fast-export file format assuming the --git option is ** provided. The git-fast-export format is currently the only VCS ** interchange format supported, though other formats may be added in ** the future. |
︙ | ︙ | |||
1002 1003 1004 1005 1006 1007 1008 | db_bind_int(&sIns, ":isfile", isFile!=0); db_step(&sIns); db_reset(&sIns); return mprintf(":%d", db_last_insert_rowid()); } /* This is the SHA3-256 hash of an empty file */ | | | 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 | db_bind_int(&sIns, ":isfile", isFile!=0); db_step(&sIns); db_reset(&sIns); return mprintf(":%d", db_last_insert_rowid()); } /* This is the SHA3-256 hash of an empty file */ static const char zEmptySha3[] = "a7ffc6f8bf1ed76651c14756a061d662f580ff4de43b49fa82d80a4b80f8434a"; /* ** Export a single file named by zUuid. ** ** Return 0 on success and non-zero on any failure. ** |
︙ | ︙ | |||
1035 1036 1037 1038 1039 1040 1041 | }else{ rc = content_get(rid, &data); if( rc==0 ){ if( bPhantomOk ){ blob_init(&data, 0, 0); gitmirror_message(VERB_EXTRA, "missing file: %s\n", zUuid); zUuid = zEmptySha3; | | | 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 | }else{ rc = content_get(rid, &data); if( rc==0 ){ if( bPhantomOk ){ blob_init(&data, 0, 0); gitmirror_message(VERB_EXTRA, "missing file: %s\n", zUuid); zUuid = zEmptySha3; }else{ return 1; } } } zMark = gitmirror_find_mark(zUuid, 1, 1); if( zMark[0]==':' ){ fprintf(xCmd, "blob\nmark %s\ndata %d\n", zMark, blob_size(&data)); |
︙ | ︙ | |||
1348 1349 1350 1351 1352 1353 1354 | int i; zCmd = "git symbolic-ref --short HEAD"; gitmirror_message(VERB_NORMAL, "%s\n", zCmd); xCmd = popen(zCmd, "r"); if( xCmd==0 ){ fossil_fatal("git command failed: %s", zCmd); } | | | | 1348 1349 1350 1351 1352 1353 1354 1355 1356 1357 1358 1359 1360 1361 1362 1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 | int i; zCmd = "git symbolic-ref --short HEAD"; gitmirror_message(VERB_NORMAL, "%s\n", zCmd); xCmd = popen(zCmd, "r"); if( xCmd==0 ){ fossil_fatal("git command failed: %s", zCmd); } z = fgets(zLine, sizeof(zLine), xCmd); pclose(xCmd); if( z==0 ){ fossil_fatal("no output from \"%s\"", zCmd); } for(i=0; z[i] && !fossil_isspace(z[i]); i++){} z[i] = 0; zMainBr = fossil_strdup(z); } return zMainBr; } /* ** Implementation of the "fossil git export" command. */ void gitmirror_export_command(void){ const char *zLimit; /* Text of the --limit flag */ int nLimit = 0x7fffffff; /* Numeric value of the --limit flag */ |
︙ | ︙ | |||
1434 1435 1436 1437 1438 1439 1440 | /* Make sure GIT has been initialized */ z = mprintf("%s/.git", zMirror); if( !file_isdir(z, ExtFILE) ){ zMainBr = gitmirror_init(zMirror, zMainBr); bNeedRepack = 1; } fossil_free(z); | | | 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443 1444 1445 1446 1447 1448 | /* Make sure GIT has been initialized */ z = mprintf("%s/.git", zMirror); if( !file_isdir(z, ExtFILE) ){ zMainBr = gitmirror_init(zMirror, zMainBr); bNeedRepack = 1; } fossil_free(z); /* Make sure the .mirror_state subdirectory exists */ z = mprintf("%s/.mirror_state", zMirror); rc = file_mkdir(z, ExtFILE, 0); if( rc ) fossil_fatal("cannot create directory \"%s\"", z); fossil_free(z); /* Attach the .mirror_state/db database */ |
︙ | ︙ | |||
1741 1742 1743 1744 1745 1746 1747 | char *zSql; int bQuiet = 0; int bByAll = 0; /* Undocumented option meaning this command was invoked ** from "fossil all" and should modify output accordingly */ db_find_and_open_repository(0, 0); bQuiet = find_option("quiet","q",0)!=0; | | | 1741 1742 1743 1744 1745 1746 1747 1748 1749 1750 1751 1752 1753 1754 1755 | char *zSql; int bQuiet = 0; int bByAll = 0; /* Undocumented option meaning this command was invoked ** from "fossil all" and should modify output accordingly */ db_find_and_open_repository(0, 0); bQuiet = find_option("quiet","q",0)!=0; bByAll = find_option("by-all",0,0)!=0; verify_all_options(); zMirror = db_get("last-git-export-repo", 0); if( zMirror==0 ){ if( bQuiet ) return; if( bByAll ) return; fossil_print("Git mirror: none\n"); return; |
︙ | ︙ | |||
1785 1786 1787 1788 1789 1790 1791 | } } z = db_text(0, "SELECT value FROM mconfig WHERE key='autopush'"); if( z==0 ){ fossil_print("Autopush: off\n"); }else{ UrlData url; | < | | < < < | 1785 1786 1787 1788 1789 1790 1791 1792 1793 1794 1795 1796 1797 1798 1799 1800 | } } z = db_text(0, "SELECT value FROM mconfig WHERE key='autopush'"); if( z==0 ){ fossil_print("Autopush: off\n"); }else{ UrlData url; url_parse_local(z, 0, &url); fossil_print("Autopush: %s\n", url.canonical); fossil_free(z); } n = db_int(0, "SELECT count(*) FROM event" " WHERE type='ci'" " AND mtime>coalesce((SELECT value FROM mconfig" " WHERE key='start'),0.0)" |
︙ | ︙ | |||
1858 1859 1860 1861 1862 1863 1864 | ** mapped into this name. "master" is used if ** this option is omitted. ** -q|--quiet Reduce output. Repeat for even less output. ** -v|--verbose More output ** ** > fossil git import MIRROR ** | | | 1854 1855 1856 1857 1858 1859 1860 1861 1862 1863 1864 1865 1866 1867 1868 | ** mapped into this name. "master" is used if ** this option is omitted. ** -q|--quiet Reduce output. Repeat for even less output. ** -v|--verbose More output ** ** > fossil git import MIRROR ** ** TBD... ** ** > fossil git status ** ** Show the status of the current Git mirror, if there is one. ** ** -q|--quiet No output if there is nothing to report */ |
︙ | ︙ |
Changes to src/file.c.
︙ | ︙ | |||
1289 1290 1291 1292 1293 1294 1295 | Blob x; if( zOrigName==0 ) return 0; blob_init(&x, 0, 0); file_canonical_name(zOrigName, &x, 0); return blob_str(&x); } | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | 1289 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 | Blob x; if( zOrigName==0 ) return 0; blob_init(&x, 0, 0); file_canonical_name(zOrigName, &x, 0); return blob_str(&x); } /* ** The input is the name of an executable, such as one might ** type on a command-line. This routine resolves that name into ** a full pathname. The result is obtained from fossil_malloc() ** and should be freed by the caller. */ char *file_fullexename(const char *zCmd){ |
︙ | ︙ | |||
2312 2313 2314 2315 2316 2317 2318 | /* ** Return non-NULL if zFilename contains pathname elements that ** are reserved on Windows. The returned string is the disallowed ** path element. */ const char *file_is_win_reserved(const char *zPath){ | | | 2243 2244 2245 2246 2247 2248 2249 2250 2251 2252 2253 2254 2255 2256 2257 | /* ** Return non-NULL if zFilename contains pathname elements that ** are reserved on Windows. The returned string is the disallowed ** path element. */ const char *file_is_win_reserved(const char *zPath){ static const char *const azRes[] = { "CON", "PRN", "AUX", "NUL", "COM", "LPT" }; static char zReturn[5]; int i; while( zPath[0] ){ for(i=0; i<count(azRes); i++){ if( sqlite3_strnicmp(zPath, azRes[i], 3)==0 && ((i>=4 && fossil_isdigit(zPath[3]) && (zPath[4]=='/' || zPath[4]=='.' || zPath[4]==0)) |
︙ | ︙ |
Changes to src/fileedit.c.
︙ | ︙ | |||
434 435 436 437 438 439 440 | ** pCI's ownership is not modified. ** ** This function validates pCI's state and fails if any validation ** fails. ** ** On error, returns false (0) and, if pErr is not NULL, writes a ** diagnostic message there. | | | 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 | ** pCI's ownership is not modified. ** ** This function validates pCI's state and fails if any validation ** fails. ** ** On error, returns false (0) and, if pErr is not NULL, writes a ** diagnostic message there. ** ** Returns true on success. If pRid is not NULL, the RID of the ** resulting manifest is written to *pRid. ** ** The check-in process is largely influenced by pCI->flags, and that ** must be populated before calling this. See the fossil_cimini_flags ** enum for the docs for each flag. */ |
︙ | ︙ | |||
571 572 573 574 575 576 577 | && blob_size(&pCI->fileContent)>0 ){ /* Convert to the requested EOL style. Note that this inherently ** runs a risk of breaking content, e.g. string literals which ** contain embedded newlines. Note that HTML5 specifies that ** form-submitted TEXTAREA content gets normalized to CRLF-style: ** | | | 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 | && blob_size(&pCI->fileContent)>0 ){ /* Convert to the requested EOL style. Note that this inherently ** runs a risk of breaking content, e.g. string literals which ** contain embedded newlines. Note that HTML5 specifies that ** form-submitted TEXTAREA content gets normalized to CRLF-style: ** ** https://html.spec.whatwg.org/multipage/form-elements.html#the-textarea-element */ const int pseudoBinary = LOOK_LONG | LOOK_NUL; const int lookFlags = LOOK_CRLF | LOOK_LONE_LF | pseudoBinary; const int lookNew = looks_like_utf8( &pCI->fileContent, lookFlags ); if(!(pseudoBinary & lookNew)){ int rehash = 0; /*fossil_print("lookNew=%08x\n",lookNew);*/ |
︙ | ︙ | |||
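The fileedit hunk above normalizes line endings because HTML5 specifies that form-submitted TEXTAREA content arrives CRLF-terminated, while the stored file may use lone LF. One direction of that conversion can be sketched as an in-place CRLF→LF rewrite (`crlf_to_lf` is illustrative only; Fossil's real conversion also handles the reverse direction and first screens out binary content via `looks_like_utf8()`):

```c
#include <stddef.h>

/* Rewrite CRLF pairs as lone LF in place.  z must hold n bytes plus a
** terminator; returns the new length.  A lone CR not followed by LF is
** left untouched. */
static size_t crlf_to_lf(char *z, size_t n){
  size_t i, j = 0;
  for(i=0; i<n; i++){
    if( z[i]=='\r' && i+1<n && z[i+1]=='\n' ) continue; /* drop the CR */
    z[j++] = z[i];
  }
  z[j] = 0;
  return j;
}
```

As the original comment warns, any such transformation inherently risks altering meaningful content (e.g. string literals containing embedded CRLF), which is why the real code guards it behind content-type heuristics.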
979 980 981 982 983 984 985 | char ** zRevUuid, int * pVid, const char * zFilename, int * frid){ char * zFileUuid = 0; /* file content UUID */ const int checkFile = zFilename!=0 || frid!=0; int vid = 0; | | | 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 | char ** zRevUuid, int * pVid, const char * zFilename, int * frid){ char * zFileUuid = 0; /* file content UUID */ const int checkFile = zFilename!=0 || frid!=0; int vid = 0; if(checkFile && !fileedit_ajax_check_filename(zFilename)){ return 0; } vid = symbolic_name_to_rid(zRev, "ci"); if(0==vid){ ajax_route_error(404,"Cannot resolve name as a check-in: %s", zRev); |
︙ | ︙ | |||
1174 1175 1176 1177 1178 1179 1180 | ** ** Intended to be used only by /filepage and /filepage_commit. */ static int fileedit_setup_cimi_from_p(CheckinMiniInfo * p, Blob * pErr, int * bIsMissingArg){ char * zFileUuid = 0; /* UUID of file content */ const char * zFlag; /* generic flag */ | | | 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 | ** ** Intended to be used only by /filepage and /filepage_commit. */ static int fileedit_setup_cimi_from_p(CheckinMiniInfo * p, Blob * pErr, int * bIsMissingArg){ char * zFileUuid = 0; /* UUID of file content */ const char * zFlag; /* generic flag */ int rc = 0, vid = 0, frid = 0; /* result code, check-in/file rids */ #define fail(EXPR) blob_appendf EXPR; goto end_fail zFlag = PD("filename",P("fn")); if(zFlag==0 || !*zFlag){ rc = 400; if(bIsMissingArg){ *bIsMissingArg = 1; |
︙ | ︙ | |||
1369 1370 1371 1372 1373 1374 1375 | if(i++){ CX(","); } CX("%!j", zFilename); } } db_finalize(&q); | | | 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380 1381 1382 1383 | if(i++){ CX(","); } CX("%!j", zFilename); } } db_finalize(&q); CX("]}"); } /* ** AJAX route /fileedit?ajax=filelist ** ** Fetches a JSON-format list of leaves and/or filenames for use in ** creating a file selection list in /fileedit. It has different modes |
︙ | ︙ | |||
1425 1426 1427 1428 1429 1430 1431 | } } /* ** AJAX route /fileedit?ajax=commit ** ** Required query parameters: | | | | 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443 1444 1445 1446 1447 1448 1449 1450 1451 1452 1453 | } } /* ** AJAX route /fileedit?ajax=commit ** ** Required query parameters: ** ** filename=FILENAME ** checkin=Parent check-in UUID ** content=text ** comment=non-empty text ** ** Optional query parameters: ** ** comment_mimetype=text (NOT currently honored) ** ** dry_run=int (1 or 0) ** ** include_manifest=int (1 or 0), whether to include ** the generated manifest in the response. ** ** ** User must have Write permissions to use this page. ** ** Responds with JSON (with some state repeated ** from the input in order to avoid certain race conditions ** client-side): ** |
︙ | ︙ | |||
1575 1576 1577 1578 1579 1580 1581 | ** use of the name parameter. ** ** Which additional parameters are used by each distinct ajax route ** is an internal implementation detail and may change with any ** given build of this code. An unknown "name" value triggers an ** error, as documented for ajax_route_error(). */ | | | 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 | ** use of the name parameter. ** ** Which additional parameters are used by each distinct ajax route ** is an internal implementation detail and may change with any ** given build of this code. An unknown "name" value triggers an ** error, as documented for ajax_route_error(). */ /* Allow no access to this page without check-in privilege */ login_check_credentials(); if( !g.perm.Write ){ if(zAjax!=0){ ajax_route_error(403, "Write permissions required."); }else{ login_needed(g.anon.Write); |
︙ | ︙ | |||
1668 1669 1670 1671 1672 1673 1674 | ** have a common, page-specific container we can filter our CSS ** selectors, but we do have the BODY, which we can decorate with ** whatever CSS we wish... */ style_script_begin(__FILE__,__LINE__); CX("document.body.classList.add('fileedit');\n"); style_script_end(); | | | 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677 1678 1679 1680 1681 1682 | ** have a common, page-specific container we can filter our CSS ** selectors, but we do have the BODY, which we can decorate with ** whatever CSS we wish... */ style_script_begin(__FILE__,__LINE__); CX("document.body.classList.add('fileedit');\n"); style_script_end(); /* Status bar */ CX("<div id='fossil-status-bar' " "title='Status message area. Double-click to clear them.'>" "Status messages will go here.</div>\n" /* will be moved into the tab container via JS */); CX("<div id='fileedit-edit-status'>" |
︙ | ︙ | |||
1696 1697 1698 1699 1700 1701 1702 | "data-tab-parent='fileedit-tabs' " "data-tab-label='File Selection' " "class='hidden'" ">"); CX("<div id='fileedit-file-selector'></div>"); CX("</div>"/*#fileedit-tab-fileselect*/); } | | < | < | 1696 1697 1698 1699 1700 1701 1702 1703 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713 1714 1715 1716 1717 1718 1719 1720 1721 1722 1723 1724 1725 1726 1727 1728 1729 1730 1731 1732 1733 1734 1735 | "data-tab-parent='fileedit-tabs' " "data-tab-label='File Selection' " "class='hidden'" ">"); CX("<div id='fileedit-file-selector'></div>"); CX("</div>"/*#fileedit-tab-fileselect*/); } /******* Content tab *******/ { CX("<div id='fileedit-tab-content' " "data-tab-parent='fileedit-tabs' " "data-tab-label='File Content' " "class='hidden'" ">"); CX("<div class='flex-container flex-row child-gap-small'>"); CX("<div class='input-with-label'>" "<button class='fileedit-content-reload confirmer' " ">Discard & Reload</button>" "<div class='help-buttonlet'>" "Reload the file from the server, discarding " "any local edits. To help avoid accidental loss of " "edits, it requires confirmation (a second click) within " "a few seconds or it will not reload." "</div>" "</div>"); style_select_list_int("select-font-size", "editor_font_size", "Editor font size", NULL/*tooltip*/, 100, "100%", 100, "125%", 125, "150%", 150, "175%", 175, "200%", 200, NULL); CX("</div>"); CX("<div class='flex-container flex-column stretch'>"); CX("<textarea name='content' id='fileedit-content-editor' " "class='fileedit' rows='25'>"); CX("</textarea>"); CX("</div>"/*textarea wrapper*/); CX("</div>"/*#tab-file-content*/); |
︙ | ︙ | |||
1937 1938 1939 1940 1941 1942 1943 | */ style_select_list_str("comment-mimetype", "comment_mimetype", "Comment style:", "Specify how fossil will interpret the " "comment string.", NULL, "Fossil", "text/x-fossil-wiki", | | | 1935 1936 1937 1938 1939 1940 1941 1942 1943 1944 1945 1946 1947 1948 1949 | */ style_select_list_str("comment-mimetype", "comment_mimetype", "Comment style:", "Specify how fossil will interpret the " "comment string.", NULL, "Fossil", "text/x-fossil-wiki", "Markdown", "text/x-markdown", "Plain text", "text/plain", NULL); CX("</div>\n"); } CX("<div class='fileedit-hint flex-container flex-row'>" "(Warning: switching from multi- to single-line mode will " "strip out all newlines!)</div>"); |
︙ | ︙ |
Changes to src/finfo.c.
︙ | ︙ | |||
565 566 567 568 569 570 571 | if( ridTo ){ zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", ridTo); zLink = href("%R/info/%!S", zUuid); blob_appendf(&title, " and check-in %z%S</a>", zLink, zUuid); fossil_free(zUuid); } }else if( ridCi ){ | | | 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 | if( ridTo ){ zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", ridTo); zLink = href("%R/info/%!S", zUuid); blob_appendf(&title, " and check-in %z%S</a>", zLink, zUuid); fossil_free(zUuid); } }else if( ridCi ){ blob_appendf(&title, "History of the file that is called "); hyperlinked_path(zFilename, &title, 0, "tree", "", LINKPATH_FILE); if( fShowId ) blob_appendf(&title, " (%d)", fnid); blob_appendf(&title, " at check-in %z%h</a>", href("%R/info?name=%t",zCI), zCI); }else{ blob_appendf(&title, "History for "); hyperlinked_path(zFilename, &title, 0, "tree", "", LINKPATH_FILE); |
︙ | ︙ |
Changes to src/foci.c.
︙ | ︙ | |||
266 267 268 269 270 271 272 | 0, /* xCommit */ 0, /* xRollback */ 0, /* xFindFunction */ 0, /* xRename */ 0, /* xSavepoint */ 0, /* xRelease */ 0, /* xRollbackTo */ | | < | 266 267 268 269 270 271 272 273 274 275 276 277 | 0, /* xCommit */ 0, /* xRollback */ 0, /* xFindFunction */ 0, /* xRename */ 0, /* xSavepoint */ 0, /* xRelease */ 0, /* xRollbackTo */ 0 /* xShadowName */ }; sqlite3_create_module(db, "files_of_checkin", &foci_module, 0); return SQLITE_OK; } |
Changes to src/fossil.page.chat.js.
1 2 | /** This file contains the client-side implementation of fossil's /chat | | < | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 | /** This file contains the client-side implementation of fossil's /chat application. */ window.fossil.onPageLoad(function(){ const F = window.fossil, D = F.dom; const E1 = function(selector){ const e = document.querySelector(selector); if(!e) throw new Error("missing required DOM element: "+selector); return e; }; /** Returns true if e is entirely within the bounds of the window's viewport. */ const isEntirelyInViewport = function(e) { const rect = e.getBoundingClientRect(); return ( rect.top >= 0 && |
︙ | ︙ | |||
63 64 65 66 67 68 69 | let dbg = document.querySelector('#debugMsg'); if(dbg){ /* This can inadvertently influence our flexbox layouts, so move it out of the way. */ D.append(document.body,dbg); } })(); | < < < < < < < < | > > > > > | | | 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 | let dbg = document.querySelector('#debugMsg'); if(dbg){ /* This can inadvertently influence our flexbox layouts, so move it out of the way. */ D.append(document.body,dbg); } })(); const ForceResizeKludge = (function(){ /* Workaround for Safari mayhem regarding use of vh CSS units.... We tried to use vh units to set the content area size for the chat layout, but Safari chokes on that, so we calculate that height here: 85% when in "normal" mode and 95% in chat-only mode. Larger than ~95% is too big for Firefox on Android, causing the input area to move off-screen. While we're here, we also use this to cap the max-height of the input field so that pasting huge text does not scroll the upper area of the input widget off-screen. */ const elemsToCount = [ document.querySelector('body > div.header'), document.querySelector('body > div.mainmenu'), document.querySelector('body > #hbdrop'), document.querySelector('body > div.footer') ]; const contentArea = E1('div.content'); const bcl = document.body.classList; const resized = function f(){ if(f.$disabled) return; const wh = window.innerHeight, com = bcl.contains('chat-only-mode'); var ht; var extra = 0; if(com){ ht = wh; }else{ elemsToCount.forEach((e)=>e ? extra += D.effectiveHeight(e) : false); ht = wh - extra; } f.chat.e.inputX.style.maxHeight = (ht/2)+"px"; /* ^^^^ this is a middle ground between having no size cap on the input field and having a fixed arbitrary cap. */; contentArea.style.height = contentArea.style.maxHeight = [ "calc(", (ht>=100 ? 
ht : 100), "px", " - 0.75em"/*fudge value*/,")" /* ^^^^ hypothetically not needed, but both Chrome/FF on Linux will force scrollbars on the body if this value is too small (<0.75em in my tests). */ ].join(''); if(false){ console.debug("resized.",wh, extra, ht, window.getComputedStyle(contentArea).maxHeight, contentArea); console.debug("Set input max height to: ", f.chat.e.inputX.style.maxHeight); |
︙ | ︙ | |||
325 326 327 328 329 330 331 | "chat-only" mode. That mode hides the page's header and footer, leaving only the chat application visible to the user. */ chatOnlyMode: function f(yes){ if(undefined === f.elemsToToggle){ f.elemsToToggle = []; | > > > > > > | | 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 | "chat-only" mode. That mode hides the page's header and footer, leaving only the chat application visible to the user. */ chatOnlyMode: function f(yes){ if(undefined === f.elemsToToggle){ f.elemsToToggle = []; document.querySelectorAll( ["body > div.header", "body > div.mainmenu", "body > div.footer", "#debugMsg" ].join(',') ).forEach((e)=>f.elemsToToggle.push(e)); } if(!arguments.length) yes = true; if(yes === this.isChatOnlyMode()) return this; if(yes){ D.addClass(f.elemsToToggle, 'hidden'); D.addClass(document.body, 'chat-only-mode'); document.body.scroll(0,document.body.height); |
︙ | ︙ | |||
392 393 394 395 396 397 398 | ctrl-enter both send them. */ "edit-ctrl-send": false, /* When on, the edit field starts as a single line and expands as the user types, and the relevant buttons are laid out in a compact form. When off, the edit field and buttons are larger. */ "edit-compact-mode": true, | < < < < < | 394 395 396 397 398 399 400 401 402 403 404 405 406 407 | ctrl-enter both send them. */ "edit-ctrl-send": false, /* When on, the edit field starts as a single line and expands as the user types, and the relevant buttons are laid out in a compact form. When off, the edit field and buttons are larger. */ "edit-compact-mode": true, /* When on, sets the font-family on messages and the edit field to monospace. */ "monospace-messages": false, /* When on, non-chat UI elements (page header/footer) are hidden */ "chat-only-mode": false, /* When set to a URI, it is assumed to be an audio file, |
︙ | ︙ | |||
1500 1501 1502 1503 1504 1505 1506 | /* Shift-enter will run preview mode UNLESS preview mode is active AND the input field is empty, in which case it will switch back to message view. */ if(Chat.e.currentView===Chat.e.viewPreview && !text){ Chat.setCurrentView(Chat.e.viewMessages); }else if(!text){ f.$toggleCompact(compactMode); | | | | | 1497 1498 1499 1500 1501 1502 1503 1504 1505 1506 1507 1508 1509 1510 1511 1512 1513 1514 1515 1516 1517 1518 1519 1520 1521 1522 1523 1524 1525 1526 1527 1528 1529 1530 1531 1532 1533 1534 1535 1536 1537 1538 1539 1540 1541 1542 1543 1544 1545 | /* Shift-enter will run preview mode UNLESS preview mode is active AND the input field is empty, in which case it will switch back to message view. */ if(Chat.e.currentView===Chat.e.viewPreview && !text){ Chat.setCurrentView(Chat.e.viewMessages); }else if(!text){ f.$toggleCompact(compactMode); }else{ Chat.e.btnPreview.click(); } return false; } if(ev.ctrlKey && !text && !BlobXferState.blob){ /* Ctrl-enter on empty input field(s) toggles Enter/Ctrl-enter mode */ ev.preventDefault(); ev.stopPropagation(); f.$toggleCtrl(ctrlMode); return false; } if(!ctrlMode && ev.ctrlKey && text){ //console.debug("!ctrlMode && ev.ctrlKey && text."); /* Ctrl-enter in Enter-sends mode SHOULD, with this logic, add a newline, but that is not happening, for unknown reasons (possibly related to this element being a contenteditable DIV instead of a textarea). Forcibly appending a newline to the input area does not work, also for unknown reasons, and would only be suitable when we're at the end of the input. Strangely, this approach DOES work for shift-enter, but we need shift-enter as a hotkey for preview mode. */ //return; // return here "should" cause newline to be added, but that doesn't work } if((!ctrlMode && !ev.ctrlKey) || (ev.ctrlKey/* && ctrlMode*/)){ /* Ship it!
*/ ev.preventDefault(); ev.stopPropagation(); Chat.submitMessage(); return false; } }; Chat.e.inputFields.forEach( (e)=>e.addEventListener('keydown', inputWidgetKeydown, false) ); Chat.e.btnSubmit.addEventListener('click',(e)=>{ e.preventDefault(); Chat.submitMessage(); return false; |
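The Enter-key branches above form a small decision table driven by the Enter/Ctrl-Enter setting, the Ctrl modifier, and whether any text is present. A hedged sketch of just that dispatch (the function name and return values are invented for illustration):

```javascript
/* Decide what an Enter keypress does, mirroring the handler above:
   - Ctrl-enter on an empty field flips the Enter/Ctrl-Enter send mode;
   - plain Enter in Enter-sends mode, or Ctrl-enter in either mode, sends;
   - plain Enter in Ctrl-Enter mode falls through (newline). */
function enterAction(ctrlSendMode, ctrlKey, hasText){
  if(ctrlKey && !hasText) return "toggle-mode";
  if((!ctrlSendMode && !ctrlKey) || ctrlKey) return "send";
  return "fallthrough";
}
```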
︙ | ︙ | |||
1668 1669 1670 1671 1672 1673 1674 | boolValue: 'edit-widget-x', hint: [ "When enabled, chat input uses a so-called 'contenteditable' ", "field. Though generally more comfortable and modern than ", "plain-text input fields, browser-specific quirks and bugs ", "may lead to frustration. Ideal for mobile devices." ].join('') | < < < < < < < | | 1665 1666 1667 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677 1678 1679 1680 1681 1682 1683 1684 1685 | boolValue: 'edit-widget-x', hint: [ "When enabled, chat input uses a so-called 'contenteditable' ", "field. Though generally more comfortable and modern than ", "plain-text input fields, browser-specific quirks and bugs ", "may lead to frustration. Ideal for mobile devices." ].join('') }] },{ label: "Appearance Options...", children:[{ label: "Left-align my posts", hint: "Default alignment of your own messages is selected " + "based window width/height ratio.", boolValue: ()=>!document.body.classList.contains('my-messages-right'), callback: function f(){ document.body.classList[ this.checkbox.checked ? 'remove' : 'add' ]('my-messages-right'); } },{ |
︙ | ︙ | |||
1973 1974 1975 1976 1977 1978 1979 | D.enable(elemsToEnable); } }); return false; }; btnPreview.addEventListener('click', submit, false); })()/*message preview setup*/; | | | 1963 1964 1965 1966 1967 1968 1969 1970 1971 1972 1973 1974 1975 1976 1977 | D.enable(elemsToEnable); } }); return false; }; btnPreview.addEventListener('click', submit, false); })()/*message preview setup*/; /** Callback for poll() to inject new content into the page. jx == the response from /chat-poll. If atEnd is true, the message is appended to the end of the chat list (for loading older messages), else the beginning (the default). */ const newcontent = function f(jx,atEnd){ if(!f.processPost){ /** Processes chat message m, placing it either at the start (if atEnd
︙ | ︙ |
Changes to src/fossil.page.fileedit.js.
︙ | ︙ | |||
68 69 70 71 72 73 74 | ); */ const E = (s)=>document.querySelector(s), D = F.dom, P = F.page; P.config = { | | < < < < < < | 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 | ); */ const E = (s)=>document.querySelector(s), D = F.dom, P = F.page; P.config = { defaultMaxStashSize: 7 }; /** $stash is an internal-use-only object for managing "stashed" local edits, to help prevent users from accidentally losing content by switching tabs or following links or some such. The basic theory of operation is...
︙ | ︙ | |||
574 575 576 577 578 579 580 | opt._finfo = finfo; if(0===f.compare(currentFinfo, finfo)){ D.attr(opt, 'selected', true); } }); } }/*P.stashWidget*/; | | | 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 | opt._finfo = finfo; if(0===f.compare(currentFinfo, finfo)){ D.attr(opt, 'selected', true); } }); } }/*P.stashWidget*/; /** Internal workaround to select the current preview mode and fire a change event if the value actually changes or if forceEvent is truthy. */ P.selectPreviewMode = function(modeValue, forceEvent){ const s = this.e.selectPreviewMode; |
︙ | ︙ | |||
728 729 730 731 732 733 734 | } } ); //////////////////////////////////////////////////////////// // Trigger preview on Ctrl-Enter. This only works on the built-in // editor widget, not a client-provided one. P.e.taEditor.addEventListener('keydown',function(ev){ | | | 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 | } } ); //////////////////////////////////////////////////////////// // Trigger preview on Ctrl-Enter. This only works on the built-in // editor widget, not a client-provided one. P.e.taEditor.addEventListener('keydown',function(ev){ if(ev.shiftKey && 13 === ev.keyCode){ ev.preventDefault(); ev.stopPropagation(); P.e.taEditor.blur(/*force change event, if needed*/); P.tabs.switchToTab(P.e.tabs.preview); if(!P.e.cbAutoPreview.checked){/* If NOT in auto-preview mode, trigger an update. */ P.preview(); } |
︙ | ︙ | |||
849 850 851 852 853 854 855 | } ); P.fileSelectWidget.init(); P.stashWidget.init( P.e.tabs.content.lastElementChild ); | < < < < < < < | 843 844 845 846 847 848 849 850 851 852 853 854 855 856 | } ); P.fileSelectWidget.init(); P.stashWidget.init( P.e.tabs.content.lastElementChild ); }/*F.onPageLoad()*/); /** Getter (if called with no args) or setter (if passed an arg) for the current file content. The setter form sets the content, dispatches a |
︙ | ︙ | |||
1172 1173 1174 1175 1176 1177 1178 | const target = this.e.previewTarget; D.clearElement(target); if('string'===typeof c) D.parseHtml(target,c); if(F.pikchr){ F.pikchr.addSrcView(target.querySelectorAll('svg.pikchr')); } }; | | | 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 | const target = this.e.previewTarget; D.clearElement(target); if('string'===typeof c) D.parseHtml(target,c); if(F.pikchr){ F.pikchr.addSrcView(target.querySelectorAll('svg.pikchr')); } }; /** Callback for use with F.connectPagePreviewers() */ P._postPreview = function(content,callback){ if(!affirmHasFile()) return this; if(!content){ callback(content); |
︙ | ︙ |
Changes to src/fossil.page.pikchrshow.js.
︙ | ︙ | |||
314 315 316 317 318 319 320 | \u00a0 to , so...*/.split(' ').join('\u00a0')); if(needsPreview) P.preview(); else{ /*If it's from the server, it's already rendered, but this gets all labels/headers in sync.*/ P.renderPreview(); } | | | 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 | \u00a0 to , so...*/.split(' ').join('\u00a0')); if(needsPreview) P.preview(); else{ /*If it's from the server, it's already rendered, but this gets all labels/headers in sync.*/ P.renderPreview(); } } }/*F.onPageLoad()*/); /** Updates the preview view based on the current preview mode and error state. */ P.renderPreview = function f(){ |
︙ | ︙ |
Changes to src/fossil.page.pikchrshowasm.js.
︙ | ︙ | |||
390 391 392 393 394 395 396 | const val = ev.target.value; if(!val) return; setCurrentText(val); }, false); }/*Examples*/ /** | | | 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 | const val = ev.target.value; if(!val) return; setCurrentText(val); }, false); }/*Examples*/ /** TODO: Handle load/import of an external pikchr file. */ if(0) E('#load-pikchr').addEventListener('change',function(){ const f = this.files[0]; const r = new FileReader(); const status = {loaded: 0, total: 0}; this.setAttribute('disabled','disabled'); const that = this; |
︙ | ︙ | |||
477 478 479 480 481 482 483 | that height here. Larger than ~95% is too big for Firefox on Android, causing the input area to move off-screen. */ const appViews = EAll('.app-view'); const elemsToCount = [ /* Elements which we need to always count in the visible body size. */ | | | | | 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 | that height here. Larger than ~95% is too big for Firefox on Android, causing the input area to move off-screen. */ const appViews = EAll('.app-view'); const elemsToCount = [ /* Elements which we need to always count in the visible body size. */ E('body > div.header'), E('body > div.mainmenu'), E('body > div.footer') ]; const resized = function f(){ if(f.$disabled) return; const wh = window.innerHeight; var ht; var extra = 0; elemsToCount.forEach((e)=>e ? extra += F.dom.effectiveHeight(e) : false); |
︙ | ︙ |
Changes to src/fossil.page.whistory.js.
1 2 3 4 5 6 7 8 9 10 11 12 13 | /* This script adds interactivity for wiki-history webpages. * * The main code is within the 'on-click' handler of the "diff" links. * Instead of standard redirection it fills in two hidden inputs with * the appropriate values and submits the corresponding form. * Special care should be taken if some intermediate edits are hidden. * * For the sake of compatibility with ascetic browsers the code tries * to avoid modern APIs and ECMAScript constructs. This makes it less * readable and may be reconsidered in the future. */ window.addEventListener( 'load', function() { | | < < < | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 | document.getElementById("wh-form").method = "GET"; var wh_id = document.getElementById("wh-id" ); var wh_pid = document.getElementById("wh-pid"); var wh_cleaner = document.getElementById("wh-cleaner"); var wh_collapser = document.getElementById("wh-collapser"); var wh_radios = []; // user-visible controls for baseline selection
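Since the form above is submitted with method GET, a "diff" click effectively builds a query string from the two hidden inputs. A sketch of that construction, assuming the inputs are named `id` and `pid` (only their DOM ids `wh-id`/`wh-pid` appear above, so the parameter names are an assumption):

```javascript
/* Sketch: the GET query a "diff" click would produce once the two hidden
   inputs are filled in. Parameter names "id" and "pid" are assumed here. */
function diffQuery(id, pid){
  const q = new URLSearchParams();
  q.set('id', id);
  if(pid) q.set('pid', pid);  /* baseline may be absent for the first edit */
  return '?' + q.toString();
}
```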
︙ | ︙ |
Changes to src/fossil.page.wikiedit.js.
︙ | ︙ | |||
73 74 75 76 77 78 79 | useConfirmerButtons:{ /* If true during fossil.page setup, certain buttons will use a "confirmer" step, else they will not. The confirmer topic has been the source of much contention in the forum. */ save: false, reload: true, discardStash: true | < < < < < < < | < < < < < < | 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 | useConfirmerButtons:{ /* If true during fossil.page setup, certain buttons will use a "confirmer" step, else they will not. The confirmer topic has been the source of much contention in the forum. */ save: false, reload: true, discardStash: true } }; /** $stash is an internal-use-only object for managing "stashed" local edits, to help prevent users from accidentally losing content by switching tabs or following links or some such. The basic theory of operation is...
︙ | ︙ | |||
465 466 467 468 469 470 471 | opt.dataset.isDeleted = true; } self._refreshStashMarks(opt); }); D.enable(sel); if(P.winfo) sel.value = P.winfo.name; }, | | | 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 | opt.dataset.isDeleted = true; } self._refreshStashMarks(opt); }); D.enable(sel); if(P.winfo) sel.value = P.winfo.name; }, /** Loads the page list and populates the selection list. */ loadList: function callee(){ if(!callee.onload){ const self = this; callee.onload = function(list){ self.cache.pageList = list; self._rebuildList(); |
︙ | ︙ | |||
662 663 664 665 666 667 668 | }, false); D.append( parentElem, D.append(D.addClass(D.div(), 'fieldset-wrapper'), fsFilter, fsNewPage, fsLegend) ); | | | 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 | }, false); D.append( parentElem, D.append(D.addClass(D.div(), 'fieldset-wrapper'), fsFilter, fsNewPage, fsLegend) ); D.append(parentElem, btn); btn.addEventListener('click', ()=>this.loadList(), false); this.loadList(); const onSelect = (e)=>P.loadPage(e.target.value); sel.addEventListener('change', onSelect, false); sel.addEventListener('dblclick', onSelect, false); F.page.addEventListener('wiki-stash-updated', ()=>{ |
︙ | ︙ | |||
685 686 687 688 689 690 691 | if(page.isEmpty) opt.dataset.isDeleted = true; else delete opt.dataset.isDeleted; self._refreshStashMarks(opt); }else if('sandbox'!==page.type){ F.error("BUG: internal mis-handling of page object: missing OPTION for page "+page.name); } }); | < < < < < < < < > | 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 | if(page.isEmpty) opt.dataset.isDeleted = true; else delete opt.dataset.isDeleted; self._refreshStashMarks(opt); }else if('sandbox'!==page.type){ F.error("BUG: internal mis-handling of page object: missing OPTION for page "+page.name); } }); delete this.init; } }; /** Widget for listing and selecting $stash entries. */ P.stashWidget = { e:{/*DOM element(s)*/}, |
︙ | ︙ | |||
932 933 934 935 936 937 938 | } } ); //////////////////////////////////////////////////////////// // Trigger preview on Ctrl-Enter. This only works on the built-in // editor widget, not a client-provided one. P.e.taEditor.addEventListener('keydown',function(ev){ | | | 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 | } } ); //////////////////////////////////////////////////////////// // Trigger preview on Ctrl-Enter. This only works on the built-in // editor widget, not a client-provided one. P.e.taEditor.addEventListener('keydown',function(ev){ if(ev.shiftKey && 13 === ev.keyCode){ ev.preventDefault(); ev.stopPropagation(); P.e.taEditor.blur(/*force change event, if needed*/); P.tabs.switchToTab(P.e.tabs.preview); if(!P.e.cbAutoPreview.checked){/* If NOT in auto-preview mode, trigger an update. */ P.preview(); } |
︙ | ︙ | |||
1479 1480 1481 1482 1483 1484 1485 | const target = this.e.previewTarget; D.clearElement(target); if('string'===typeof c) D.parseHtml(target,c); if(F.pikchr){ F.pikchr.addSrcView(target.querySelectorAll('svg.pikchr')); } }; | | | 1459 1460 1461 1462 1463 1464 1465 1466 1467 1468 1469 1470 1471 1472 1473 | const target = this.e.previewTarget; D.clearElement(target); if('string'===typeof c) D.parseHtml(target,c); if(F.pikchr){ F.pikchr.addSrcView(target.querySelectorAll('svg.pikchr')); } }; /** Callback for use with F.connectPagePreviewers() */ P._postPreview = function(content,callback){ if(!affirmPageLoaded()) return this; if(!content){ callback(content); |
︙ | ︙ |
Changes to src/graph.c.
︙ | ︙ | |||
309 310 311 312 313 314 315 | dist = i - iNearto; if( dist<0 ) dist = -dist; if( dist<iBestDist ){ iBestDist = dist; iBest = i; } } | | | 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 | dist = i - iNearto; if( dist<0 ) dist = -dist; if( dist<iBestDist ){ iBestDist = dist; iBest = i; } } /* If no match, consider all possible rails */ if( iBestDist>1000 ){ for(i=0; i<=p->mxRail+1; i++){ int dist; if( inUseMask & BIT(i) ) continue; if( iNearto<=0 ){ iBest = i; |
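The fallback above scans every rail, skips those whose bit is set in the in-use mask, and keeps the free rail closest to `iNearto` (or the first free rail when there is no preference). The same search restated as a standalone sketch, with `BIT(i)` as `1 << i`:

```javascript
/* Find the free rail closest to iNearto; rails whose bit is set in
   inUseMask are taken. Mirrors the C fallback loop above. Returns -1
   if every rail in 0..mxRail+1 is in use. */
function findNearestFreeRail(inUseMask, mxRail, iNearto){
  let iBest = -1, iBestDist = Infinity;
  for(let i = 0; i <= mxRail + 1; i++){
    if(inUseMask & (1 << i)) continue;
    if(iNearto <= 0){ iBest = i; break; } /* no preference: first free rail */
    const dist = Math.abs(i - iNearto);
    if(dist < iBestDist){ iBestDist = dist; iBest = i; }
  }
  return iBest;
}
```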
︙ | ︙ | |||
537 538 539 540 541 542 543 | ** the aParent[] array. */ if( (tmFlags & (TIMELINE_DISJOINT|TIMELINE_XMERGE))!=0 ){ for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ for(i=1; i<pRow->nParent; i++){ GraphRow *pParent = hashFind(p, pRow->aParent[i]); if( pParent==0 ){ | | | | 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 | ** the aParent[] array. */ if( (tmFlags & (TIMELINE_DISJOINT|TIMELINE_XMERGE))!=0 ){ for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ for(i=1; i<pRow->nParent; i++){ GraphRow *pParent = hashFind(p, pRow->aParent[i]); if( pParent==0 ){ memmove(pRow->aParent+i, pRow->aParent+i+1, sizeof(pRow->aParent[0])*(pRow->nParent-i-1)); pRow->nParent--; if( i<pRow->nNonCherrypick ){ pRow->nNonCherrypick--; }else{ pRow->nCherrypick--; } i--; } } } } /* Put the deepest (earliest) merge parent first in the list. ** An off-screen merge parent is considered deepest. */ for(pRow=p->pFirst; pRow; pRow=pRow->pNext ){ if( pRow->nParent<=1 ) continue; for(i=1; i<pRow->nParent; i++){ GraphRow *pParent = hashFind(p, pRow->aParent[i]); |
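The `memmove` above deletes one entry from the middle of `aParent` and decrements whichever of the two parent counts covered it. In a language with dynamic arrays the same pruning is a `splice`; a sketch in which the `known` set stands in for the `hashFind` lookup:

```javascript
/* Drop parent hashes (beyond the primary parent at index 0) that resolve
   to no known row, keeping the cherrypick/non-cherrypick counts in sync.
   The splice has the same effect as the memmove in the C code above. */
function pruneParents(row, known){
  for(let i = 1; i < row.aParent.length; i++){
    if(known.has(row.aParent[i])) continue;
    row.aParent.splice(i, 1);
    if(i < row.nNonCherrypick) row.nNonCherrypick--;
    else row.nCherrypick--;
    i--;  /* re-examine this index, as in the C loop */
  }
  return row;
}
```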
︙ | ︙ | |||
938 939 940 941 942 943 944 | /* The parent branch from which this branch emerges is on the ** same rail as pRow. Do not shift as that would stack a child ** branch directly above its parent. */ continue; } /* All clear. Make the translation | | | 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 | /* The parent branch from which this branch emerges is on the ** same rail as pRow. Do not shift as that would stack a child ** branch directly above its parent. */ continue; } /* All clear. Make the translation */ for(pLoop=pRow; pLoop && pLoop->idx<=pBottom->idx; pLoop=pLoop->pNext){ if( pLoop->iRail==iFrom ){ pLoop->iRail = iTo; pLoop->aiRiser[iTo] = pLoop->aiRiser[iFrom]; pLoop->aiRiser[iFrom] = -1; } } |
︙ | ︙ |
Changes to src/graph.js.
︙ | ︙ | |||
132 133 134 135 136 137 138 | function hideGraphTooltip(){ /* Hide the tooltip */ document.removeEventListener('keydown',onKeyDown,/* useCapture == */true); stopCloseTimer(); tooltipObj.style.display = "none"; tooltipInfo.ixActive = -1; tooltipInfo.idNodeActive = 0; } | | | 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 | function hideGraphTooltip(){ /* Hide the tooltip */ document.removeEventListener('keydown',onKeyDown,/* useCapture == */true); stopCloseTimer(); tooltipObj.style.display = "none"; tooltipInfo.ixActive = -1; tooltipInfo.idNodeActive = 0; } document.body.onunload = hideGraphTooltip function stopDwellTimer(){ if(tooltipInfo.idTimer!=0){ clearTimeout(tooltipInfo.idTimer); tooltipInfo.idTimer = 0; } } function resumeCloseTimer(){ |
︙ | ︙ |
Changes to src/hbmenu.js.
︙ | ︙ | |||
19 20 21 22 23 24 25 | ** ** This was originally the "js.txt" file for the default skin. It was subsequently ** moved into src/hbmenu.js so that it could be more easily reused by other skins ** using the "builtin_request_js" TH1 command. ** ** Operation: ** | | | 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 | ** ** This script requires that the HTML contain two elements: ** ** <a id="hbbtn"> <--- The hamburger menu button ** <div id="hbdrop"> <--- Container for the hamburger menu ** ** Bindings are made on hbbtn so that when it is clicked, the following ** happens: ** ** 1. An XHR is made to /sitemap?popup to fetch the HTML for the ** popup menu. **
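Step 1 above is a fetch-once pattern: the first click loads the popup markup from /sitemap?popup, and later clicks reuse it. A sketch of that memoization with the network call injected, so it is a claim about the pattern, not about Fossil's actual implementation:

```javascript
/* Sketch: fetch-once menu loader. `fetchFn` stands in for the XHR to
   /sitemap?popup; the result is cached so later clicks skip the network. */
function makeMenuLoader(fetchFn){
  let cached;      /* undefined until the first load */
  let calls = 0;   /* how many times the network was actually hit */
  return function load(){
    if(cached === undefined){ calls++; cached = fetchFn(); }
    return {html: cached, fetchCount: calls};
  };
}
```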
︙ | ︙ |
Changes to src/hook.c.
︙ | ︙ | |||
230 231 232 233 234 235 236 | ** ** > fossil hook test [OPTIONS] ID ** ** Run the hook script given by ID for testing purposes. ** Options: ** ** --dry-run Print the script on stdout rather than run it | | | 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 | ** ** > fossil hook test [OPTIONS] ID ** ** Run the hook script given by ID for testing purposes. ** Options: ** ** --dry-run Print the script on stdout rather than run it ** --base-rcvid N Pretend that the hook-last-rcvid value is N ** --new-rcvid M Pretend that the last rcvid value is M ** --aux-file NAME NAME is substituted for %A in the script ** ** The --base-rcvid and --new-rcvid options are silently ignored if ** the hook type is not "after-receive". The default values for ** --base-rcvid and --new-rcvid cause the last receive to be processed. */
︙ | ︙ |
Changes to src/http.c.
︙ | ︙ | |||
45 46 47 48 49 50 51 | /* Maximum number of HTTP Authorization attempts */ #define MAX_HTTP_AUTH 2 /* Keep track of HTTP Basic Authorization failures */ static int fSeenHttpAuth = 0; | < < < < < | 45 46 47 48 49 50 51 52 53 54 55 56 57 58 | /* Maximum number of HTTP Authorization attempts */ #define MAX_HTTP_AUTH 2 /* Keep track of HTTP Basic Authorization failures */ static int fSeenHttpAuth = 0; /* ** Construct the "login" card with the client credentials. ** ** login LOGIN NONCE SIGNATURE ** ** The LOGIN is the user id of the client. NONCE is the sha1 checksum ** of all payload that follows the login card. SIGNATURE is the sha1 |
︙ | ︙ | |||
101 102 103 104 105 106 107 | ** sha1_shared_secret()), not the original password. So convert the ** password to its SHA1 encoding if it isn't already a SHA1 hash. ** ** We assume that a hexadecimal string of exactly 40 characters is a ** SHA1 hash, not an original password. If a user has a password which ** just happens to be a 40-character hex string, then this routine won't ** be able to distinguish it from a hash, the translation will not be | | | 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 | ** sha1_shared_secret()), not the original password. So convert the ** password to its SHA1 encoding if it isn't already a SHA1 hash. ** ** We assume that a hexadecimal string of exactly 40 characters is a ** SHA1 hash, not an original password. If a user has a password which ** just happens to be a 40-character hex string, then this routine won't ** be able to distinguish it from a hash, the translation will not be ** performed, and the sync won't work. */ if( zPw && zPw[0] && (strlen(zPw)!=40 || !validate16(zPw,40)) ){ const char *zProjectCode = 0; if( g.url.flags & URL_USE_PARENT ){ zProjectCode = db_get("parent-project-code", 0); }else{ zProjectCode = db_get("project-code", 0); |
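The caveat above hinges on one heuristic: a string of exactly 40 hex characters is assumed to already be a SHA1 hash and is passed through untranslated. That check, stated on its own:

```javascript
/* Mirror of the strlen(zPw)==40 && validate16(zPw,40) test above:
   exactly 40 hex digits is treated as an already-hashed password. */
function looksLikeSha1(pw){
  return typeof pw === 'string' && /^[0-9a-fA-F]{40}$/.test(pw);
}
```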
︙ | ︙ | |||
261 262 263 264 265 266 267 | blob_write_to_file(pSend, zUplink); if( g.fHttpTrace ){ fossil_print("RUN %s\n", zCmd); } rc = fossil_system(zCmd); if( rc ){ fossil_warning("Transport command failed: %s\n", zCmd); | | | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 | blob_write_to_file(pSend, zUplink); if( g.fHttpTrace ){ fossil_print("RUN %s\n", zCmd); } rc = fossil_system(zCmd); if( rc ){ fossil_warning("Transport command failed: %s\n", zCmd); } fossil_free(zCmd); file_delete(zUplink); if( file_size(zDownlink, ExtFILE)<0 ){ blob_zero(pReply); }else{ blob_read_from_file(pReply, zDownlink, ExtFILE); file_delete(zDownlink); } return rc; } /* ** Sign the content in pSend, compress it, and send it to the server ** via HTTP or HTTPS. Get a reply, uncompress the reply, and store the reply ** in pRecv. pRecv is assumed to be uninitialized when ** this routine is called - this routine will initialize it. |
︙ | ︙ | |||
434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 | const char *zAltMimetype /* Alternative mimetype if not NULL */ ){ Blob login; /* The login card */ Blob payload; /* The complete payload including login card */ Blob hdr; /* The HTTP request header */ int closeConnection; /* True to close the connection when done */ int iLength; /* Expected length of the reply payload */ int rc = 0; /* Result code */ int iHttpVersion; /* Which version of HTTP protocol server uses */ char *zLine; /* A single line of the reply header */ int i; /* Loop counter */ int isError = 0; /* True if the reply is an error message */ int isCompressed = 1; /* True if the reply is compressed */ if( g.zHttpCmd!=0 ){ /* Handle the --transport-command option for "fossil sync" and similar */ return http_exchange_external(pSend,pReply,mHttpFlags,zAltMimetype); } | > < < < < < < < < < < | 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 | const char *zAltMimetype /* Alternative mimetype if not NULL */ ){ Blob login; /* The login card */ Blob payload; /* The complete payload including login card */ Blob hdr; /* The HTTP request header */ int closeConnection; /* True to close the connection when done */ int iLength; /* Expected length of the reply payload */ int iRecvLen; /* Received length of the reply payload */ int rc = 0; /* Result code */ int iHttpVersion; /* Which version of HTTP protocol server uses */ char *zLine; /* A single line of the reply header */ int i; /* Loop counter */ int isError = 0; /* True if the reply is an error message */ int isCompressed = 1; /* True if the reply is compressed */ if( g.zHttpCmd!=0 ){ /* Handle the --transport-command option for "fossil sync" and similar */ return http_exchange_external(pSend,pReply,mHttpFlags,zAltMimetype); } if( transport_open(&g.url) ){ fossil_warning("%s", transport_errmsg(&g.url)); return 1; } /* Construct the login card and prepare the complete payload */ 
if( blob_size(pSend)==0 ){ |
︙ | ︙ | |||
486 487 488 489 490 491 492 493 494 495 496 497 498 499 | /* When tracing, write the transmitted HTTP message both to standard ** output and into a file. The file can then be used to drive the ** server-side like this: ** ** ./fossil test-http <http-request-1.txt */ if( g.fHttpTrace ){ char *zOutFile; FILE *out; traceCnt++; zOutFile = mprintf("http-request-%d.txt", traceCnt); out = fopen(zOutFile, "wb"); if( out ){ fwrite(blob_buffer(&hdr), 1, blob_size(&hdr), out); | > | 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 | /* When tracing, write the transmitted HTTP message both to standard ** output and into a file. The file can then be used to drive the ** server-side like this: ** ** ./fossil test-http <http-request-1.txt */ if( g.fHttpTrace ){ static int traceCnt = 0; char *zOutFile; FILE *out; traceCnt++; zOutFile = mprintf("http-request-%d.txt", traceCnt); out = fopen(zOutFile, "wb"); if( out ){ fwrite(blob_buffer(&hdr), 1, blob_size(&hdr), out); |
︙ | ︙ | |||
522 523 524 525 526 527 528 | transport_flip(&g.url); /* ** Read and interpret the server reply */ closeConnection = 1; iLength = -1; | < | 370 371 372 373 374 375 376 377 378 379 380 381 382 383 | transport_flip(&g.url); /* ** Read and interpret the server reply */ closeConnection = 1; iLength = -1; while( (zLine = transport_receive_line(&g.url))!=0 && zLine[0]!=0 ){ if( mHttpFlags & HTTP_VERBOSE ){ fossil_print("Read: [%s]\n", zLine); } if( fossil_strnicmp(zLine, "http/1.", 7)==0 ){ if( sscanf(zLine, "HTTP/1.%d %d", &iHttpVersion, &rc)!=2 ) goto write_err; if( rc==401 ){ |
︙ | ︙ | |||
561 562 563 564 565 566 567 | if( rc!=200 && rc!=301 && rc!=302 && rc!=307 && rc!=308 ){ int ii; for(ii=7; zLine[ii] && zLine[ii]!=' '; ii++){} while( zLine[ii]==' ' ) ii++; fossil_warning("server says: %s", &zLine[ii]); goto write_err; } | < | 408 409 410 411 412 413 414 415 416 417 418 419 420 421 | if( rc!=200 && rc!=301 && rc!=302 && rc!=307 && rc!=308 ){ int ii; for(ii=7; zLine[ii] && zLine[ii]!=' '; ii++){} while( zLine[ii]==' ' ) ii++; fossil_warning("server says: %s", &zLine[ii]); goto write_err; } closeConnection = 0; }else if( fossil_strnicmp(zLine, "content-length:", 15)==0 ){ for(i=15; fossil_isspace(zLine[i]); i++){} iLength = atoi(&zLine[i]); }else if( fossil_strnicmp(zLine, "connection:", 11)==0 ){ char c; for(i=11; fossil_isspace(zLine[i]); i++){} |
︙ | ︙ | |||
635 636 637 638 639 640 641 | if( mHttpFlags & HTTP_NOCOMPRESS ) isCompressed = 0; }else if( fossil_strnicmp(&zLine[14], "application/x-fossil", -1)!=0 ){ isError = 1; } } } } | < < < < < < < < < < < < < < < < < < < < | < < < < < < < < < < < | < | < < < < < < | | < < < | | < | | < < < < < < < | < < < < < < < < < < < | 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 | if( mHttpFlags & HTTP_NOCOMPRESS ) isCompressed = 0; }else if( fossil_strnicmp(&zLine[14], "application/x-fossil", -1)!=0 ){ isError = 1; } } } } if( iLength<0 ){ fossil_warning("server did not reply"); goto write_err; } if( rc!=200 ){ fossil_warning("\"location:\" missing from %d redirect reply", rc); goto write_err; } /* ** Extract the reply payload that follows the header */ blob_zero(pReply); blob_resize(pReply, iLength); iRecvLen = transport_receive(&g.url, blob_buffer(pReply), iLength); if( iRecvLen != iLength ){ fossil_warning("response truncated: got %d bytes of %d", iRecvLen, iLength); goto write_err; } blob_resize(pReply, iLength); if( isError ){ char *z; int i, j; z = blob_str(pReply); for(i=j=0; z[i]; i++, j++){ if( z[i]=='<' ){ while( z[i] && z[i]!='>' ) i++; |
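The reply-header loop above matches field names case-insensitively (via `fossil_strnicmp`) and records the content length and connection disposition. A simplified sketch of that scan (the real code also handles status lines, redirects, and compression):

```javascript
/* Sketch of the reply-header scan: case-insensitive field matching, with
   contentLength defaulting to -1 ("server did not reply") as above. */
function parseReplyHeaders(lines){
  const r = {contentLength: -1, closeConnection: true, contentType: null};
  for(const line of lines){
    const m = /^([^:]+):\s*(.*)$/.exec(line);
    if(!m) continue;
    const name = m[1].toLowerCase(), val = m[2];
    if(name === 'content-length') r.contentLength = parseInt(val, 10);
    else if(name === 'connection') r.closeConnection = /^close$/i.test(val);
    else if(name === 'content-type') r.contentType = val;
  }
  return r;
}
```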
︙ | ︙ |
Changes to src/http_ssl.c.
︙ | ︙ | |||
57 58 59 60 61 62 63 | } sException; static int sslNoCertVerify = 0; /* Do not verify SSL certs */ /* This is a self-signed cert in the PEM format that can be used when ** no other certs are available. */ | | | 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 | } sException; static int sslNoCertVerify = 0; /* Do not verify SSL certs */ /* This is a self-signed cert in the PEM format that can be used when ** no other certs are available. */ static const char sslSelfCert[] = "-----BEGIN CERTIFICATE-----\n" "MIIDMTCCAhkCFGrDmuJkkzWERP/ITBvzwwI2lv0TMA0GCSqGSIb3DQEBCwUAMFQx\n" "CzAJBgNVBAYTAlVTMQswCQYDVQQIDAJOQzESMBAGA1UEBwwJQ2hhcmxvdHRlMRMw\n" "EQYDVQQKDApGb3NzaWwtU0NNMQ8wDQYDVQQDDAZGb3NzaWwwIBcNMjExMjI3MTEz\n" "MTU2WhgPMjEyMTEyMjcxMTMxNTZaMFQxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJO\n" "QzESMBAGA1UEBwwJQ2hhcmxvdHRlMRMwEQYDVQQKDApGb3NzaWwtU0NNMQ8wDQYD\n" "VQQDDAZGb3NzaWwwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCCbTU2\n" |
︙ | ︙ | |||
81 82 83 84 85 86 87 | "G6wxc4kN9dLK+5S29q3nzl24/qzXoF8P9Re5KBCbrwaHgy+OEEceq5jkmfGFxXjw\n" "pvVCNry5uAhH5NqbXZampUWqiWtM4eTaIPo7Y2mDA1uWhuWtO6F9PsnFJlQHCnwy\n" "s/TsrXk=\n" "-----END CERTIFICATE-----\n"; /* This is the private-key corresponding to the cert above */ | | | 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 | "G6wxc4kN9dLK+5S29q3nzl24/qzXoF8P9Re5KBCbrwaHgy+OEEceq5jkmfGFxXjw\n" "pvVCNry5uAhH5NqbXZampUWqiWtM4eTaIPo7Y2mDA1uWhuWtO6F9PsnFJlQHCnwy\n" "s/TsrXk=\n" "-----END CERTIFICATE-----\n"; /* This is the private-key corresponding to the cert above */ static const char sslSelfPKey[] = "-----BEGIN PRIVATE KEY-----\n" "MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQCCbTU26GRQHQqL\n" "q7vyZ0OxpAxmgfAKCxt6eIz+jBi2ZM/CB5vVXWVh2+SkSiWEA3UZiUqXxZlzmS/C\n" "glZdiwLLDJML8B4OiV72oivFH/vJ7+cbvh1dTxnYiHuww7GfQngPrLfefiIYPDk1\n" "GTUJHBQ7Ue477F7F8vKuHdVgwktF/JDM6M60aSqlo2D/oysirrb+dlurTlv0rjsY\n" "Ofq6bLAajoL3qi/vek6DNssoywbge4PfbTgS9g7Gcgncbcet5pvaS12JavhFcd4J\n" "U4Ity49Hl9S/C2MfZ1tE53xVggRwKz4FPj65M5uymTdcxtjKXtCxIE1kKxJxXQh7\n" |
︙ | ︙ | |||
204 205 206 207 208 209 210 | "or the ssl-identity setting."); return 0; /* no cert available */ } /* ** Convert an OpenSSL ASN1_TIME to an ISO8601 timestamp. ** | | | 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 | "or the ssl-identity setting."); return 0; /* no cert available */ } /* ** Convert an OpenSSL ASN1_TIME to an ISO8601 timestamp. ** ** Per RFC 5280, ASN1 timestamps in X.509 certificates must ** be in UTC (Zulu timezone) with no fractional seconds. ** ** If showUtc==1, add " UTC" at the end of the returned string. This is ** not ISO8601-compliant, but makes the displayed value more user-friendly. */ static const char *ssl_asn1time_to_iso8601(ASN1_TIME *asn1_time, int showUtc){ |
︙ | ︙ | |||
410 411 412 413 414 415 416 | ** Invoke this routine to disable SSL cert verification. After ** this call is made, any SSL cert that the server provides will ** be accepted. Communication will still be encrypted, but the ** client has no way of knowing whether it is talking to the ** real server or a man-in-the-middle imposter. */ void ssl_disable_cert_verification(void){ | | | 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 | ** Invoke this routine to disable SSL cert verification. After ** this call is made, any SSL cert that the server provides will ** be accepted. Communication will still be encrypted, but the ** client has no way of knowing whether it is talking to the ** real server or a man-in-the-middle imposter. */ void ssl_disable_cert_verification(void){ sslNoCertVerify = 1; } /* ** Open an SSL connection as a client that is to connect to the server ** identified by pUrlData. ** * The identify of the server is determined as follows: |
︙ | ︙ | |||
563 564 565 566 567 568 569 | X509_NAME_print_ex(mem, X509_get_issuer_name(cert), 0, XN_FLAG_ONELINE); BIO_printf(mem, "\n notBefore: %s", ssl_asn1time_to_iso8601(X509_get_notBefore(cert), 1)); BIO_printf(mem, "\n notAfter: %s", ssl_asn1time_to_iso8601(X509_get_notAfter(cert), 1)); BIO_printf(mem, "\n sha256: %s", zHash); desclen = BIO_get_mem_data(mem, &desc); | | | | 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 | X509_NAME_print_ex(mem, X509_get_issuer_name(cert), 0, XN_FLAG_ONELINE); BIO_printf(mem, "\n notBefore: %s", ssl_asn1time_to_iso8601(X509_get_notBefore(cert), 1)); BIO_printf(mem, "\n notAfter: %s", ssl_asn1time_to_iso8601(X509_get_notAfter(cert), 1)); BIO_printf(mem, "\n sha256: %s", zHash); desclen = BIO_get_mem_data(mem, &desc); prompt = mprintf("Unable to verify SSL cert from %s\n%.*s\n" "accept this cert and continue (y/N/fingerprint)? ", pUrlData->name, desclen, desc); BIO_free(mem); prompt_user(prompt, &ans); free(prompt); cReply = blob_str(&ans)[0]; if( cReply!='y' && cReply!='Y' && fossil_stricmp(blob_str(&ans),zHash)!=0 ){ X509_free(cert); |
︙ | ︙ | |||
1183 1184 1185 1186 1187 1188 1189 | /* ** Return the OpenSSL version number being used. Space to hold ** this name is obtained from fossil_malloc() and should be ** freed by the caller. */ char *fossil_openssl_version(void){ | | | 1183 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 | /* ** Return the OpenSSL version number being used. Space to hold ** this name is obtained from fossil_malloc() and should be ** freed by the caller. */ char *fossil_openssl_version(void){ #if defined(FOSSIL_ENABLE_SSL) return mprintf("%s (0x%09x)\n", SSLeay_version(SSLEAY_VERSION), OPENSSL_VERSION_NUMBER); #else return mprintf("none"); #endif } |
Changes to src/http_transport.c.
︙ | ︙ | |||
129 130 131 132 133 134 135 | if( pUrlData->user && pUrlData->user[0] ){ zHost = mprintf("%s@%s", pUrlData->user, pUrlData->name); blob_append_escaped_arg(&zCmd, zHost, 0); fossil_free(zHost); }else{ blob_append_escaped_arg(&zCmd, pUrlData->name, 0); } | < | < < < < < < | 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 | if( pUrlData->user && pUrlData->user[0] ){ zHost = mprintf("%s@%s", pUrlData->user, pUrlData->name); blob_append_escaped_arg(&zCmd, zHost, 0); fossil_free(zHost); }else{ blob_append_escaped_arg(&zCmd, pUrlData->name, 0); } if( !is_safe_fossil_command(pUrlData->fossil) ){ fossil_fatal("the ssh:// URL is asking to run an unsafe command [%s] on " "the server.", pUrlData->fossil); } blob_append_escaped_arg(&zCmd, pUrlData->fossil, 1); blob_append(&zCmd, " test-http", 10); if( pUrlData->path && pUrlData->path[0] ){ blob_append_escaped_arg(&zCmd, pUrlData->path, 1); }else{ fossil_fatal("ssh:// URI does not specify a path to the repository"); } |
︙ | ︙ | |||
317 318 319 320 321 322 323 | /* ** Read N bytes of content directly from the wire and write into ** the buffer. */ static int transport_fetch(UrlData *pUrlData, char *zBuf, int N){ int got; | | | 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 | /* ** Read N bytes of content directly from the wire and write into ** the buffer. */ static int transport_fetch(UrlData *pUrlData, char *zBuf, int N){ int got; if( sshIn ){ int x; int wanted = N; got = 0; while( wanted>0 ){ x = read(sshIn, &zBuf[got], wanted); if( x<=0 ) break; got += x; |
︙ | ︙ |
Changes to src/import.c.
︙ | ︙ | |||
815 816 817 818 819 820 821 | gg.fromLoaded = 1; }else if( strncmp(zLine, "N ", 2)==0 ){ /* No-op */ }else if( strncmp(zLine, "property branch-nick ", 21)==0 ){ /* Breezy uses this property to store the branch name. | | | 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 | gg.fromLoaded = 1; }else if( strncmp(zLine, "N ", 2)==0 ){ /* No-op */ }else if( strncmp(zLine, "property branch-nick ", 21)==0 ){ /* Breezy uses this property to store the branch name. ** It has two values. Integer branch number, then the ** user-readable branch name. */ z = &zLine[21]; next_token(&z); fossil_free(gg.zBranch); gg.zBranch = fossil_strdup(next_token(&z)); }else if( strncmp(zLine, "property rebase-of ", 19)==0 ){ |
︙ | ︙ |
Changes to src/info.c.
︙ | ︙ | |||
1222 1223 1224 1225 1226 1227 1228 | } } pTo = vdiff_parse_manifest("to", &ridTo); if( pTo==0 ) return; pFrom = vdiff_parse_manifest("from", &ridFrom); if( pFrom==0 ) return; zGlob = P("glob"); | < < < < < < < | | | 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 | } } pTo = vdiff_parse_manifest("to", &ridTo); if( pTo==0 ) return; pFrom = vdiff_parse_manifest("from", &ridFrom); if( pFrom==0 ) return; zGlob = P("glob"); zFrom = P_NoBot("from"); zTo = P_NoBot("to"); if( bInvert ){ Manifest *pTemp = pTo; const char *zTemp = zTo; pTo = pFrom; pFrom = pTemp; zTo = zFrom; zFrom = zTemp; |
︙ | ︙ | |||
2507 2508 2509 2510 2511 2512 2513 | ) ){ if( P("ci")==0 ) cgi_set_query_parameter("ci","tip"); page_tree(); return; } /* No directory found, look for an historic version of the file ** that was subsequently deleted. */ | | | 2500 2501 2502 2503 2504 2505 2506 2507 2508 2509 2510 2511 2512 2513 2514 | ) ){ if( P("ci")==0 ) cgi_set_query_parameter("ci","tip"); page_tree(); return; } /* No directory found, look for an historic version of the file ** that was subsequently deleted. */ db_prepare(&q, "SELECT fid, uuid FROM mlink, filename, event, blob" " WHERE filename.name=%Q" " AND mlink.fnid=filename.fnid AND mlink.fid>0" " AND event.objid=mlink.mid" " AND blob.rid=mlink.mid" " ORDER BY event.mtime DESC", zName |
︙ | ︙ | |||
2805 2806 2807 2808 2809 2810 2811 | } } if( strcmp(zModAction,"approve")==0 ){ moderation_approve('t', rid); } } zTktTitle = db_table_has_column("repository", "ticket", "title" ) | | | 2798 2799 2800 2801 2802 2803 2804 2805 2806 2807 2808 2809 2810 2811 2812 | } } if( strcmp(zModAction,"approve")==0 ){ moderation_approve('t', rid); } } zTktTitle = db_table_has_column("repository", "ticket", "title" ) ? db_text("(No title)", "SELECT title FROM ticket WHERE tkt_uuid=%Q", zTktName) : 0; style_set_current_feature("tinfo"); style_header("Ticket Change Details"); style_submenu_element("Raw", "%R/artifact/%s", zUuid); style_submenu_element("History", "%R/tkthistory/%s#%S", zTktName,zUuid); style_submenu_element("Page", "%R/tktview/%t", zTktName); |
︙ | ︙ | |||
3552 3553 3554 3555 3556 3557 3558 3559 3560 3561 3562 3563 3564 3565 | Blob ctrl; Blob comment; char *zNow; int nTags, nCancels; int i; Stmt q; fEditComment = find_option("edit-comment","e",0)!=0; zNewComment = find_option("comment","m",1); zComFile = find_option("message-file","M",1); zNewBranch = find_option("branch",0,1); zNewColor = find_option("bgcolor",0,1); zNewBrColor = find_option("branchcolor",0,1); if( zNewBrColor ){ | > | 3545 3546 3547 3548 3549 3550 3551 3552 3553 3554 3555 3556 3557 3558 3559 | Blob ctrl; Blob comment; char *zNow; int nTags, nCancels; int i; Stmt q; if( g.argc==3 ) usage(AMEND_USAGE_STMT); fEditComment = find_option("edit-comment","e",0)!=0; zNewComment = find_option("comment","m",1); zComFile = find_option("message-file","M",1); zNewBranch = find_option("branch",0,1); zNewColor = find_option("bgcolor",0,1); zNewBrColor = find_option("branchcolor",0,1); if( zNewBrColor ){ |
︙ | ︙ | |||
3837 3838 3839 3840 3841 3842 3843 | ** If no VERSION is provided, describe the currently checked-out version. ** ** If VERSION and the found ancestor refer to the same commit, the last two ** components are omitted, unless --long is provided. When no fitting tagged ** ancestor is found, show only the short hash of VERSION. ** ** Options: | | | 3831 3832 3833 3834 3835 3836 3837 3838 3839 3840 3841 3842 3843 3844 3845 | ** If no VERSION is provided, describe the currently checked-out version. ** ** If VERSION and the found ancestor refer to the same commit, the last two ** components are omitted, unless --long is provided. When no fitting tagged ** ancestor is found, show only the short hash of VERSION. ** ** Options: ** --digits Display so many hex digits of the hash ** (default: the larger of 6 and the 'hash-digit' setting) ** -d|--dirty Show whether there are changes to be committed ** --long Always show all three components ** --match GLOB Consider only non-propagating tags matching GLOB */ void describe_cmd(void){ const char *zName; |
︙ | ︙ |
Changes to src/interwiki.c.
︙ | ︙ | |||
43 44 45 46 47 48 49 | ** ** { ** "base": Base URL for the remote site. ** "hash": Append this to "base" for Hash targets. ** "wiki": Append this to "base" for Wiki targets. ** } ** | | | | 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 | ** ** { ** "base": Base URL for the remote site. ** "hash": Append this to "base" for Hash targets. ** "wiki": Append this to "base" for Wiki targets. ** } ** ** If the remote wiki is Fossil, then the correct value for "hash" ** is "/info/" and the correct value for "wiki" is "/wiki?name=". ** If (for example) Wikipedia is the remote, then "hash" should be ** omitted and the correct value for "wiki" is "/wiki/". ** ** PageName is link name of the target wiki. Several different forms ** of PageName are recognized. ** ** Path If PageName is empty or begins with a "/" character, then ** it is a pathname that is appended to "base". ** |
︙ | ︙ | |||
80 81 82 83 84 85 86 | static Stmt q; for(i=0; fossil_isalnum(zTarget[i]); i++){} if( zTarget[i]!=':' ) return 0; nCode = i; if( nCode==4 && strncmp(zTarget,"wiki",4)==0 ) return 0; zPage = zTarget + nCode + 1; nPage = (int)strlen(zPage); | | | 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 | static Stmt q; for(i=0; fossil_isalnum(zTarget[i]); i++){} if( zTarget[i]!=':' ) return 0; nCode = i; if( nCode==4 && strncmp(zTarget,"wiki",4)==0 ) return 0; zPage = zTarget + nCode + 1; nPage = (int)strlen(zPage); db_static_prepare(&q, "SELECT value->>'base', value->>'hash', value->>'wiki'" " FROM config WHERE name=lower($name) AND json_valid(value)" ); zName = mprintf("interwiki:%.*s", nCode, zTarget); db_bind_text(&q, "$name", zName); while( db_step(&q)==SQLITE_ROW ){ const char *zBase = db_column_text(&q,0); |
︙ | ︙ | |||
220 221 222 223 224 225 226 | verify_all_options(); if( g.argc<4 ) usage("delete ID ..."); db_begin_write(); db_unprotect(PROTECT_CONFIG); for(i=3; i<g.argc; i++){ const char *zName = g.argv[i]; db_multi_exec( | | | 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 | verify_all_options(); if( g.argc<4 ) usage("delete ID ..."); db_begin_write(); db_unprotect(PROTECT_CONFIG); for(i=3; i<g.argc; i++){ const char *zName = g.argv[i]; db_multi_exec( "DELETE FROM config WHERE name='interwiki:%q'", zName ); } setup_incr_cfgcnt(); db_protect_pop(); db_commit_transaction(); }else |
︙ | ︙ |
Changes to src/json.c.
︙ | ︙ | |||
21 22 23 24 25 26 27 | ** The JSON API's public interface is documented at: ** ** https://fossil-scm.org/fossil/doc/trunk/www/json-api/index.md ** ** Notes for hackers... ** ** Here's how command/page dispatching works: json_page_top() (in HTTP mode) or | | | < | < | 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 | ** The JSON API's public interface is documented at: ** ** https://fossil-scm.org/fossil/doc/trunk/www/json-api/index.md ** ** Notes for hackers... ** ** Here's how command/page dispatching works: json_page_top() (in HTTP mode) or ** json_cmd_top() (in CLI mode) catch the "json" path/command. Those functions then ** dispatch to a JSON-mode-specific command/page handler with the type fossil_json_f(). ** See the API docs for that typedef (below) for the semantics of the callbacks. ** ** */ #include "VERSION.h" #include "config.h" #include "json.h" #include <assert.h> #include <time.h> #if INTERFACE #include "json_detail.h" /* workaround for apparent enum limitation in makeheaders */ #endif const FossilJsonKeys_ FossilJsonKeys = { "anonymousSeed" /*anonymousSeed*/, "authToken" /*authToken*/, "COMMAND_PATH" /*commandPath*/, "mtime" /*mtime*/, |
︙ | ︙ | |||
176 177 178 179 180 181 182 | return 0; } /* ** Convenience wrapper around cson_output() which appends the output ** to pDest. pOpt may be NULL, in which case g.json.outOpt will be used. */ | | < | 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 | return 0; } /* ** Convenience wrapper around cson_output() which appends the output ** to pDest. pOpt may be NULL, in which case g.json.outOpt will be used. */ int cson_output_Blob( cson_value const * pVal, Blob * pDest, cson_output_opt const * pOpt ){ return cson_output( pVal, cson_data_dest_Blob, pDest, pOpt ? pOpt : &g.json.outOpt ); } /* ** Convenience wrapper around cson_parse() which reads its input ** from pSrc. pSrc is rewound before parsing. |
︙ | ︙ | |||
708 709 710 711 712 713 714 | login_cookie_name(), there is(?) a potential(?) login hijacking window here. We may need to change the JSON auth token to be in the form: login_cookie_name()=... Then again, the hardened cookie value helps ensure that only a proper key/value match is valid. */ | | < | 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 | login_cookie_name(), there is(?) a potential(?) login hijacking window here. We may need to change the JSON auth token to be in the form: login_cookie_name()=... Then again, the hardened cookie value helps ensure that only a proper key/value match is valid. */ cgi_replace_parameter( login_cookie_name(), cson_value_get_cstr(g.json.authToken) ); }else if( g.isHTTP ){ /* try fossil's conventional cookie. */ /* Reminder: chicken/egg scenario regarding db access in CLI mode because login_cookie_name() needs the db. CLI mode does not use any authentication, so we don't need to support it here. */ |
︙ | ︙ | |||
906 907 908 909 910 911 912 | assert( head != p ); zPart = (char*)fossil_malloc(len+1); memcpy(zPart, head, len); zPart[len] = 0; if(doDeHttp){ dehttpize(zPart); } | < | | 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 | assert( head != p ); zPart = (char*)fossil_malloc(len+1); memcpy(zPart, head, len); zPart[len] = 0; if(doDeHttp){ dehttpize(zPart); } if( *zPart ){ /* should only fail if someone manages to url-encoded a NUL byte */ part = cson_value_new_string(zPart, strlen(zPart)); if( 0 != cson_array_append( target, part ) ){ cson_value_free(part); rc = -rc; break; } }else{ |
︙ | ︙ | |||
1089 1090 1091 1092 1093 1094 1095 | break; } /* g.json.reqPayload exists only to simplify some of our access to the request payload. We currently only use this in the context of Object payloads, not Arrays, strings, etc. */ | | | 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 | break; } /* g.json.reqPayload exists only to simplify some of our access to the request payload. We currently only use this in the context of Object payloads, not Arrays, strings, etc. */ g.json.reqPayload.v = cson_object_get( g.json.post.o, FossilJsonKeys.payload ); if( g.json.reqPayload.v ){ g.json.reqPayload.o = cson_value_get_object( g.json.reqPayload.v ) /* g.json.reqPayload.o may legally be NULL, which means only that g.json.reqPayload.v is-not-a Object. */; } |
︙ | ︙ | |||
1118 1119 1120 1121 1122 1123 1124 | } if(!g.json.jsonp){ g.json.jsonp = json_find_option_cstr("jsonp",NULL,NULL); } if(!g.isHTTP){ | | | 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 | } if(!g.json.jsonp){ g.json.jsonp = json_find_option_cstr("jsonp",NULL,NULL); } if(!g.isHTTP){ g.json.errorDetailParanoia = 0 /*disable error code dumb-down for CLI mode*/; } {/* set up JSON output formatting options. */ int indent = -1; indent = json_find_option_int("indent",NULL,"I",-1); g.json.outOpt.indentation = (0>indent) ? (g.isHTTP ? 0 : 1) |
︙ | ︙ | |||
1169 1170 1171 1172 1173 1174 1175 | ** Note that CLI options are not included in the command path. Use ** find_option() to get those. ** */ char const * json_command_arg(unsigned short ndx){ cson_array * ar = g.json.cmd.a; assert((NULL!=ar) && "Internal error. Was json_bootstrap_late() called?"); | | | 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 | ** Note that CLI options are not included in the command path. Use ** find_option() to get those. ** */ char const * json_command_arg(unsigned short ndx){ cson_array * ar = g.json.cmd.a; assert((NULL!=ar) && "Internal error. Was json_bootstrap_late() called?"); assert((g.argc>1) && "Internal error - we never should have gotten this far."); if( g.json.cmd.offset < 0 ){ /* first-time setup. */ short i = 0; #define NEXT cson_string_cstr( \ cson_value_get_string( \ cson_array_get(ar,i) \ )) |
︙ | ︙ | |||
1195 1196 1197 1198 1199 1200 1201 | } } #undef NEXT if(g.json.cmd.offset < 0){ return NULL; }else{ ndx = g.json.cmd.offset + ndx; | | < | < | 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 | } } #undef NEXT if(g.json.cmd.offset < 0){ return NULL; }else{ ndx = g.json.cmd.offset + ndx; return cson_string_cstr(cson_value_get_string(cson_array_get( ar, g.json.cmd.offset + ndx ))); } } /* Returns the C-string form of json_auth_token(), or NULL ** if json_auth_token() returns NULL. */ char const * json_auth_token_cstr(){ return cson_value_get_cstr( json_auth_token() ); } /* ** Returns the JsonPageDef with the given name, or NULL if no match is ** found. ** ** head must be a pointer to an array of JsonPageDefs in which the ** last entry has a NULL name. */ JsonPageDef const * json_handler_for_name( char const * name, JsonPageDef const * head ){ JsonPageDef const * pageDef = head; assert( head != NULL ); if(name && *name) for( ; pageDef->name; ++pageDef ){ if( 0 == strcmp(name, pageDef->name) ){ return pageDef; } } |
︙ | ︙ | |||
1297 1298 1299 1300 1301 1302 1303 | */ static cson_value * json_response_command_path(){ if(!g.json.cmd.a){ return NULL; }else{ cson_value * rc = NULL; Blob path = empty_blob; | | < | < | 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 1303 1304 1305 1306 1307 | */ static cson_value * json_response_command_path(){ if(!g.json.cmd.a){ return NULL; }else{ cson_value * rc = NULL; Blob path = empty_blob; unsigned int aLen = g.json.dispatchDepth+1; /*cson_array_length_get(g.json.cmd.a);*/ unsigned int i = 1; for( ; i < aLen; ++i ){ char const * part = cson_string_cstr(cson_value_get_string(cson_array_get(g.json.cmd.a, i))); if(!part){ #if 1 fossil_warning("Iterating further than expected in %s.", __FILE__); #endif break; } |
︙ | ︙ | |||
1336 1337 1338 1339 1340 1341 1342 | */ cson_value * json_g_to_json(){ cson_object * o = NULL; cson_object * pay = NULL; pay = o = cson_new_object(); #define INT(OBJ,K) cson_object_set(o, #K, json_new_int(OBJ.K)) | | < | 1327 1328 1329 1330 1331 1332 1333 1334 1335 1336 1337 1338 1339 1340 1341 | */ cson_value * json_g_to_json(){ cson_object * o = NULL; cson_object * pay = NULL; pay = o = cson_new_object(); #define INT(OBJ,K) cson_object_set(o, #K, json_new_int(OBJ.K)) #define CSTR(OBJ,K) cson_object_set(o, #K, OBJ.K ? json_new_string(OBJ.K) : cson_value_null()) #define VAL(K,V) cson_object_set(o, #K, (V) ? (V) : cson_value_null()) VAL(capabilities, json_cap_value()); INT(g, argc); INT(g, isConst); CSTR(g, zConfigDbName); INT(g, repositoryOpen); INT(g, localOpen); |
︙ | ︙ | |||
1821 1822 1823 1824 1825 1826 1827 | cson_string * kDesc; cson_array_reserve( list, 35 ); kRC = cson_new_string("resultCode",10); kSymbol = cson_new_string("cSymbol",7); kNumber = cson_new_string("number",6); kDesc = cson_new_string("description",11); #define C(K) obj = cson_new_object(); \ | | | | | < | 1811 1812 1813 1814 1815 1816 1817 1818 1819 1820 1821 1822 1823 1824 1825 1826 1827 1828 | cson_string * kDesc; cson_array_reserve( list, 35 ); kRC = cson_new_string("resultCode",10); kSymbol = cson_new_string("cSymbol",7); kNumber = cson_new_string("number",6); kDesc = cson_new_string("description",11); #define C(K) obj = cson_new_object(); \ cson_object_set_s(obj, kRC, json_new_string(json_rc_cstr(FSL_JSON_E_##K)) ); \ cson_object_set_s(obj, kSymbol, json_new_string("FSL_JSON_E_"#K) ); \ cson_object_set_s(obj, kNumber, cson_value_new_integer(FSL_JSON_E_##K) ); \ cson_object_set_s(obj, kDesc, json_new_string(json_err_cstr(FSL_JSON_E_##K))); \ cson_array_append( list, cson_object_value(obj) ); obj = NULL; C(GENERIC); C(INVALID_REQUEST); C(UNKNOWN_COMMAND); C(UNKNOWN); C(TIMEOUT); |
︙ | ︙ | |||
2015 2016 2017 2018 2019 2020 2021 | if( !g.perm.Read ){ json_set_err(FSL_JSON_E_DENIED, "Requires 'o' permissions."); return NULL; } full = json_find_option_bool("full",NULL,"f", json_find_option_bool("verbose",NULL,"v",0)); | | < | | 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 2025 2026 2027 2028 2029 2030 2031 | if( !g.perm.Read ){ json_set_err(FSL_JSON_E_DENIED, "Requires 'o' permissions."); return NULL; } full = json_find_option_bool("full",NULL,"f", json_find_option_bool("verbose",NULL,"v",0)); #define SETBUF(O,K) cson_object_set(O, K, cson_value_new_string(zBuf, strlen(zBuf))); jv = cson_value_new_object(); jo = cson_value_get_object(jv); zTmp = db_get("project-name",NULL); cson_object_set(jo, "projectName", json_new_string(zTmp)); fossil_free(zTmp); zTmp = db_get("project-description",NULL); cson_object_set(jo, "projectDescription", json_new_string(zTmp)); fossil_free(zTmp); zTmp = NULL; fsize = file_size(g.zRepositoryName, ExtFILE); cson_object_set(jo, "repositorySize", cson_value_new_integer((cson_int_t)fsize)); if(full){ n = db_int(0, "SELECT count(*) FROM blob"); m = db_int(0, "SELECT count(*) FROM delta"); cson_object_set(jo, "blobCount", cson_value_new_integer((cson_int_t)n)); cson_object_set(jo, "deltaCount", cson_value_new_integer((cson_int_t)m)); |
︙ | ︙ | |||
2078 2079 2080 2081 2082 2083 2084 | }/*full*/ n = db_int(0, "SELECT julianday('now') - (SELECT min(mtime) FROM event)" " + 0.99"); cson_object_set(jo, "ageDays", cson_value_new_integer((cson_int_t)n)); cson_object_set(jo, "ageYears", cson_value_new_double(n/365.2425)); sqlite3_snprintf(BufLen, zBuf, db_get("project-code","")); SETBUF(jo, "projectCode"); | | < | | | < | < | < | | < | < | 2066 2067 2068 2069 2070 2071 2072 2073 2074 2075 2076 2077 2078 2079 2080 2081 2082 2083 2084 2085 2086 2087 2088 2089 2090 2091 2092 2093 2094 | }/*full*/ n = db_int(0, "SELECT julianday('now') - (SELECT min(mtime) FROM event)" " + 0.99"); cson_object_set(jo, "ageDays", cson_value_new_integer((cson_int_t)n)); cson_object_set(jo, "ageYears", cson_value_new_double(n/365.2425)); sqlite3_snprintf(BufLen, zBuf, db_get("project-code","")); SETBUF(jo, "projectCode"); cson_object_set(jo, "compiler", cson_value_new_string(COMPILER_NAME, strlen(COMPILER_NAME))); jv2 = cson_value_new_object(); jo2 = cson_value_get_object(jv2); cson_object_set(jo, "sqlite", jv2); sqlite3_snprintf(BufLen, zBuf, "%.19s [%.10s] (%s)", sqlite3_sourceid(), &sqlite3_sourceid()[20], sqlite3_libversion()); SETBUF(jo2, "version"); cson_object_set(jo2, "pageCount", cson_value_new_integer((cson_int_t)db_int(0, "PRAGMA repository.page_count"))); cson_object_set(jo2, "pageSize", cson_value_new_integer((cson_int_t)db_int(0, "PRAGMA repository.page_size"))); cson_object_set(jo2, "freeList", cson_value_new_integer((cson_int_t)db_int(0, "PRAGMA repository.freelist_count"))); sqlite3_snprintf(BufLen, zBuf, "%s", db_text(0, "PRAGMA repository.encoding")); SETBUF(jo2, "encoding"); sqlite3_snprintf(BufLen, zBuf, "%s", db_text(0, "PRAGMA repository.journal_mode")); cson_object_set(jo2, "journalMode", *zBuf ? cson_value_new_string(zBuf, strlen(zBuf)) : cson_value_null()); return jv; #undef SETBUF } |
︙ | ︙ | |||
2253 2254 2255 2256 2257 2258 2259 | cson_value * json_page_status(void); /* ** Mapping of names to JSON pages/commands. Each name is a subpath of ** /json (in CGI mode) or a subcommand of the json command in CLI mode */ static const JsonPageDef JsonPageDefs[] = { | | < | 2235 2236 2237 2238 2239 2240 2241 2242 2243 2244 2245 2246 2247 2248 2249 | cson_value * json_page_status(void); /* ** Mapping of names to JSON pages/commands. Each name is a subpath of ** /json (in CGI mode) or a subcommand of the json command in CLI mode */ static const JsonPageDef JsonPageDefs[] = { /* please keep alphabetically sorted (case-insensitive) for maintenance reasons. */ {"anonymousPassword", json_page_anon_password, 0}, {"artifact", json_page_artifact, 0}, {"branch", json_page_branch,0}, {"cap", json_page_cap, 0}, {"config", json_page_config, 0 }, {"diff", json_page_diff, 0}, {"dir", json_page_dir, 0}, |
︙ | ︙ |
Changes to src/json_artifact.c.
︙ | ︙ | |||
209 210 211 212 213 214 215 | } /* ** Sub-impl of /json/artifact for check-ins. */ static cson_value * json_artifact_ci( cson_object * zParent, int rid ){ if(!g.perm.Read){ | | < | 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 | } /* ** Sub-impl of /json/artifact for check-ins. */ static cson_value * json_artifact_ci( cson_object * zParent, int rid ){ if(!g.perm.Read){ json_set_err( FSL_JSON_E_DENIED, "Viewing check-ins requires 'o' privileges." ); return NULL; }else{ cson_value * artV = json_artifact_for_ci(rid, 1); cson_object * art = cson_value_get_object(artV); if(art){ cson_object_merge( zParent, art, CSON_MERGE_REPLACE ); cson_free_object(art); |
︙ | ︙ | |||
249 250 251 252 253 254 255 | ** if either the includeContent (HTTP) or -content|-c boolean flags ** (CLI) are set. */ static int json_artifact_get_content_format_flag(void){ enum { MagicValue = -9 }; int contentFormat = json_wiki_get_content_format_flag(MagicValue); if(MagicValue == contentFormat){ | | < | | 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 | ** if either the includeContent (HTTP) or -content|-c boolean flags ** (CLI) are set. */ static int json_artifact_get_content_format_flag(void){ enum { MagicValue = -9 }; int contentFormat = json_wiki_get_content_format_flag(MagicValue); if(MagicValue == contentFormat){ contentFormat = json_find_option_bool("includeContent","content","c",0) /* deprecated */ ? -1 : 0; } return contentFormat; } extern int json_wiki_get_content_format_flag( int defaultValue ) /* json_wiki.c */; cson_value * json_artifact_wiki(cson_object * zParent, int rid){ if( ! g.perm.RdWiki ){ json_set_err(FSL_JSON_E_DENIED, "Requires 'j' privileges."); return NULL; }else{ |
︙ | ︙ | |||
380 381 382 383 384 385 386 | ); /* TODO: add a "state" flag for the file in each check-in, e.g. "modified", "new", "deleted". */ checkin_arr = cson_new_array(); cson_object_set(pay, "checkins", cson_array_value(checkin_arr)); while( (SQLITE_ROW==db_step(&q) ) ){ | | < | | | 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 | ); /* TODO: add a "state" flag for the file in each check-in, e.g. "modified", "new", "deleted". */ checkin_arr = cson_new_array(); cson_object_set(pay, "checkins", cson_array_value(checkin_arr)); while( (SQLITE_ROW==db_step(&q) ) ){ cson_object * row = cson_value_get_object(cson_sqlite3_row_to_object(q.pStmt)); /* FIXME: move this isNew/isDel stuff into an SQL CASE statement. */ char const isNew = cson_value_get_bool(cson_object_get(row,"isNew")); char const isDel = cson_value_get_bool(cson_object_get(row,"isDel")); cson_object_set(row, "isNew", NULL); cson_object_set(row, "isDel", NULL); cson_object_set(row, "state", json_new_string(json_artifact_status_to_string(isNew, isDel))); cson_array_append( checkin_arr, cson_object_value(row) ); } db_finalize(&q); return cson_object_value(pay); } /* |
︙ | ︙ |
Changes to src/json_branch.c.
︙ | ︙ | |||
200 201 202 203 204 205 206 | Manifest *pParent; /* Parsed parent manifest */ Blob mcksum; /* Self-checksum on the manifest */ int bAutoColor = 0; /* Value of "--bgcolor" is "auto" */ if( fossil_strncmp(zColor, "auto", 4)==0 ) { bAutoColor = 1; zColor = 0; | | | 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 | Manifest *pParent; /* Parsed parent manifest */ Blob mcksum; /* Self-checksum on the manifest */ int bAutoColor = 0; /* Value of "--bgcolor" is "auto" */ if( fossil_strncmp(zColor, "auto", 4)==0 ) { bAutoColor = 1; zColor = 0; } /* fossil branch new name */ if( zBranch==0 || zBranch[0]==0 ){ zOpt->rcErrMsg = "Branch name may not be null/empty."; return FSL_JSON_E_INVALID_ARGS; } if( db_exists( "SELECT 1 FROM tagxref" |
︙ | ︙ | |||
333 334 335 336 337 338 339 | } if(!opt.zName){ opt.zName = json_command_arg(g.json.dispatchDepth+1); } if(!opt.zName){ | | < | 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 | } if(!opt.zName){ opt.zName = json_command_arg(g.json.dispatchDepth+1); } if(!opt.zName){ json_set_err(FSL_JSON_E_MISSING_ARGS, "'name' parameter was not specified." ); return NULL; } opt.zColor = json_find_option_cstr("bgColor","bgcolor",NULL); opt.zBasis = json_find_option_cstr("basis",NULL,NULL); if(!opt.zBasis && !g.isHTTP){ opt.zBasis = json_command_arg(g.json.dispatchDepth+2); |
︙ | ︙ |
Changes to src/json_config.c.
︙ | ︙ | |||
255 256 257 258 259 260 261 | } for(i=0; i<nSetting; ++i){ const Setting *pSet = &aSetting[i]; cson_object * jSet; cson_value * pVal = 0, * pSrc = 0; jSet = cson_new_object(); cson_object_set(pay, pSet->name, cson_object_value(jSet)); | | | 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 | } for(i=0; i<nSetting; ++i){ const Setting *pSet = &aSetting[i]; cson_object * jSet; cson_value * pVal = 0, * pSrc = 0; jSet = cson_new_object(); cson_object_set(pay, pSet->name, cson_object_value(jSet)); cson_object_set(jSet, "versionable", cson_value_new_bool(pSet->versionable)); cson_object_set(jSet, "sensitive", cson_value_new_bool(pSet->sensitive)); cson_object_set(jSet, "defaultValue", (pSet->def && pSet->def[0]) ? json_new_string(pSet->def) : cson_value_null()); if( 0==pSet->sensitive || 0!=g.perm.Setup ){ if( pSet->versionable ){ /* Check to see if this is overridden by a versionable |
︙ | ︙ | |||
290 291 292 293 294 295 296 | Blob versionedPathname; blob_zero(&versionedPathname); blob_appendf(&versionedPathname, "%s.fossil-settings/%s", g.zLocalRoot, pSet->name); if( file_size(blob_str(&versionedPathname), ExtFILE)>=0 ){ Blob content; blob_zero(&content); | | | 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 | Blob versionedPathname; blob_zero(&versionedPathname); blob_appendf(&versionedPathname, "%s.fossil-settings/%s", g.zLocalRoot, pSet->name); if( file_size(blob_str(&versionedPathname), ExtFILE)>=0 ){ Blob content; blob_zero(&content); blob_read_from_file(&content, blob_str(&versionedPathname), ExtFILE); pSrc = json_new_string("versioned"); pVal = json_new_string(blob_str(&content)); blob_reset(&content); } blob_reset(&versionedPathname); } } |
︙ | ︙ |
Changes to src/json_finfo.c.
︙ | ︙ | |||
34 35 36 37 38 39 40 | Blob sql = empty_blob; Stmt q = empty_Stmt; char const * zAfter = NULL; char const * zBefore = NULL; int limit = -1; int currentRow = 0; char const * zCheckin = NULL; | | < | | | | | 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 | Blob sql = empty_blob; Stmt q = empty_Stmt; char const * zAfter = NULL; char const * zBefore = NULL; int limit = -1; int currentRow = 0; char const * zCheckin = NULL; char sort = -1; if(!g.perm.Read){ json_set_err(FSL_JSON_E_DENIED,"Requires 'o' privileges."); return NULL; } json_warn( FSL_JSON_W_UNKNOWN, "Achtung: the output of the finfo command is up for change."); /* For the "name" argument we have to jump through some hoops to make sure that we don't get the fossil-internally-assigned "name" option. */ zFilename = json_find_option_cstr2("name",NULL,NULL, g.json.dispatchDepth+1); if(!zFilename || !*zFilename){ json_set_err(FSL_JSON_E_MISSING_ARGS, "Missing 'name' parameter."); return NULL; } if(0==db_int(0,"SELECT 1 FROM filename WHERE name=%Q",zFilename)){ json_set_err(FSL_JSON_E_RESOURCE_NOT_FOUND, "File entry not found."); return NULL; } zBefore = json_find_option_cstr("before",NULL,"b"); zAfter = json_find_option_cstr("after",NULL,"a"); limit = json_find_option_int("limit",NULL,"n", -1); zCheckin = json_find_option_cstr("checkin",NULL,"ci"); blob_append_sql(&sql, /*0*/ "SELECT b.uuid," /*1*/ " ci.uuid," /*2*/ " (SELECT uuid FROM blob WHERE rid=mlink.fid)," /* Current file uuid */ /*3*/ " cast(strftime('%%s',event.mtime) AS INTEGER)," /*4*/ " coalesce(event.euser, event.user)," /*5*/ " coalesce(event.ecomment, event.comment)," /*6*/ " (SELECT uuid FROM blob WHERE rid=mlink.pid)," /* Parent file uuid */ /*7*/ " event.bgcolor," /*8*/ " b.size," /*9*/ " (mlink.pid==0) AS isNew," |
︙ | ︙ | |||
87 88 89 90 91 92 93 | ); if( zCheckin && *zCheckin ){ char * zU = NULL; int rc = name_to_uuid2( zCheckin, "ci", &zU ); /*printf("zCheckin=[%s], zU=[%s]", zCheckin, zU);*/ if(rc<=0){ | | < | < | < | < | | | 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 | ); if( zCheckin && *zCheckin ){ char * zU = NULL; int rc = name_to_uuid2( zCheckin, "ci", &zU ); /*printf("zCheckin=[%s], zU=[%s]", zCheckin, zU);*/ if(rc<=0){ json_set_err((rc<0) ? FSL_JSON_E_AMBIGUOUS_UUID : FSL_JSON_E_RESOURCE_NOT_FOUND, "Check-in hash %s.", (rc<0) ? "is ambiguous" : "not found"); blob_reset(&sql); return NULL; } blob_append_sql(&sql, " AND ci.uuid='%q'", zU); free(zU); }else{ if( zAfter && *zAfter ){ blob_append_sql(&sql, " AND event.mtime>=julianday('%q')", zAfter); sort = 1; }else if( zBefore && *zBefore ){ blob_append_sql(&sql, " AND event.mtime<=julianday('%q')", zBefore); } } blob_append_sql(&sql," ORDER BY event.mtime %s /*sort*/", (sort>0?"ASC":"DESC")); /*printf("SQL=\n%s\n",blob_str(&sql));*/ db_prepare(&q, "%s", blob_sql_text(&sql)); blob_reset(&sql); pay = cson_new_object(); cson_object_set(pay, "name", json_new_string(zFilename)); if( limit > 0 ){ cson_object_set(pay, "limit", json_new_int(limit)); } checkins = cson_new_array(); cson_object_set(pay, "checkins", cson_array_value(checkins)); while( db_step(&q)==SQLITE_ROW ){ cson_object * row = cson_new_object(); int const isNew = db_column_int(&q,9); int const isDel = db_column_int(&q,10); cson_array_append( checkins, cson_object_value(row) ); cson_object_set(row, "checkin", json_new_string( db_column_text(&q,1) )); cson_object_set(row, "uuid", json_new_string( db_column_text(&q,2) )); /*cson_object_set(row, "parentArtifact", json_new_string( db_column_text(&q,6) ));*/ cson_object_set(row, "timestamp", json_new_int( db_column_int64(&q,3) ));
cson_object_set(row, "user", json_new_string( db_column_text(&q,4) )); cson_object_set(row, "comment", json_new_string( db_column_text(&q,5) )); /*cson_object_set(row, "bgColor", json_new_string( db_column_text(&q,7) ));*/ cson_object_set(row, "size", json_new_int( db_column_int64(&q,8) )); cson_object_set(row, "state", json_new_string(json_artifact_status_to_string(isNew,isDel))); if( (0 < limit) && (++currentRow >= limit) ){ break; } } db_finalize(&q); return pay ? cson_object_value(pay) : NULL; } #endif /* FOSSIL_ENABLE_JSON */ |
Changes to src/json_login.c.
︙ | ︙ | |||
153 154 155 156 157 158 159 | } payload = cson_value_new_object(); po = cson_value_get_object(payload); cson_object_set(po, "authToken", json_new_string(cookie)); free(cookie); cson_object_set(po, "name", json_new_string(name)); cap = db_text(NULL, "SELECT cap FROM user WHERE login=%Q", name); | | < | < | 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 | } payload = cson_value_new_object(); po = cson_value_get_object(payload); cson_object_set(po, "authToken", json_new_string(cookie)); free(cookie); cson_object_set(po, "name", json_new_string(name)); cap = db_text(NULL, "SELECT cap FROM user WHERE login=%Q", name); cson_object_set(po, "capabilities", cap ? json_new_string(cap) : cson_value_null() ); free(cap); cson_object_set(po, "loginCookieName", json_new_string( login_cookie_name() ) ); /* TODO: add loginExpiryTime to the payload. To do this properly we "should" add an ([unsigned] int *) to login_set_user_cookie() and login_set_anon_cookie(), to which the expiry time is assigned. (Remember that JSON doesn't do unsigned int.) For non-anonymous users we could also simply query the |
︙ | ︙ |
Changes to src/json_tag.c.
︙ | ︙ | |||
115 116 117 118 119 120 121 | cson_object_set(pay, "raw", cson_value_new_bool(fRaw)); { Blob uu = empty_blob; int rc; blob_append(&uu, zName, -1); rc = name_to_uuid(&uu, 9, "*"); if(0!=rc){ | | < | 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 | cson_object_set(pay, "raw", cson_value_new_bool(fRaw)); { Blob uu = empty_blob; int rc; blob_append(&uu, zName, -1); rc = name_to_uuid(&uu, 9, "*"); if(0!=rc){ json_set_err(FSL_JSON_E_UNKNOWN,"Could not convert name back to artifact hash!"); blob_reset(&uu); goto error; } cson_object_set(pay, "appliedTo", json_new_string(blob_buffer(&uu))); blob_reset(&uu); } |
︙ | ︙ |
Changes to src/json_timeline.c.
︙ | ︙ | |||
141 142 143 144 145 146 147 | ** ** If payload is not NULL then on success its "tag" or "branch" ** property is set to the tag/branch name found in the request. ** ** Only one of "tag" or "branch" modes will work at a time, and if ** both are specified, which one takes precedence is unspecified. */ | | | | 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 | ** ** If payload is not NULL then on success its "tag" or "branch" ** property is set to the tag/branch name found in the request. ** ** Only one of "tag" or "branch" modes will work at a time, and if ** both are specified, which one takes precedence is unspecified. */ static char json_timeline_add_tag_branch_clause(Blob *pSql, cson_object * pPayload){ char const * zTag = NULL; char const * zBranch = NULL; char const * zMiOnly = NULL; char const * zUnhide = NULL; int tagid = 0; if(! g.perm.Read ){ return 0; |
︙ | ︙ | |||
167 168 169 170 171 172 173 | zUnhide = json_find_option_cstr("unhide",NULL,NULL); tagid = db_int(0, "SELECT tagid FROM tag WHERE tagname='sym-%q'", zTag); if(tagid<=0){ return -1; } if(pPayload){ | | < | < | 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 | zUnhide = json_find_option_cstr("unhide",NULL,NULL); tagid = db_int(0, "SELECT tagid FROM tag WHERE tagname='sym-%q'", zTag); if(tagid<=0){ return -1; } if(pPayload){ cson_object_set( pPayload, zBranch ? "branch" : "tag", json_new_string(zTag) ); } blob_appendf(pSql, " AND (" " EXISTS(SELECT 1 FROM tagxref" " WHERE tagid=%d AND tagtype>0 AND rid=blob.rid)", tagid); if(!zUnhide){ blob_appendf(pSql, " AND NOT EXISTS(SELECT 1 FROM plink JOIN tagxref ON rid=blob.rid" " WHERE tagid=%d AND tagtype>0 AND rid=blob.rid)", TAG_HIDDEN); } if(zBranch){ /* from "r" flag code in page_timeline().*/ blob_appendf(pSql, " OR EXISTS(SELECT 1 FROM plink JOIN tagxref ON rid=cid" |
︙ | ︙ | |||
220 221 222 223 224 225 226 | ** of the "after" ("a") or "before" ("b") environment parameters. ** This function gives "after" precedence over "before", and only ** applies one of them. ** ** Returns -1 if it adds a "before" clause, 1 if it adds ** an "after" clause, and 0 if adds only an order-by clause. */ | | | 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 | ** of the "after" ("a") or "before" ("b") environment parameters. ** This function gives "after" precedence over "before", and only ** applies one of them. ** ** Returns -1 if it adds a "before" clause, 1 if it adds ** an "after" clause, and 0 if adds only an order-by clause. */ static char json_timeline_add_time_clause(Blob *pSql){ char const * zAfter = NULL; char const * zBefore = NULL; int rc = 0; zAfter = json_find_option_cstr("after",NULL,"a"); zBefore = zAfter ? NULL : json_find_option_cstr("before",NULL,"b"); if(zAfter&&*zAfter){ |
︙ | ︙ | |||
352 353 354 355 356 357 358 | cson_object_set(row, "uuid", json_new_string(db_column_text(&q,3))); if(!isNew && (flags & json_get_changed_files_ELIDE_PARENT)){ cson_object_set(row, "parent", json_new_string(db_column_text(&q,4))); } cson_object_set(row, "size", json_new_int(db_column_int(&q,5))); cson_object_set(row, "state", | | | < | 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 | cson_object_set(row, "uuid", json_new_string(db_column_text(&q,3))); if(!isNew && (flags & json_get_changed_files_ELIDE_PARENT)){ cson_object_set(row, "parent", json_new_string(db_column_text(&q,4))); } cson_object_set(row, "size", json_new_int(db_column_int(&q,5))); cson_object_set(row, "state", json_new_string(json_artifact_status_to_string(isNew,isDel))); zDownload = mprintf("/raw/%s?name=%s", /* reminder: g.zBaseURL is of course not set for CLI mode. */ db_column_text(&q,2), db_column_text(&q,3)); cson_object_set(row, "downloadPath", json_new_string(zDownload)); free(zDownload); } db_finalize(&q); return rowsV; |
︙ | ︙ | |||
506 507 508 509 510 511 512 | int const rid = db_column_int(&q,0); cson_value * rowV = json_artifact_for_ci(rid, verboseFlag); cson_object * row = cson_value_get_object(rowV); if(!row){ if( !warnRowToJsonFailed ){ warnRowToJsonFailed = 1; json_warn( FSL_JSON_W_ROW_TO_JSON_FAILED, | | | 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 | int const rid = db_column_int(&q,0); cson_value * rowV = json_artifact_for_ci(rid, verboseFlag); cson_object * row = cson_value_get_object(rowV); if(!row){ if( !warnRowToJsonFailed ){ warnRowToJsonFailed = 1; json_warn( FSL_JSON_W_ROW_TO_JSON_FAILED, "Could not convert at least one timeline result row to JSON." ); } continue; } cson_array_append(list, rowV); } #undef SET goto ok; |
︙ | ︙ | |||
549 550 551 552 553 554 555 | if(check){ json_set_err(check, "Query initialization failed."); goto error; } #if 0 /* only for testing! */ | | < | < < | 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 | if(check){ json_set_err(check, "Query initialization failed."); goto error; } #if 0 /* only for testing! */ cson_object_set(pay, "timelineSql", cson_value_new_string(blob_buffer(&sql),strlen(blob_buffer(&sql)))); #endif db_multi_exec("%s", blob_buffer(&sql) /*safe-for-%s*/); blob_reset(&sql); db_prepare(&q, "SELECT" /* For events, the name is generally more useful than the uuid, but the uuid is unambiguous and can be used with commands like 'artifact'. */ " substr((SELECT tagname FROM tag AS tn WHERE tn.tagid=json_timeline.tagId AND tagname LIKE 'event-%%'),7) AS name," " uuid as uuid," " mtime AS timestamp," " comment AS comment, " " user AS user," " eventType AS eventType" " FROM json_timeline" " ORDER BY rowid"); |
︙ | ︙ | |||
595 596 597 598 599 600 601 | cson_value * payV = NULL; cson_object * pay = NULL; cson_array * list = NULL; int check = 0; Stmt q = empty_Stmt; Blob sql = empty_blob; if( !g.perm.RdWiki && !g.perm.Read ){ | | < | < | 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 | cson_value * payV = NULL; cson_object * pay = NULL; cson_array * list = NULL; int check = 0; Stmt q = empty_Stmt; Blob sql = empty_blob; if( !g.perm.RdWiki && !g.perm.Read ){ json_set_err( FSL_JSON_E_DENIED, "Wiki timeline requires 'o' or 'j' access."); return NULL; } payV = cson_value_new_object(); pay = cson_value_get_object(payV); check = json_timeline_setup_sql( "w", &sql, pay ); if(check){ json_set_err(check, "Query initialization failed."); goto error; } #if 0 /* only for testing! */ cson_object_set(pay, "timelineSql", cson_value_new_string(blob_buffer(&sql),strlen(blob_buffer(&sql)))); #endif db_multi_exec("%s", blob_buffer(&sql) /*safe-for-%s*/); blob_reset(&sql); db_prepare(&q, "SELECT" " uuid AS uuid," " mtime AS timestamp," #if 0 |
︙ | ︙ | |||
660 661 662 663 664 665 666 | cson_value * tmp = NULL; cson_value * listV = NULL; cson_array * list = NULL; int check = 0; Stmt q = empty_Stmt; Blob sql = empty_blob; if( !g.perm.RdTkt && !g.perm.Read ){ | | < | 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 | cson_value * tmp = NULL; cson_value * listV = NULL; cson_array * list = NULL; int check = 0; Stmt q = empty_Stmt; Blob sql = empty_blob; if( !g.perm.RdTkt && !g.perm.Read ){ json_set_err(FSL_JSON_E_DENIED, "Ticket timeline requires 'o' or 'r' access."); return NULL; } payV = cson_value_new_object(); pay = cson_value_get_object(payV); check = json_timeline_setup_sql( "t", &sql, pay ); if(check){ json_set_err(check, "Query initialization failed."); |
︙ | ︙ | |||
733 734 735 736 737 738 739 | } rowV = cson_sqlite3_row_to_object(q.pStmt); row = cson_value_get_object(rowV); if(!row){ manifest_destroy(pMan); json_warn( FSL_JSON_W_ROW_TO_JSON_FAILED, | | | 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 | } rowV = cson_sqlite3_row_to_object(q.pStmt); row = cson_value_get_object(rowV); if(!row){ manifest_destroy(pMan); json_warn( FSL_JSON_W_ROW_TO_JSON_FAILED, "Could not convert at least one timeline result row to JSON." ); continue; } /* FIXME: certainly there's a more efficient way for use to get the ticket UUIDs? */ cson_object_set(row,"ticketUuid",json_new_string(pMan->zTicketUuid)); manifest_destroy(pMan); |
︙ | ︙ |
Changes to src/json_user.c.
︙ | ︙ | |||
168 169 170 171 172 173 174 | ** Requires either Admin, Setup, or Password access. Non-admin/setup ** users can only change their own information. Non-setup users may ** not modify the 's' permission. Admin users without setup ** permissions may not edit any other user who has the 's' permission. ** */ int json_user_update_from_json( cson_object * pUser ){ | | < | 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 | ** Requires either Admin, Setup, or Password access. Non-admin/setup ** users can only change their own information. Non-setup users may ** not modify the 's' permission. Admin users without setup ** permissions may not edit any other user who has the 's' permission. ** */ int json_user_update_from_json( cson_object * pUser ){ #define CSTR(X) cson_string_cstr(cson_value_get_string( cson_object_get(pUser, X ) )) char const * zName = CSTR("name"); char const * zNameNew = zName; char * zNameFree = NULL; char const * zInfo = CSTR("info"); char const * zCap = CSTR("capabilities"); char const * zPW = CSTR("password"); cson_value const * forceLogout = cson_object_get(pUser, "forceLogout"); |
︙ | ︙ |
Changes to src/json_wiki.c.
︙ | ︙ | |||
161 162 163 164 165 166 167 | } /* ** Searches for the latest version of a wiki page with the given ** name. If found it behaves like json_get_wiki_page_by_rid(theRid, ** contentFormat), else it returns NULL. */ | | < | 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 | } /* ** Searches for the latest version of a wiki page with the given ** name. If found it behaves like json_get_wiki_page_by_rid(theRid, ** contentFormat), else it returns NULL. */ cson_value * json_get_wiki_page_by_name(char const * zPageName, int contentFormat){ int rid; rid = db_int(0, "SELECT x.rid FROM tag t, tagxref x, blob b" " WHERE x.tagid=t.tagid AND t.tagname='wiki-%q' " " AND b.rid=x.rid" " ORDER BY x.mtime DESC LIMIT 1", zPageName |
︙ | ︙ | |||
258 259 260 261 262 263 264 | } zPageName = json_find_option_cstr2("name",NULL,"n",g.json.dispatchDepth+1); zSymName = json_find_option_cstr("uuid",NULL,"u"); if((!zPageName||!*zPageName) && (!zSymName || !*zSymName)){ json_set_err(FSL_JSON_E_MISSING_ARGS, | | | 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 | } zPageName = json_find_option_cstr2("name",NULL,"n",g.json.dispatchDepth+1); zSymName = json_find_option_cstr("uuid",NULL,"u"); if((!zPageName||!*zPageName) && (!zSymName || !*zSymName)){ json_set_err(FSL_JSON_E_MISSING_ARGS, "At least one of the 'name' or 'uuid' arguments must be provided."); return NULL; } /* TODO: see if we have a page named zPageName. If not, try to resolve zPageName as a UUID. */ |
︙ | ︙ | |||
296 297 298 299 300 301 302 | zMime = cson_value_get_cstr(cson_object_get(g.json.reqPayload.o, "mimetype")); }else{ sContent = cson_value_get_string(g.json.reqPayload.v); } if(!sContent) { json_set_err(FSL_JSON_E_MISSING_ARGS, | | | | | | < | 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 | zMime = cson_value_get_cstr(cson_object_get(g.json.reqPayload.o, "mimetype")); }else{ sContent = cson_value_get_string(g.json.reqPayload.v); } if(!sContent) { json_set_err(FSL_JSON_E_MISSING_ARGS, "The 'payload' property must be either a string containing the " "Fossil wiki code to preview or an object with body + mimetype " "properties."); return NULL; } zContent = cson_string_cstr(sContent); blob_append( &contentOrig, zContent, (int)cson_string_length_bytes(sContent) ); zMime = wiki_filter_mimetypes(zMime); if( 0==fossil_strcmp(zMime, "text/x-markdown") ){ markdown_to_html(&contentOrig, 0, &contentHtml); }else if( 0==fossil_strcmp(zMime, "text/plain") ){ blob_append(&contentHtml, "<pre class='textPlain'>", -1); blob_append(&contentHtml, blob_str(&contentOrig), blob_size(&contentOrig)); blob_append(&contentHtml, "</pre>", -1); }else{ wiki_convert( &contentOrig, &contentHtml, 0 ); } blob_reset( &contentOrig ); pay = cson_value_new_string( blob_str(&contentHtml), (unsigned int)blob_size(&contentHtml)); blob_reset( &contentHtml ); return pay; } /* ** Internal impl of /wiki/save and /wiki/create. If createMode is 0 |
︙ | ︙ | |||
346 347 348 349 350 351 352 | char allowCreateIfNotExists){ Blob content = empty_blob; /* wiki page content */ cson_value * nameV; /* wiki page name */ char const * zPageName; /* cstr form of page name */ cson_value * contentV; /* passed-in content */ cson_value * emptyContent = NULL; /* placeholder for empty content. */ cson_value * payV = NULL; /* payload/return value */ | | < | 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 | char allowCreateIfNotExists){ Blob content = empty_blob; /* wiki page content */ cson_value * nameV; /* wiki page name */ char const * zPageName; /* cstr form of page name */ cson_value * contentV; /* passed-in content */ cson_value * emptyContent = NULL; /* placeholder for empty content. */ cson_value * payV = NULL; /* payload/return value */ cson_string const * jstr = NULL; /* temp for cson_value-to-cson_string conversions. */ char const * zMimeType = 0; unsigned int contentLen = 0; int rid; if( (createMode && !g.perm.NewWiki) || (!createMode && !g.perm.WrWiki)){ json_set_err(FSL_JSON_E_DENIED, "Requires '%c' permissions.", |
︙ | ︙ |
Changes to src/login.c.
︙ | ︙ | |||
52 53 54 55 56 57 58 | #include <time.h> /* ** Compute an appropriate Anti-CSRF token into g.zCsrfToken[]. */ static void login_create_csrf_secret(const char *zSeed){ unsigned char zResult[20]; | | | 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 | #include <time.h> /* ** Compute an appropriate Anti-CSRF token into g.zCsrfToken[]. */ static void login_create_csrf_secret(const char *zSeed){ unsigned char zResult[20]; int i; sha1sum_binary(zSeed, zResult); for(i=0; i<sizeof(g.zCsrfToken)-1; i++){ g.zCsrfToken[i] = "abcdefghijklmnopqrstuvwxyz" "ABCDEFGHIJKLMNOPQRSTUVWXYZ" "0123456789-/"[zResult[i]%64]; } |
︙ | ︙ | |||
252 253 254 255 256 257 258 | const char *zLogin = db_column_text(&q,0); if( (uid = login_search_uid(&zLogin, zPasswd) ) != 0 ){ *pzUsername = fossil_strdup(zLogin); break; } } db_finalize(&q); | | | 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 | const char *zLogin = db_column_text(&q,0); if( (uid = login_search_uid(&zLogin, zPasswd) ) != 0 ){ *pzUsername = fossil_strdup(zLogin); break; } } db_finalize(&q); } free(zSha1Pw); return uid; } /* ** Generates a login cookie value for a non-anonymous user. ** |
︙ | ︙ | |||
391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 | " AND login NOT IN ('anonymous','nobody'," " 'developer','reader')", g.userUid); db_protect_pop(); cgi_replace_parameter(cookie, NULL); cgi_replace_parameter("anon", NULL); } } /* ** Look at the HTTP_USER_AGENT parameter and try to determine if the user agent ** is a manually operated browser or a bot. When in doubt, assume a bot. ** Return true if we believe the agent is a real person. */ static int isHuman(const char *zAgent){ if( zAgent==0 ) return 0; /* If no UserAgent, then probably a bot */ | > > > > > > > > > > > > > > > > > > | > | | | | > < < < | | 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 | " AND login NOT IN ('anonymous','nobody'," " 'developer','reader')", g.userUid); db_protect_pop(); cgi_replace_parameter(cookie, NULL); cgi_replace_parameter("anon", NULL); } } /* ** Return true if the prefix of zStr matches zPattern. Return false if ** they are different. ** ** A lowercase character in zPattern will match either upper or lower ** case in zStr. But an uppercase in zPattern will only match an ** uppercase in zStr. */ static int prefix_match(const char *zPattern, const char *zStr){ int i; char c; for(i=0; (c = zPattern[i])!=0; i++){ if( zStr[i]!=c && fossil_tolower(zStr[i])!=c ) return 0; } return 1; } /* ** Look at the HTTP_USER_AGENT parameter and try to determine if the user agent ** is a manually operated browser or a bot. When in doubt, assume a bot. ** Return true if we believe the agent is a real person.
*/ static int isHuman(const char *zAgent){ int i; if( zAgent==0 ) return 0; /* If no UserAgent, then probably a bot */ for(i=0; zAgent[i]; i++){ if( prefix_match("bot", zAgent+i) ) return 0; if( prefix_match("spider", zAgent+i) ) return 0; if( prefix_match("crawl", zAgent+i) ) return 0; /* If a URI appears in the User-Agent, it is probably a bot */ if( strncmp("http", zAgent+i,4)==0 ) return 0; } if( strncmp(zAgent, "Mozilla/", 8)==0 ){ if( atoi(&zAgent[8])<4 ) return 0; /* Many bots advertise as Mozilla/3 */ /* 2016-05-30: A pernicious spider that likes to walk Fossil timelines has ** been detected on the SQLite website. The spider changes its user-agent ** string frequently, but it always seems to include the following text: */ if( sqlite3_strglob("*Safari/537.36Mozilla/5.0*", zAgent)==0 ) return 0; if( sqlite3_strglob("*Firefox/[1-9]*", zAgent)==0 ) return 1; if( sqlite3_strglob("*Chrome/[1-9]*", zAgent)==0 ) return 1; if( sqlite3_strglob("*(compatible;?MSIE?[1789]*", zAgent)==0 ) return 1; if( sqlite3_strglob("*Trident/[1-9]*;?rv:[1-9]*", zAgent)==0 ){ return 1; /* IE11+ */ } |
︙ | ︙ | |||
756 757 758 759 760 761 762 | }else{ zAnonPw = 0; } @ <table class="login_out"> if( P("HTTPS")==0 ){ @ <tr><td class="form_label">Warning:</td> @ <td><span class='securityWarning'> | | | 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 | }else{ zAnonPw = 0; } @ <table class="login_out"> if( P("HTTPS")==0 ){ @ <tr><td class="form_label">Warning:</td> @ <td><span class='securityWarning'> @ Login information, including the password, @ will be sent in the clear over an unencrypted connection. if( !g.sslNotAvailable ){ @ Consider logging in at @ <a href='%s(g.zHttpsURL)'>%h(g.zHttpsURL)</a> instead. } @ </span></td></tr> } |
︙ | ︙ | |||
805 806 807 808 809 810 811 | @ </tr> } @ </table> if( zAnonPw && !noAnon ){ const char *zDecoded = captcha_decode(uSeed); int bAutoCaptcha = db_get_boolean("auto-captcha", 0); char *zCaptcha = captcha_render(zDecoded); | | | 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 | @ </tr> } @ </table> if( zAnonPw && !noAnon ){ const char *zDecoded = captcha_decode(uSeed); int bAutoCaptcha = db_get_boolean("auto-captcha", 0); char *zCaptcha = captcha_render(zDecoded); @ <p><input type="hidden" name="cs" value="%u(uSeed)"> @ Visitors may enter <b>anonymous</b> as the user-ID with @ the 8-character hexadecimal password shown below:</p> @ <div class="captcha"><table class="captcha"><tr><td>\ @ <pre class="captcha"> @ %h(zCaptcha) @ </pre></td></tr></table> |
︙ | ︙ | |||
834 835 836 837 838 839 840 | @ for user <b>%h(g.zLogin)</b></p> } if( db_table_exists("repository","forumpost") ){ @ <hr><p> @ <a href="%R/timeline?ss=v&y=f&vfx&u=%t(g.zLogin)">Forum @ post timeline</a> for user <b>%h(g.zLogin)</b></p> } | < < < | 851 852 853 854 855 856 857 858 859 860 861 862 863 864 | @ for user <b>%h(g.zLogin)</b></p> } if( db_table_exists("repository","forumpost") ){ @ <hr><p> @ <a href="%R/timeline?ss=v&y=f&vfx&u=%t(g.zLogin)">Forum @ post timeline</a> for user <b>%h(g.zLogin)</b></p> } if( g.perm.Password ){ char *zRPW = fossil_random_password(12); @ <hr> @ <p>Change Password for user <b>%h(g.zLogin)</b>:</p> form_begin(0, "%R/login"); @ <table> @ <tr><td class="form_label" id="oldpw">Old Password:</td> |
︙ | ︙ | |||
1013 1014 1015 1016 1017 1018 1019 | uid = login_resetpw_suffix_is_valid(zName); if( uid==0 ){ @ <p><span class="loginError"> @ This password-reset URL is invalid, probably because it has expired. @ Password-reset URLs have a short lifespan. @ </span></p> style_finish_page(); | | | 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 | uid = login_resetpw_suffix_is_valid(zName); if( uid==0 ){ @ <p><span class="loginError"> @ This password-reset URL is invalid, probably because it has expired. @ Password-reset URLs have a short lifespan. @ </span></p> style_finish_page(); sleep(1); /* Introduce a small delay on an invalid suffix as an ** extra defense against search attacks */ return; } fossil_redirect_to_https_if_needed(1); login_set_uid(uid, 0); if( g.perm.Setup || g.perm.Admin || !g.perm.Password || g.zLogin==0 ){ @ <p><span class="loginError"> |
︙ | ︙ | |||
1147 1148 1149 1150 1151 1152 1153 | pStmt = 0; rc = sqlite3_prepare_v2(pOther, zSQL, -1, &pStmt, 0); if( rc==SQLITE_OK && sqlite3_step(pStmt)==SQLITE_ROW ){ db_unprotect(PROTECT_USER); db_multi_exec( "UPDATE user SET cookie=%Q, cexpire=%.17g" " WHERE login=%Q", | | | 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 | pStmt = 0; rc = sqlite3_prepare_v2(pOther, zSQL, -1, &pStmt, 0); if( rc==SQLITE_OK && sqlite3_step(pStmt)==SQLITE_ROW ){ db_unprotect(PROTECT_USER); db_multi_exec( "UPDATE user SET cookie=%Q, cexpire=%.17g" " WHERE login=%Q", zHash, sqlite3_column_double(pStmt, 0), zLogin ); db_protect_pop(); nXfer++; } sqlite3_finalize(pStmt); } |
︙ | ︙ | |||
1564 1565 1566 1567 1568 1569 1570 | case 'a': p->Admin = p->RdTkt = p->WrTkt = p->Zip = p->RdWiki = p->WrWiki = p->NewWiki = p->ApndWiki = p->Hyperlink = p->Clone = p->NewTkt = p->Password = p->RdAddr = p->TktFmt = p->Attach = p->ApndTkt = p->ModWiki = p->ModTkt = p->RdForum = p->WrForum = p->ModForum = | | | 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 | case 'a': p->Admin = p->RdTkt = p->WrTkt = p->Zip = p->RdWiki = p->WrWiki = p->NewWiki = p->ApndWiki = p->Hyperlink = p->Clone = p->NewTkt = p->Password = p->RdAddr = p->TktFmt = p->Attach = p->ApndTkt = p->ModWiki = p->ModTkt = p->RdForum = p->WrForum = p->ModForum = p->WrTForum = p->AdminForum = p->Chat = p->EmailAlert = p->Announce = p->Debug = 1; /* Fall thru into Read/Write */ case 'i': p->Read = p->Write = 1; break; case 'o': p->Read = 1; break; case 'z': p->Zip = 1; break; case 'h': p->Hyperlink = 1; break; |
︙ | ︙ | |||
1811 1812 1813 1814 1815 1816 1817 | */ void login_insert_csrf_secret(void){ @ <input type="hidden" name="csrf" value="%s(g.zCsrfToken)"> } /* ** Check to see if the candidate username zUserID is already used. | | | 1825 1826 1827 1828 1829 1830 1831 1832 1833 1834 1835 1836 1837 1838 1839 | */ void login_insert_csrf_secret(void){ @ <input type="hidden" name="csrf" value="%s(g.zCsrfToken)"> } /* ** Check to see if the candidate username zUserID is already used. ** Return 1 if it is already in use. Return 0 if the name is ** available for a self-registeration. */ static int login_self_choosen_userid_already_exists(const char *zUserID){ int rc = db_exists( "SELECT 1 FROM user WHERE login=%Q " "UNION ALL " "SELECT 1 FROM event WHERE user=%Q OR euser=%Q", |
︙ | ︙ | |||
1833 1834 1835 1836 1837 1838 1839 | ** searches for a user or subscriber that has that email address. If the ** email address is used no-where in the system, return 0. If the email ** address is assigned to a particular user return the UID for that user. ** If the email address is used, but not by a particular user, return -1. */ static int email_address_in_use(const char *zEMail){ int uid; | | | 1847 1848 1849 1850 1851 1852 1853 1854 1855 1856 1857 1858 1859 1860 1861 | ** searches for a user or subscriber that has that email address. If the ** email address is used no-where in the system, return 0. If the email ** address is assigned to a particular user return the UID for that user. ** If the email address is used, but not by a particular user, return -1. */ static int email_address_in_use(const char *zEMail){ int uid; uid = db_int(0, "SELECT uid FROM user" " WHERE info LIKE '%%<%q>%%'", zEMail); if( uid>0 ){ if( db_exists("SELECT 1 FROM user WHERE uid=%d AND (" " cap GLOB '*[as]*' OR" " find_emailaddr(info)<>%Q COLLATE nocase)", uid, zEMail) ){ |
︙ | ︙ | |||
1862 1863 1864 1865 1866 1867 1868 | } return uid; } /* ** COMMAND: test-email-used ** Usage: fossil test-email-used EMAIL ... | | | 1876 1877 1878 1879 1880 1881 1882 1883 1884 1885 1886 1887 1888 1889 1890 | } return uid; } /* ** COMMAND: test-email-used ** Usage: fossil test-email-used EMAIL ... ** ** Given a list of email addresses, show the UID and LOGIN associated ** with each one. */ void test_email_used(void){ int i; db_find_and_open_repository(0, 0); verify_all_options(); |
︙ | ︙ | |||
1887 1888 1889 1890 1891 1892 1893 | }else{ char *zLogin = db_text(0, "SELECT login FROM user WHERE uid=%d", uid); fossil_print("%s: UID %d (%s)\n", zEMail, uid, zLogin); fossil_free(zLogin); } } } | | | 1901 1902 1903 1904 1905 1906 1907 1908 1909 1910 1911 1912 1913 1914 1915 | }else{ char *zLogin = db_text(0, "SELECT login FROM user WHERE uid=%d", uid); fossil_print("%s: UID %d (%s)\n", zEMail, uid, zLogin); fossil_free(zLogin); } } } /* ** Check an email address and confirm that it is valid for self-registration. ** The email address is known already to be well-formed. Return true ** if the email address is on the allowed list. ** ** The default behavior is that any valid email address is accepted. |
︙ | ︙ | |||
1979 1980 1981 1982 1983 1984 1985 | zErr = "Incorrect CAPTCHA"; }else if( strlen(zUserID)<6 ){ iErrLine = 1; zErr = "User ID too short. Must be at least 6 characters."; }else if( sqlite3_strglob("*[^-a-zA-Z0-9_.]*",zUserID)==0 ){ iErrLine = 1; zErr = "User ID may not contain spaces or special characters."; | | | 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 | zErr = "Incorrect CAPTCHA"; }else if( strlen(zUserID)<6 ){ iErrLine = 1; zErr = "User ID too short. Must be at least 6 characters."; }else if( sqlite3_strglob("*[^-a-zA-Z0-9_.]*",zUserID)==0 ){ iErrLine = 1; zErr = "User ID may not contain spaces or special characters."; }else if( sqlite3_strlike("anonymous%", zUserID, 0)==0 || sqlite3_strlike("nobody%", zUserID, 0)==0 || sqlite3_strlike("reader%", zUserID, 0)==0 || sqlite3_strlike("developer%", zUserID, 0)==0 ){ iErrLine = 1; zErr = "This User ID is reserved. Choose something different."; }else if( zDName[0]==0 ){ |
︙ | ︙ |
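The reserved-ID hunk above relies on SQLite's sqlite3_strlike(), which returns 0 when the LIKE pattern matches. As a rough standalone illustration of the same idea without linking SQLite (the helper names here are invented for this sketch, not Fossil APIs):

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Case-insensitive prefix test, a stand-in for the
** sqlite3_strlike("prefix%", zUserID, 0)==0 calls in the hunk above. */
static int has_prefix_nocase(const char *zPrefix, const char *z){
  while( *zPrefix ){
    if( tolower((unsigned char)*zPrefix) != tolower((unsigned char)*z) ){
      return 0;
    }
    zPrefix++;
    z++;
  }
  return 1;
}

/* Return 1 if zUserID collides with one of the built-in account names. */
static int is_reserved_user_id(const char *zUserID){
  static const char *azReserved[] = {
    "anonymous", "nobody", "reader", "developer"
  };
  size_t i;
  for(i=0; i<sizeof(azReserved)/sizeof(azReserved[0]); i++){
    if( has_prefix_nocase(azReserved[i], zUserID) ) return 1;
  }
  return 0;
}
```

The prefix semantics mirror the trailing "%" in the LIKE patterns: any ID that merely starts with a reserved name is rejected.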
Changes to src/lookslike.c.
︙ | ︙ | |||
268 269 270 271 272 273 274 | const WCHAR_T *z = (WCHAR_T *)blob_buffer(pContent); unsigned int n = blob_size(pContent); int j, c, flags = LOOK_NONE; /* Assume UTF-16 text, prove otherwise */ if( n%sizeof(WCHAR_T) ){ flags |= LOOK_ODD; /* Odd number of bytes -> binary (UTF-8?) */ } | | | 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 | const WCHAR_T *z = (WCHAR_T *)blob_buffer(pContent); unsigned int n = blob_size(pContent); int j, c, flags = LOOK_NONE; /* Assume UTF-16 text, prove otherwise */ if( n%sizeof(WCHAR_T) ){ flags |= LOOK_ODD; /* Odd number of bytes -> binary (UTF-8?) */ } if( n<sizeof(WCHAR_T) ) return flags; /* Zero or One byte -> binary (UTF-8?) */ c = *z; if( bReverse ){ c = UTF16_SWAP(c); } if( c==0 ){ flags |= LOOK_NUL; /* NUL character in a file -> binary */ }else if( c=='\r' ){ |
︙ | ︙ |
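The lookslike.c change tightens the early exits: a buffer with an odd byte count, or with fewer bytes than one UTF-16 code unit, cannot be UTF-16 text. A simplified standalone sketch of those first checks (little-endian code units only; the bReverse byte-swap path and the later per-character loop are omitted, and the flag values are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define LOOK_NONE 0
#define LOOK_ODD  0x01  /* Byte count is not a multiple of the unit size */
#define LOOK_NUL  0x02  /* First code unit is NUL -> treat as binary */

/* Minimal sketch of the early checks in looks_like_utf16(). */
static int utf16_quick_flags(const uint8_t *a, unsigned int n){
  int flags = LOOK_NONE;
  uint16_t c;
  if( n % sizeof(uint16_t) ) flags |= LOOK_ODD;
  if( n < sizeof(uint16_t) ) return flags;  /* zero or one byte */
  c = (uint16_t)(a[0] | (a[1]<<8));         /* little-endian code unit */
  if( c==0 ) flags |= LOOK_NUL;
  return flags;
}
```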
Changes to src/main.c.
︙ | ︙ | |||
224 225 226 227 228 229 230 | Blob httpHeader; /* Complete text of the HTTP request header */ UrlData url; /* Information about current URL */ const char *zLogin; /* Login name. NULL or "" if not logged in. */ const char *zCkoutAlias; /* doc/ uses this branch as an alias for "ckout" */ const char *zMainMenuFile; /* --mainmenu FILE from server/ui/cgi */ const char *zSSLIdentity; /* Value of --ssl-identity option, filename of ** SSL client identity */ | < < | 224 225 226 227 228 229 230 231 232 233 234 235 236 237 | Blob httpHeader; /* Complete text of the HTTP request header */ UrlData url; /* Information about current URL */ const char *zLogin; /* Login name. NULL or "" if not logged in. */ const char *zCkoutAlias; /* doc/ uses this branch as an alias for "ckout" */ const char *zMainMenuFile; /* --mainmenu FILE from server/ui/cgi */ const char *zSSLIdentity; /* Value of --ssl-identity option, filename of ** SSL client identity */ #if USE_SEE const char *zPidKey; /* Saved value of the --usepidkey option. Only * applicable when using SEE on Windows or Linux. */ #endif int useLocalauth; /* No login required if from 127.0.0.1 */ int noPswd; /* Logged in without password (on 127.0.0.1) */ int userUid; /* Integer user id */ |
︙ | ︙ | |||
852 853 854 855 856 857 858 | zNewArgv[0] = g.argv[0]; zNewArgv[1] = "ui"; zNewArgv[2] = g.argv[1]; zNewArgv[3] = 0; g.argc = 3; g.argv = zNewArgv; #endif | | | 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 | zNewArgv[0] = g.argv[0]; zNewArgv[1] = "ui"; zNewArgv[2] = g.argv[1]; zNewArgv[3] = 0; g.argc = 3; g.argv = zNewArgv; #endif } zCmdName = g.argv[1]; } #ifndef _WIN32 /* There is a bug in stunnel4 in which it sometimes starts up client ** processes without first opening file descriptor 2 (standard error). ** If this happens, and a subsequent open() of a database returns file ** descriptor 2, and then an assert() fires and writes on fd 2, that |
︙ | ︙ | |||
1414 1415 1416 1417 1418 1419 1420 | /* Remove trailing ":443" from the HOST, if any */ if( i>4 && z[i-1]=='3' && z[i-2]=='4' && z[i-3]=='4' && z[i-4]==':' ){ i -= 4; } }else{ /* Remove trailing ":80" from the HOST */ if( i>3 && z[i-1]=='0' && z[i-2]=='8' && z[i-3]==':' ) i -= 3; | | | 1412 1413 1414 1415 1416 1417 1418 1419 1420 1421 1422 1423 1424 1425 1426 | /* Remove trailing ":443" from the HOST, if any */ if( i>4 && z[i-1]=='3' && z[i-2]=='4' && z[i-3]=='4' && z[i-4]==':' ){ i -= 4; } }else{ /* Remove trailing ":80" from the HOST */ if( i>3 && z[i-1]=='0' && z[i-2]=='8' && z[i-3]==':' ) i -= 3; } if( i && z[i-1]=='.' ) i--; z[i] = 0; zCur = PD("SCRIPT_NAME","/"); i = strlen(zCur); while( i>0 && zCur[i-1]=='/' ) i--; if( fossil_stricmp(zMode,"on")==0 ){ g.zBaseURL = mprintf("https://%s%.*s", z, i, zCur); |
︙ | ︙ | |||
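The main.c hunk above trims a redundant default port (":443" for HTTPS, ":80" for HTTP) and a trailing dot from the Host value before the base URL is assembled. A compact standalone version of that cleanup, assuming a writable NUL-terminated string (the function name is invented for this sketch):

```c
#include <assert.h>
#include <string.h>

/* Remove a redundant default port and any trailing "." from a host
** name, editing the string in place. */
static void strip_default_port(char *z, int isHttps){
  size_t i = strlen(z);
  if( isHttps ){
    if( i>4 && strcmp(z+i-4, ":443")==0 ) i -= 4;
  }else{
    if( i>3 && strcmp(z+i-3, ":80")==0 ) i -= 3;
  }
  if( i && z[i-1]=='.' ) i--;   /* drop a trailing dot, as in the hunk */
  z[i] = 0;
}
```

Non-default ports (for example ":8080") are left untouched, matching the character-by-character checks in the original.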
1571 1572 1573 1574 1575 1576 1577 | size_t size; char **strings; size_t i; Blob out; size = backtrace(array, sizeof(array)/sizeof(array[0])); strings = backtrace_symbols(array, size); blob_init(&out, 0, 0); | | < < < < < < < < | < | < | 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 | size_t size; char **strings; size_t i; Blob out; size = backtrace(array, sizeof(array)/sizeof(array[0])); strings = backtrace_symbols(array, size); blob_init(&out, 0, 0); blob_appendf(&out, "Segfault during %s", g.zPhase); for(i=0; i<size; i++){ blob_appendf(&out, "\n(%d) %s", i, strings[i]); } fossil_panic("%s", blob_str(&out)); #else fossil_panic("Segfault during %s", g.zPhase); #endif exit(1); } /* ** Called if a server gets a SIGPIPE. This often happens when a client ** webbrowser opens a connection but never sends the HTTP request |
︙ | ︙ | |||
1624 1625 1626 1627 1628 1629 1630 | if( db_get_int("redirect-to-https",0)<iLevel ) return 0; if( P("HTTPS")!=0 ) return 0; return 1; } /* ** Redirect to the equivalent HTTPS request if the current connection is | | | 1612 1613 1614 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 1626 | if( db_get_int("redirect-to-https",0)<iLevel ) return 0; if( P("HTTPS")!=0 ) return 0; return 1; } /* ** Redirect to the equivalent HTTPS request if the current connection is ** insecure and if the redirect-to-https flag is greater than or equal to ** iLevel. iLevel is 1 for /login pages and 2 for every other page. */ int fossil_redirect_to_https_if_needed(int iLevel){ if( fossil_wants_https(iLevel) ){ const char *zQS = P("QUERY_STRING"); char *zURL; if( zQS==0 || zQS[0]==0 ){
︙ | ︙ | |||
1979 1980 1981 1982 1983 1984 1985 | zPathInfo += 7; g.nExtraURL += 7; cgi_replace_parameter("PATH_INFO", zPathInfo); cgi_replace_parameter("SCRIPT_NAME", zNewScript); etag_cancel(); } | | | 1967 1968 1969 1970 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 | zPathInfo += 7; g.nExtraURL += 7; cgi_replace_parameter("PATH_INFO", zPathInfo); cgi_replace_parameter("SCRIPT_NAME", zNewScript); etag_cancel(); } /* If the content type is application/x-fossil or ** application/x-fossil-debug, then a sync/push/pull/clone is ** desired, so default the PATH_INFO to /xfer */ if( g.zContentType && strncmp(g.zContentType, "application/x-fossil", 20)==0 ){ /* Special case: If the content mimetype shows that it is "fossil sync" ** payload, then pretend that the PATH_INFO is /xfer so that we always |
︙ | ︙ | |||
2350 2351 2352 2353 2354 2355 2356 2357 2358 2359 2360 2361 2362 2363 2364 2365 2366 2367 2368 | ** significant for "errorlog:", which should be set before "repository:" ** so that any warnings from the database when opening the repository ** go to that log file. ** ** See also: [[http]], [[server]], [[winsrv]] */ void cmd_cgi(void){ const char *zNotFound = 0; char **azRedirect = 0; /* List of repositories to redirect to */ int nRedirect = 0; /* Number of entries in azRedirect */ Glob *pFileGlob = 0; /* Pattern for files */ int allowRepoList = 0; /* Allow lists of repository files */ Blob config, line, key, value, value2; /* Initialize the CGI environment. */ g.httpOut = stdout; g.httpIn = stdin; fossil_binary_mode(g.httpOut); fossil_binary_mode(g.httpIn); g.cgiOutput = 1; | > < | | | | 2338 2339 2340 2341 2342 2343 2344 2345 2346 2347 2348 2349 2350 2351 2352 2353 2354 2355 2356 2357 2358 2359 2360 2361 2362 2363 2364 2365 2366 2367 2368 2369 2370 2371 2372 2373 2374 2375 | ** significant for "errorlog:", which should be set before "repository:" ** so that any warnings from the database when opening the repository ** go to that log file. ** ** See also: [[http]], [[server]], [[winsrv]] */ void cmd_cgi(void){ const char *zFile; const char *zNotFound = 0; char **azRedirect = 0; /* List of repositories to redirect to */ int nRedirect = 0; /* Number of entries in azRedirect */ Glob *pFileGlob = 0; /* Pattern for files */ int allowRepoList = 0; /* Allow lists of repository files */ Blob config, line, key, value, value2; /* Initialize the CGI environment. */ g.httpOut = stdout; g.httpIn = stdin; fossil_binary_mode(g.httpOut); fossil_binary_mode(g.httpIn); g.cgiOutput = 1; fossil_set_timeout(FOSSIL_DEFAULT_TIMEOUT); /* Find the name of the CGI control file */ if( g.argc==3 && fossil_strcmp(g.argv[1],"cgi")==0 ){ zFile = g.argv[2]; }else if( g.argc>=2 ){ zFile = g.argv[1]; }else{ cgi_panic("No CGI control file specified"); } /* Read and parse the CGI control file. 
*/ blob_read_from_file(&config, zFile, ExtFILE); while( blob_line(&config, &line) ){ if( !blob_token(&line, &key) ) continue; if( blob_buffer(&key)[0]=='#' ) continue; if( blob_eq(&key, "repository:") && blob_tail(&line, &value) ){ /* repository: FILENAME ** ** The name of the Fossil repository to be served via CGI. Most |
︙ | ︙ | |||
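The parser in the hunk above reads a Fossil CGI control file one directive per line, skipping '#' comment lines. A minimal hypothetical control file (every path here is illustrative, not taken from this repository), with errorlog: placed before repository: as the hunk's commentary advises:

```
#!/usr/bin/fossil
# Lines of the form "directive: value"; '#' starts a comment.
errorlog: /var/log/fossil/errors.log
repository: /home/www/museum/project.fossil
skin: eagle
jsmode: bundled
```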
2535 2536 2537 2538 2539 2540 2541 | ** the elements of the built-in skin. If LABEL does not match, ** this directive is a silent no-op. It may alternately be ** an absolute path to a directory which holds skin definition ** files (header.txt, footer.txt, etc.). If LABEL is empty, ** the skin stored in the CONFIG db table is used. */ blob_token(&line, &value); | | | 2523 2524 2525 2526 2527 2528 2529 2530 2531 2532 2533 2534 2535 2536 2537 | ** the elements of the built-in skin. If LABEL does not match, ** this directive is a silent no-op. It may alternately be ** an absolute path to a directory which holds skin definition ** files (header.txt, footer.txt, etc.). If LABEL is empty, ** the skin stored in the CONFIG db table is used. */ blob_token(&line, &value); fossil_free(skin_use_alternative(blob_str(&value), 1)); blob_reset(&value); continue; } if( blob_eq(&key, "jsmode:") && blob_token(&line, &value) ){ /* jsmode: MODE ** ** Change how JavaScript resources are delivered with each HTML |
︙ | ︙ | |||
2795 2796 2797 2798 2799 2800 2801 | ** --nocompress Do not compress HTTP replies ** --nodelay Omit backoffice processing if it would delay ** process exit ** --nojail Drop root privilege but do not enter the chroot jail ** --nossl Do not do http: to https: redirects, regardless of ** the redirect-to-https setting. ** --notfound URL Use URL as the "HTTP 404, object not found" page | | | 2783 2784 2785 2786 2787 2788 2789 2790 2791 2792 2793 2794 2795 2796 2797 | ** --nocompress Do not compress HTTP replies ** --nodelay Omit backoffice processing if it would delay ** process exit ** --nojail Drop root privilege but do not enter the chroot jail ** --nossl Do not do http: to https: redirects, regardless of ** the redirect-to-https setting. ** --notfound URL Use URL as the "HTTP 404, object not found" page ** --out FILE Write the HTTP reply to FILE instead of to ** standard output ** --pkey FILE Read the private key used for TLS from FILE ** --repolist If REPOSITORY is directory, URL "/" lists all repos ** --scgi Interpret input as SCGI rather than HTTP ** --skin LABEL Use override skin LABEL. Use an empty string ("") ** to force use of the current local skin config. ** --th-trace Trace TH1 execution (for debugging purposes) |
︙ | ︙ | |||
2846 2847 2848 2849 2850 2851 2852 | noJail = find_option("nojail",0,0)!=0; allowRepoList = find_option("repolist",0,0)!=0; g.useLocalauth = find_option("localauth", 0, 0)!=0; g.sslNotAvailable = find_option("nossl", 0, 0)!=0; g.fNoHttpCompress = find_option("nocompress",0,0)!=0; g.zExtRoot = find_option("extroot",0,1); g.zCkoutAlias = find_option("ckout-alias",0,1); | < | 2834 2835 2836 2837 2838 2839 2840 2841 2842 2843 2844 2845 2846 2847 | noJail = find_option("nojail",0,0)!=0; allowRepoList = find_option("repolist",0,0)!=0; g.useLocalauth = find_option("localauth", 0, 0)!=0; g.sslNotAvailable = find_option("nossl", 0, 0)!=0; g.fNoHttpCompress = find_option("nocompress",0,0)!=0; g.zExtRoot = find_option("extroot",0,1); g.zCkoutAlias = find_option("ckout-alias",0,1); zInFile = find_option("in",0,1); if( zInFile ){ backoffice_disable(); g.httpIn = fossil_fopen(zInFile, "rb"); if( g.httpIn==0 ) fossil_fatal("cannot open \"%s\" for reading", zInFile); }else{ g.httpIn = stdin; |
︙ | ︙ | |||
2870 2871 2872 2873 2874 2875 2876 | g.httpOut = stdout; #if defined(_WIN32) _setmode(_fileno(stdout), _O_BINARY); #endif } zIpAddr = find_option("ipaddr",0,1); useSCGI = find_option("scgi", 0, 0)!=0; | < | 2857 2858 2859 2860 2861 2862 2863 2864 2865 2866 2867 2868 2869 2870 | g.httpOut = stdout; #if defined(_WIN32) _setmode(_fileno(stdout), _O_BINARY); #endif } zIpAddr = find_option("ipaddr",0,1); useSCGI = find_option("scgi", 0, 0)!=0; zAltBase = find_option("baseurl", 0, 1); if( find_option("nodelay",0,0)!=0 ) backoffice_no_delay(); if( zAltBase ) set_base_url(zAltBase); if( find_option("https",0,0)!=0 ){ zIpAddr = fossil_getenv("REMOTE_HOST"); /* From stunnel */ cgi_replace_parameter("HTTPS","on"); } |
︙ | ︙ | |||
2997 2998 2999 3000 3001 3002 3003 | login_set_capabilities(zUserCap, 0); g.httpIn = stdin; g.httpOut = stdout; fossil_binary_mode(g.httpOut); fossil_binary_mode(g.httpIn); g.zExtRoot = find_option("extroot",0,1); find_server_repository(2, 0); | < | 2983 2984 2985 2986 2987 2988 2989 2990 2991 2992 2993 2994 2995 2996 | login_set_capabilities(zUserCap, 0); g.httpIn = stdin; g.httpOut = stdout; fossil_binary_mode(g.httpOut); fossil_binary_mode(g.httpIn); g.zExtRoot = find_option("extroot",0,1); find_server_repository(2, 0); g.cgiOutput = 1; g.fNoHttpCompress = 1; g.fullHttpReply = 1; g.sslNotAvailable = 1; /* Avoid attempts to redirect */ zIpAddr = bTest ? 0 : cgi_ssh_remote_addr(0); if( zIpAddr && zIpAddr[0] ){ g.fSshClient |= CGI_SSH_CLIENT; |
︙ | ︙ | |||
3023 3024 3025 3026 3027 3028 3029 | */ #ifndef _WIN32 static int nAlarmSeconds = 0; static void sigalrm_handler(int x){ sqlite3_uint64 tmUser = 0, tmKernel = 0; fossil_cpu_times(&tmUser, &tmKernel); if( fossil_strcmp(g.zPhase, "web-page reply")==0 | | | | 3008 3009 3010 3011 3012 3013 3014 3015 3016 3017 3018 3019 3020 3021 3022 3023 3024 3025 | */ #ifndef _WIN32 static int nAlarmSeconds = 0; static void sigalrm_handler(int x){ sqlite3_uint64 tmUser = 0, tmKernel = 0; fossil_cpu_times(&tmUser, &tmKernel); if( fossil_strcmp(g.zPhase, "web-page reply")==0 && tmUser+tmKernel<1000000 ){ /* Do not log time-outs during web-page reply unless more than ** 1 second of CPU time has been consumed */ return; } fossil_panic("Timeout after %d seconds during %s" " - user %,llu µs, sys %,llu µs", nAlarmSeconds, g.zPhase, tmUser, tmKernel); } #endif |
︙ | ︙ | |||
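The sigalrm_handler() change above suppresses timeout logging during a web-page reply unless real CPU time has accumulated. A standalone sketch of that predicate (the phase string and one-second threshold mirror the hunk; the function name and everything else are simplified for illustration):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Return 1 if a SIGALRM timeout should be logged.  A timeout that
** fires while a web-page reply is being written is ignored unless
** the process has consumed at least 1 second of combined user+kernel
** CPU time (both values in microseconds). */
static int should_log_timeout(const char *zPhase,
                              uint64_t tmUserUs, uint64_t tmKernelUs){
  if( strcmp(zPhase, "web-page reply")==0
   && tmUserUs + tmKernelUs < 1000000 ){
    return 0;
  }
  return 1;
}
```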
3078 3079 3080 3081 3082 3083 3084 | ** This only works for the "fossil ui" command, not the "fossil server" ** command. ** ** If REPOSITORY begins with a "HOST:" or "USER@HOST:" prefix, then ** the command is run on the remote host specified and the results are ** tunneled back to the local machine via SSH. This feature only works for ** the "fossil ui" command, not the "fossil server" command. The name of the | | | > | < < | 3063 3064 3065 3066 3067 3068 3069 3070 3071 3072 3073 3074 3075 3076 3077 3078 3079 3080 | ** This only works for the "fossil ui" command, not the "fossil server" ** command. ** ** If REPOSITORY begins with a "HOST:" or "USER@HOST:" prefix, then ** the command is run on the remote host specified and the results are ** tunneled back to the local machine via SSH. This feature only works for ** the "fossil ui" command, not the "fossil server" command. The name of the ** fossil executable on the remote host is specified by the --fossilcmd option, ** or if there is no --fossilcmd, it first tries "$HOME/bin/fossil" and if ** not found there it searches for any executable named "fossil" on the ** default $PATH set by SSH on the remote. ** ** REPOSITORY may also be a directory (aka folder) that contains one or ** more repositories with names ending in ".fossil". In this case, a ** prefix of the URL pathname is used to search the directory for an ** appropriate repository. To thwart mischief, the pathname in the URL must ** contain only alphanumerics, "_", "/", "-", and ".", and no "-" may ** occur after "/", and every "." must be surrounded on both sides by |
︙ | ︙ | |||
3155 3156 3157 3158 3159 3160 3161 | ** --nojail Drop root privileges but do not enter the chroot jail ** --nossl Do not force redirects to SSL even if the repository ** setting "redirect-to-https" requests it. This is set ** by default for the "ui" command. ** --notfound URL Redirect to URL if a page is not found. ** -p|--page PAGE Start "ui" on PAGE. ex: --page "timeline?y=ci" ** --pkey FILE Read the private key used for TLS from FILE | | | 3139 3140 3141 3142 3143 3144 3145 3146 3147 3148 3149 3150 3151 3152 3153 | ** --nojail Drop root privileges but do not enter the chroot jail ** --nossl Do not force redirects to SSL even if the repository ** setting "redirect-to-https" requests it. This is set ** by default for the "ui" command. ** --notfound URL Redirect to URL if a page is not found. ** -p|--page PAGE Start "ui" on PAGE. ex: --page "timeline?y=ci" ** --pkey FILE Read the private key used for TLS from FILE ** -P|--port TCPPORT Listen for requests on port TCPPORT ** --repolist If REPOSITORY is dir, URL "/" lists repos ** --scgi Accept SCGI rather than HTTP ** --skin LABEL Use override skin LABEL ** --th-trace Trace TH1 execution (for debugging purposes) ** --usepidkey Use saved encryption key from parent process. This is ** only necessary when using SEE on Windows or Linux. **
︙ | ︙ | |||
3189 3190 3191 3192 3193 3194 3195 | int fCreate = 0; /* The --create flag */ int fNoBrowser = 0; /* Do not auto-launch web-browser */ const char *zInitPage = 0; /* Start on this page. --page option */ int findServerArg = 2; /* argv index for find_server_repository() */ char *zRemote = 0; /* Remote host on which to run "fossil ui" */ const char *zJsMode; /* The --jsmode parameter */ const char *zFossilCmd =0; /* Name of "fossil" binary on remote system */ | | | 3173 3174 3175 3176 3177 3178 3179 3180 3181 3182 3183 3184 3185 3186 3187 | int fCreate = 0; /* The --create flag */ int fNoBrowser = 0; /* Do not auto-launch web-browser */ const char *zInitPage = 0; /* Start on this page. --page option */ int findServerArg = 2; /* argv index for find_server_repository() */ char *zRemote = 0; /* Remote host on which to run "fossil ui" */ const char *zJsMode; /* The --jsmode parameter */ const char *zFossilCmd =0; /* Name of "fossil" binary on remote system */ #if USE_SEE db_setup_for_saved_encryption_key(); #endif #if defined(_WIN32) const char *zStopperFile; /* Name of file used to terminate server */ |
︙ | ︙ | |||
3234 3235 3236 3237 3238 3239 3240 | zFossilCmd = find_option("fossilcmd", 0, 1); } zNotFound = find_option("notfound", 0, 1); allowRepoList = find_option("repolist",0,0)!=0; if( find_option("nocompress",0,0)!=0 ) g.fNoHttpCompress = 1; zAltBase = find_option("baseurl", 0, 1); fCreate = find_option("create",0,0)!=0; | < | < < < | 3218 3219 3220 3221 3222 3223 3224 3225 3226 3227 3228 3229 3230 3231 3232 | zFossilCmd = find_option("fossilcmd", 0, 1); } zNotFound = find_option("notfound", 0, 1); allowRepoList = find_option("repolist",0,0)!=0; if( find_option("nocompress",0,0)!=0 ) g.fNoHttpCompress = 1; zAltBase = find_option("baseurl", 0, 1); fCreate = find_option("create",0,0)!=0; if( find_option("scgi", 0, 0)!=0 ) flags |= HTTP_SERVER_SCGI; if( zAltBase ){ set_base_url(zAltBase); } g.sslNotAvailable = find_option("nossl", 0, 0)!=0 || isUiCmd; fNoBrowser = find_option("nobrowser", "B", 0)!=0; decode_ssl_options(); if( find_option("https",0,0)!=0 || g.httpUseSSL ){ |
︙ | ︙ | |||
3343 3344 3345 3346 3347 3348 3349 | }else{ iPort = db_get_int("http-port", 8080); mxPort = iPort+100; } if( isUiCmd && !fNoBrowser ){ char *zBrowserArg; const char *zProtocol = g.httpUseSSL ? "https" : "http"; | | < < < < < | > | | | | | < < < | | | | | | | | | | | | | < < < < | | | | | | | | | < < < < < < | | | | | | | < | < | 3323 3324 3325 3326 3327 3328 3329 3330 3331 3332 3333 3334 3335 3336 3337 3338 3339 3340 3341 3342 3343 3344 3345 3346 3347 3348 3349 3350 3351 3352 3353 3354 3355 3356 3357 3358 3359 3360 3361 3362 3363 3364 3365 3366 3367 3368 3369 3370 3371 3372 3373 3374 3375 3376 3377 3378 3379 3380 3381 3382 3383 3384 3385 3386 3387 3388 3389 3390 3391 3392 3393 | }else{ iPort = db_get_int("http-port", 8080); mxPort = iPort+100; } if( isUiCmd && !fNoBrowser ){ char *zBrowserArg; const char *zProtocol = g.httpUseSSL ? "https" : "http"; if( zRemote ) db_open_config(0,0); zBrowser = fossil_web_browser(); if( zIpAddr==0 ){ zBrowserArg = mprintf("%s://localhost:%%d/%s", zProtocol, zInitPage); }else if( strchr(zIpAddr,':') ){ zBrowserArg = mprintf("%s://[%s]:%%d/%s", zProtocol, zIpAddr, zInitPage); }else{ zBrowserArg = mprintf("%s://%s:%%d/%s", zProtocol, zIpAddr, zInitPage); } zBrowserCmd = mprintf("%s %!$ &", zBrowser, zBrowserArg); fossil_free(zBrowserArg); } if( zRemote ){ /* If a USER@HOST:REPO argument is supplied, then use SSH to run ** "fossil ui --nobrowser" on the remote system and to set up a ** tunnel from the local machine to the remote. 
*/ FILE *sshIn; Blob ssh; char zLine[1000]; blob_init(&ssh, 0, 0); transport_ssh_command(&ssh); db_close_config(); blob_appendf(&ssh, " -t -L 127.0.0.1:%d:127.0.0.1:%d %!$", iPort, iPort, zRemote ); if( zFossilCmd==0 ){ blob_appendf(&ssh, " %$ fossil", "PATH=$HOME/bin:$PATH"); }else{ blob_appendf(&ssh, " %$", zFossilCmd); } blob_appendf(&ssh, " ui --nobrowser --localauth --port %d", iPort); if( zNotFound ) blob_appendf(&ssh, " --notfound %!$", zNotFound); if( zFileGlob ) blob_appendf(&ssh, " --files-urlenc %T", zFileGlob); if( g.zCkoutAlias ) blob_appendf(&ssh, " --ckout-alias %!$",g.zCkoutAlias); if( g.zExtRoot ) blob_appendf(&ssh, " --extroot %$", g.zExtRoot); if( skin_in_use() ) blob_appendf(&ssh, " --skin %s", skin_in_use()); if( zJsMode ) blob_appendf(&ssh, " --jsmode %s", zJsMode); if( fCreate ) blob_appendf(&ssh, " --create"); blob_appendf(&ssh, " %$", g.argv[2]); fossil_print("%s\n", blob_str(&ssh)); sshIn = popen(blob_str(&ssh), "r"); if( sshIn==0 ){ fossil_fatal("unable to %s", blob_str(&ssh)); } while( fgets(zLine, sizeof(zLine), sshIn) ){ fputs(zLine, stdout); fflush(stdout); if( zBrowserCmd && sqlite3_strglob("*Listening for HTTP*",zLine)==0 ){ char *zCmd = mprintf(zBrowserCmd/*works-like:"%d"*/,iPort); fossil_system(zCmd); fossil_free(zCmd); fossil_free(zBrowserCmd); zBrowserCmd = 0; } } pclose(sshIn); fossil_free(zBrowserCmd); return; } if( g.repositoryOpen ) flags |= HTTP_SERVER_HAD_REPOSITORY; if( g.localOpen ) flags |= HTTP_SERVER_HAD_CHECKOUT; db_close(1); #if !defined(_WIN32) |
︙ | ︙ |
Changes to src/main.mk.
︙ | ︙ | |||
189 190 191 192 193 194 195 | $(SRCDIR)/../skins/default/details.txt \ $(SRCDIR)/../skins/default/footer.txt \ $(SRCDIR)/../skins/default/header.txt \ $(SRCDIR)/../skins/eagle/css.txt \ $(SRCDIR)/../skins/eagle/details.txt \ $(SRCDIR)/../skins/eagle/footer.txt \ $(SRCDIR)/../skins/eagle/header.txt \ | < < < < | 189 190 191 192 193 194 195 196 197 198 199 200 201 202 | $(SRCDIR)/../skins/default/details.txt \ $(SRCDIR)/../skins/default/footer.txt \ $(SRCDIR)/../skins/default/header.txt \ $(SRCDIR)/../skins/eagle/css.txt \ $(SRCDIR)/../skins/eagle/details.txt \ $(SRCDIR)/../skins/eagle/footer.txt \ $(SRCDIR)/../skins/eagle/header.txt \ $(SRCDIR)/../skins/khaki/css.txt \ $(SRCDIR)/../skins/khaki/details.txt \ $(SRCDIR)/../skins/khaki/footer.txt \ $(SRCDIR)/../skins/khaki/header.txt \ $(SRCDIR)/../skins/original/css.txt \ $(SRCDIR)/../skins/original/details.txt \ $(SRCDIR)/../skins/original/footer.txt \ |
︙ | ︙ |
Changes to src/manifest.c.
︙ | ︙ | |||
1224 1225 1226 1227 1228 1229 1230 | ** control artifact. Make a copy, and run it through the official ** artifact parser. This is the slow path, but it is rarely taken. */ blob_init(©, 0, 0); blob_init(&errmsg, 0, 0); blob_append(©, zIn, nIn); pManifest = manifest_parse(©, 0, &errmsg); | | | 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 | ** control artifact. Make a copy, and run it through the official ** artifact parser. This is the slow path, but it is rarely taken. */ blob_init(©, 0, 0); blob_init(&errmsg, 0, 0); blob_append(©, zIn, nIn); pManifest = manifest_parse(©, 0, &errmsg); iRes = pManifest!=0; manifest_destroy(pManifest); blob_reset(&errmsg); return iRes; } /* ** COMMAND: test-parse-manifest |
︙ | ︙ | |||
1336 1337 1338 1339 1340 1341 1342 | id, blob_str(&err)); nErr++; }else if( !isWF && p!=0 ){ fossil_print("%d ERROR: manifest_is_well_formed() reported false " "but manifest_parse() found nothing wrong.\n", id); nErr++; } | | | 1336 1337 1338 1339 1340 1341 1342 1343 1344 1345 1346 1347 1348 1349 1350 | id, blob_str(&err)); nErr++; }else if( !isWF && p!=0 ){ fossil_print("%d ERROR: manifest_is_well_formed() reported false " "but manifest_parse() found nothing wrong.\n", id); nErr++; } }else{ p = manifest_get(id, CFTYPE_ANY, &err); if( p==0 ){ fossil_print("%d ERROR: %s\n", id, blob_str(&err)); nErr++; } } blob_reset(&err); |
︙ | ︙ | |||
2111 2112 2113 2114 2115 2116 2117 | ** Activate EVENT triggers if they do not already exist. */ void manifest_create_event_triggers(void){ if( manifest_event_triggers_are_enabled ){ return; /* Triggers already exist. No-op. */ } alert_create_trigger(); | | | 2111 2112 2113 2114 2115 2116 2117 2118 2119 2120 2121 2122 2123 2124 2125 | ** Activate EVENT triggers if they do not already exist. */ void manifest_create_event_triggers(void){ if( manifest_event_triggers_are_enabled ){ return; /* Triggers already exist. No-op. */ } alert_create_trigger(); manifest_event_triggers_are_enabled = 1; } /* ** Disable manifest event triggers. Drop them if they exist, but mark ** them as having been created so that they won't be recreated. This ** is used during "rebuild" to prevent triggers from firing then. */
︙ | ︙ |
Changes to src/markdown.c.
︙ | ︙ | |||
62 63 64 65 66 67 68 | void (*paragraph)(struct Blob *ob, struct Blob *text, void *opaque); void (*table)(struct Blob *ob, struct Blob *head_row, struct Blob *rows, void *opaque); void (*table_cell)(struct Blob *ob, struct Blob *text, int flags, void *opaque); void (*table_row)(struct Blob *ob, struct Blob *cells, int flags, void *opaque); | | | 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 | void (*paragraph)(struct Blob *ob, struct Blob *text, void *opaque); void (*table)(struct Blob *ob, struct Blob *head_row, struct Blob *rows, void *opaque); void (*table_cell)(struct Blob *ob, struct Blob *text, int flags, void *opaque); void (*table_row)(struct Blob *ob, struct Blob *cells, int flags, void *opaque); void (*footnote_item)(struct Blob *ob, const struct Blob *text, int index, int nUsed, void *opaque); /* span level callbacks - NULL or return 0 prints the span verbatim */ int (*autolink)(struct Blob *ob, struct Blob *link, enum mkd_autolink type, void *opaque); int (*codespan)(struct Blob *ob, struct Blob *text, int nSep, void *opaque); int (*double_emphasis)(struct Blob *ob, struct Blob *text, |
︙ | ︙ | |||
380 381 382 383 384 385 386 | /* release the given working buffer back to the cache */ static void release_work_buffer(struct render *rndr, struct Blob *buf){ if( !buf ) return; rndr->iDepth--; blob_reset(buf); | < | | 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 | /* release the given working buffer back to the cache */ static void release_work_buffer(struct render *rndr, struct Blob *buf){ if( !buf ) return; rndr->iDepth--; blob_reset(buf); if( rndr->nBlobCache < (int)(sizeof(rndr->aBlobCache)/sizeof(rndr->aBlobCache[0])) ){ rndr->aBlobCache[rndr->nBlobCache++] = buf; }else{ fossil_free(buf); } } |
︙ | ︙ | |||
1616 1617 1618 1619 1620 1621 1622 | /* parse_blockquote -- handles parsing of a blockquote fragment */ static size_t parse_blockquote( struct Blob *ob, struct render *rndr, char *data, size_t size ){ | | < < < < < < < < < < < < < < < < < < < < < < < < < | | 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 1626 1627 1628 1629 1630 1631 1632 1633 1634 1635 1636 1637 1638 1639 1640 1641 | /* parse_blockquote -- handles parsing of a blockquote fragment */ static size_t parse_blockquote( struct Blob *ob, struct render *rndr, char *data, size_t size ){ size_t beg, end = 0, pre, work_size = 0; char *work_data = 0; struct Blob *out = new_work_buffer(rndr); beg = 0; while( beg<size ){ for(end=beg+1; end<size && data[end-1]!='\n'; end++); pre = prefix_quote(data+beg, end-beg); if( pre ){ beg += pre; /* skipping prefix */ }else if( is_empty(data+beg, end-beg) && (end>=size || (prefix_quote(data+end, size-end)==0 && !is_empty(data+end, size-end))) ){ /* empty line followed by non-quote line */ break; } if( beg<end ){ /* copy into the in-place working buffer */ if( !work_data ){ |
︙ | ︙ | |||
1707 1708 1709 1710 1711 1712 1713 | ** "end" is left with a value such that data[end] is one byte ** past the first '\n' or one byte past the end of the string */ if( is_empty(data+i, size-i) || (level = is_headerline(data+i, size-i))!= 0 ){ break; } | | < < < < | 1681 1682 1683 1684 1685 1686 1687 1688 1689 1690 1691 1692 1693 1694 1695 | ** "end" is left with a value such that data[end] is one byte ** past the first '\n' or one byte past the end of the string */ if( is_empty(data+i, size-i) || (level = is_headerline(data+i, size-i))!= 0 ){ break; } if( (i && data[i]=='#') || is_hrule(data+i, size-i) ){ end = i; break; } i = end; } work_size = i; |
︙ | ︙ | |||
2369 2370 2371 2372 2373 2374 2375 | beg += parse_blockcode(ob, rndr, txt_data, end); }else if( prefix_uli(txt_data, end) ){ beg += parse_list(ob, rndr, txt_data, end, 0); }else if( prefix_oli(txt_data, end) ){ beg += parse_list(ob, rndr, txt_data, end, MKD_LIST_ORDERED); }else if( has_table && is_tableline(txt_data, end) ){ beg += parse_table(ob, rndr, txt_data, end); | | | 2339 2340 2341 2342 2343 2344 2345 2346 2347 2348 2349 2350 2351 2352 2353 | beg += parse_blockcode(ob, rndr, txt_data, end); }else if( prefix_uli(txt_data, end) ){ beg += parse_list(ob, rndr, txt_data, end, 0); }else if( prefix_oli(txt_data, end) ){ beg += parse_list(ob, rndr, txt_data, end, MKD_LIST_ORDERED); }else if( has_table && is_tableline(txt_data, end) ){ beg += parse_table(ob, rndr, txt_data, end); }else if( prefix_fencedcode(txt_data, end) && (i = char_codespan(ob, rndr, txt_data, 0, end))!=0 ){ beg += i; }else{ beg += parse_paragraph(ob, rndr, txt_data, end); } } |
︙ | ︙ |
Changes to src/markdown_html.c.
︙ | ︙ | |||
596 597 598 599 600 601 602 | }else{ html_escape(ob, blob_buffer(link), blob_size(link)); } blob_append_literal(ob, "</a>"); return 1; } | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | < | 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 | }else{ html_escape(ob, blob_buffer(link), blob_size(link)); } blob_append_literal(ob, "</a>"); return 1; } /* ** The nSrc bytes at zSrc[] are Pikchr input text (allegedly). Process that ** text and insert the result in place of the original. */ void pikchr_to_html( Blob *ob, /* Write the generated SVG here */ const char *zSrc, int nSrc, /* The Pikchr source text */ const char *zArg, int nArg /* Addition arguments */ ){ int pikFlags = PIKCHR_PROCESS_NONCE | PIKCHR_PROCESS_DIV | PIKCHR_PROCESS_SRC | PIKCHR_PROCESS_ERR_PRE; Blob bSrc = empty_blob; const char *zPikVar; double rPikVar; while( nArg>0 ){ int i; for(i=0; i<nArg && !fossil_isspace(zArg[i]); i++){} |
︙ | ︙ | |||
810 811 812 813 814 815 816 | ){ char *zLink = blob_buffer(link); char *zTitle = title!=0 && blob_size(title)>0 ? blob_str(title) : 0; char zClose[20]; if( zLink==0 || zLink[0]==0 ){ zClose[0] = 0; | | | | 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 | ){ char *zLink = blob_buffer(link); char *zTitle = title!=0 && blob_size(title)>0 ? blob_str(title) : 0; char zClose[20]; if( zLink==0 || zLink[0]==0 ){ zClose[0] = 0; }else{ static const int flags = WIKI_NOBADLINKS | WIKI_MARKDOWNLINKS ; wiki_resolve_hyperlink(ob, flags, zLink, zClose, sizeof(zClose), 0, zTitle); } if( blob_size(content)==0 ){ if( link ) blob_appendb(ob, link); |
︙ | ︙ |
Changes to src/merge.c.
︙ | ︙ | |||
134 135 136 137 138 139 140 | /* ** Add an entry to the FV table for all files renamed between ** version N and the version specified by vid. */ static void add_renames( const char *zFnCol, /* The FV column for the filename in vid */ int vid, /* The desired version's- RID */ | | | 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 | /* ** Add an entry to the FV table for all files renamed between ** version N and the version specified by vid. */ static void add_renames( const char *zFnCol, /* The FV column for the filename in vid */ int vid, /* The desired version's- RID */ int nid, /* The check-in rid for the name pivot */ int revOK, /* OK to move backwards (child->parent) if true */ const char *zDebug /* Generate trace output if not NULL */ ){ int nChng; /* Number of file name changes */ int *aChng; /* An array of file name changes */ int i; /* Loop counter */ find_filename_changes(nid, vid, revOK, &nChng, &aChng, zDebug); |
︙ | ︙ | |||
266 267 268 269 270 271 272 | */ void test_show_vfile_cmd(void){ if( g.argc!=2 ){ fossil_fatal("unknown arguments to the %s command\n", g.argv[1]); } verify_all_options(); db_must_be_within_tree(); | | | 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 | */ void test_show_vfile_cmd(void){ if( g.argc!=2 ){ fossil_fatal("unknown arguments to the %s command\n", g.argv[1]); } verify_all_options(); db_must_be_within_tree(); debug_show_vfile(); } /* ** COMMAND: merge ** COMMAND: cherry-pick ** |
︙ | ︙ | |||
373 374 375 376 377 378 379 | /* Undocumented --debug and --show-vfile options: ** ** When included on the command-line, --debug causes lots of state ** information to be displayed. This option is undocumented as it ** might change or be eliminated in future releases. ** | | | 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 | /* Undocumented --debug and --show-vfile options: ** ** When included on the command-line, --debug causes lots of state ** information to be displayed. This option is undocumented as it ** might change or be eliminated in future releases. ** ** The --show-vfile flag does a dump of the VFILE table for reference. ** ** Hints: ** * Combine --debug and --verbose for still more output. ** * The --dry-run option is also useful in combination with --debug. */ debugFlag = find_option("debug",0,0)!=0; if( debugFlag && verboseFlag ) debugFlag = 2; |
︙ | ︙ |
Changes to src/merge3.c.
︙ | ︙ | |||
209 210 211 212 213 214 215 | int limit1, limit2; /* Sizes of aC1[] and aC2[] */ int nConflict = 0; /* Number of merge conflicts seen so far */ int useCrLf = 0; int ln1, ln2, lnPivot; /* Line numbers for all files */ DiffConfig DCfg; blob_zero(pOut); /* Merge results stored in pOut */ | | | 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 | int limit1, limit2; /* Sizes of aC1[] and aC2[] */ int nConflict = 0; /* Number of merge conflicts seen so far */ int useCrLf = 0; int ln1, ln2, lnPivot; /* Line numbers for all files */ DiffConfig DCfg; blob_zero(pOut); /* Merge results stored in pOut */ /* If both pV1 and pV2 start with a UTF-8 byte-order-mark (BOM), ** keep it in the output. This should be secure enough not to cause ** unintended changes to the merged file and consistent with what ** users are using in their source files. */ if( starts_with_utf8_bom(pV1, 0) && starts_with_utf8_bom(pV2, 0) ){ blob_append(pOut, (char*)get_utf8_bom(0), -1); |
︙ | ︙ |
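The merge3.c hunk above preserves a UTF-8 byte-order mark only when both merge inputs begin with one. A minimal stand-alone sketch of that three-byte BOM test (a hypothetical helper for illustration, not Fossil's actual starts_with_utf8_bom()):

```c
#include <assert.h>
#include <stddef.h>

/* Return 1 if the first three bytes of z are the UTF-8 BOM (EF BB BF). */
static int has_utf8_bom(const unsigned char *z, size_t n){
  return n>=3 && z[0]==0xEF && z[1]==0xBB && z[2]==0xBF;
}
```

Emitting the BOM only when both inputs carry it avoids injecting bytes that neither side of the merge had.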
Changes to src/name.c.
︙ | ︙ | |||
208 209 210 211 212 213 214 | ** Find the RID of the most recent object with symbolic tag zTag ** and having a type that matches zType. ** ** Return 0 if there are no matches. ** ** This is a tricky query to do efficiently. ** If the tag is very common (ex: "trunk") then | | | 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 | ** Find the RID of the most recent object with symbolic tag zTag ** and having a type that matches zType. ** ** Return 0 if there are no matches. ** ** This is a tricky query to do efficiently. ** If the tag is very common (ex: "trunk") then ** we want to use the query identified below as Q1 - which searches ** the most recent EVENT table entries for the most recent one with the tag. ** But if the tag is relatively scarce (anything other than "trunk", basically) ** then we want to do the indexed search shown below as Q2. */ static int most_recent_event_with_tag(const char *zTag, const char *zType){ return db_int(0, "SELECT objid FROM ("
︙ | ︙ | |||
511 512 513 514 515 516 517 | return start_of_branch(rid, 0); } /* start:BR -> The first check-in on branch named BR */ if( strncmp(zTag, "start:", 6)==0 ){ rid = symbolic_name_to_rid(zTag+6, zType); return start_of_branch(rid, 1); | | | | | 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 | return start_of_branch(rid, 0); } /* start:BR -> The first check-in on branch named BR */ if( strncmp(zTag, "start:", 6)==0 ){ rid = symbolic_name_to_rid(zTag+6, zType); return start_of_branch(rid, 1); } /* merge-in:BR -> Most recent merge-in for the branch named BR */ if( strncmp(zTag, "merge-in:", 9)==0 ){ rid = symbolic_name_to_rid(zTag+9, zType); return start_of_branch(rid, 2); } /* symbolic-name ":" date-time */ nTag = strlen(zTag); for(i=0; i<nTag-8 && zTag[i]!=':'; i++){} if( zTag[i]==':' && (fossil_isdate(&zTag[i+1]) || fossil_expand_datetime(&zTag[i+1],0)!=0) ){ char *zDate = mprintf("%s", &zTag[i+1]); char *zTagBase = mprintf("%.*s", i, zTag); char *zXDate; int nDate = strlen(zDate); if( sqlite3_strnicmp(&zDate[nDate-3],"utc",3)==0 ){ |
︙ | ︙ | |||
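The name.c hunk above resolves names of the form "NAME:DATETIME" by scanning for a colon that leaves at least eight characters (the shortest plausible date) after it. A simplified sketch of that split, with Fossil's fossil_isdate()/fossil_expand_datetime() validation of the trailing part omitted:

```c
#include <assert.h>
#include <string.h>

/* Find the colon separating a symbolic name from a trailing date.
** Returns 1 and stores the colon's index when a candidate is found.
** Date validation is left out of this sketch. */
static int split_name_date(const char *zTag, int *pColon){
  int nTag = (int)strlen(zTag);
  int i;
  for(i=0; i<nTag-8 && zTag[i]!=':'; i++){}
  if( i<nTag && zTag[i]==':' ){
    *pColon = i;
    return 1;
  }
  return 0;
}
```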
818 819 820 821 822 823 824 | } return rid; } int name_to_rid(const char *zName){ return name_to_typed_rid(zName, "*"); } | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | 818 819 820 821 822 823 824 825 826 827 828 829 830 831 | } return rid; } int name_to_rid(const char *zName){ return name_to_typed_rid(zName, "*"); } /* ** WEBPAGE: ambiguous ** URL: /ambiguous?name=NAME&src=WEBPAGE ** ** The NAME given by the name parameter is ambiguous. Display a page ** that shows all possible choices and let the user select between them. ** |
︙ | ︙ | |||
1116 1117 1118 1119 1120 1121 1122 | " coalesce(euser,user), coalesce(ecomment,comment)" " FROM mlink, filename, blob, event" " WHERE mlink.fid=%d" " AND filename.fnid=mlink.fnid" " AND event.objid=mlink.mid" " AND blob.rid=mlink.mid" " ORDER BY event.mtime %s /*sort*/", | | | 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 | " coalesce(euser,user), coalesce(ecomment,comment)" " FROM mlink, filename, blob, event" " WHERE mlink.fid=%d" " AND filename.fnid=mlink.fnid" " AND event.objid=mlink.mid" " AND blob.rid=mlink.mid" " ORDER BY event.mtime %s /*sort*/", rid, (flags & WHATIS_BRIEF) ? "LIMIT 1" : "DESC"); while( db_step(&q)==SQLITE_ROW ){ if( flags & WHATIS_BRIEF ){ fossil_print("mtime: %s\n", db_column_text(&q,2)); } fossil_print("file: %s\n", db_column_text(&q,0)); fossil_print(" part of [%S] by %s on %s\n", |
︙ | ︙ | |||
1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 | */ void whatis_artifact( const char *zName, /* Symbolic name or full hash */ const char *zFileName,/* Optional: original filename (in file mode) */ const char *zType, /* Artifact type filter */ int mFlags /* WHATIS_* flags */ ){ int rid = symbolic_name_to_rid(zName, zType); if( rid<0 ){ Stmt q; int cnt = 0; if( mFlags & WHATIS_REPO ){ fossil_print("\nrepository: %s\n", g.zRepositoryName); } | > > > > > < < < | < < < < < < | | 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 | */ void whatis_artifact( const char *zName, /* Symbolic name or full hash */ const char *zFileName,/* Optional: original filename (in file mode) */ const char *zType, /* Artifact type filter */ int mFlags /* WHATIS_* flags */ ){ const char* zNameTitle = "name:"; int rid = symbolic_name_to_rid(zName, zType); if( zFileName ){ fossil_print("%-12s%s\n", zNameTitle, zFileName); zNameTitle = "hash:"; } if( rid<0 ){ Stmt q; int cnt = 0; if( mFlags & WHATIS_REPO ){ fossil_print("\nrepository: %s\n", g.zRepositoryName); } fossil_print("%-12s%s (ambiguous)\n", zNameTitle, zName); db_prepare(&q, "SELECT rid FROM blob WHERE uuid>=lower(%Q) AND uuid<(lower(%Q)||'z')", zName, zName ); while( db_step(&q)==SQLITE_ROW ){ if( cnt++ ) fossil_print("%12s---- meaning #%d ----\n", " ", cnt); whatis_rid(db_column_int(&q, 0), mFlags); } db_finalize(&q); }else if( rid==0 ){ if( (mFlags & WHATIS_OMIT_UNK)==0 ){ /* 0123456789 12 */ fossil_print("unknown: %s\n", zName); } }else{ if( mFlags & WHATIS_REPO ){ fossil_print("\nrepository: %s\n", g.zRepositoryName); } fossil_print("%-12s%s\n", zNameTitle, zName); whatis_rid(rid, mFlags); } } /* ** COMMAND: whatis* ** |
︙ | ︙ |
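The whatis code above lists every artifact matching an ambiguous prefix with a single range scan: uuid >= prefix AND uuid < prefix||'z'. Because Fossil hashes are lowercase hex and 'z' sorts after every hex digit, all hashes beginning with the prefix fall inside that half-open range. The same comparison sketched in C (a hypothetical helper; it simply refuses prefixes longer than its local buffer):

```c
#include <assert.h>
#include <string.h>

/* Return 1 if zUuid lies in [zPrefix, zPrefix||'z'), i.e. it starts
** with zPrefix under ordinary byte-wise string ordering. */
static int in_prefix_range(const char *zUuid, const char *zPrefix){
  char zHi[80];
  size_t n = strlen(zPrefix);
  if( n+2>sizeof(zHi) ) return 0;   /* sketch only: prefix too long */
  memcpy(zHi, zPrefix, n);
  zHi[n] = 'z';
  zHi[n+1] = 0;
  return strcmp(zUuid, zPrefix)>=0 && strcmp(zUuid, zHi)<0;
}
```

The SQL form has the advantage that both bounds are sargable, so the scan can use the index on blob.uuid.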
Changes to src/patch.c.
︙ | ︙ | |||
43 44 45 46 47 48 49 | /* ** Flags passed from the main patch_cmd() routine into subfunctions used ** to implement the various subcommands. */ #define PATCH_DRYRUN 0x0001 #define PATCH_VERBOSE 0x0002 #define PATCH_FORCE 0x0004 | < | 43 44 45 46 47 48 49 50 51 52 53 54 55 56 | /* ** Flags passed from the main patch_cmd() routine into subfunctions used ** to implement the various subcommands. */ #define PATCH_DRYRUN 0x0001 #define PATCH_VERBOSE 0x0002 #define PATCH_FORCE 0x0004 /* ** Implementation of the "readfile(X)" SQL function. The entire content ** of the check-out file named X is read and returned as a BLOB. */ static void readfileFunc( sqlite3_context *context, |
︙ | ︙ | |||
70 71 72 73 74 75 76 | } /* ** mkdelta(X,Y) ** ** X is a numeric artifact id. Y is a filename. ** | | | 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 | } /* ** mkdelta(X,Y) ** ** X is a numeric artifact id. Y is a filename. ** ** Compute a compressed delta that carries X into Y. Or return ** a zero-length blob if X is equal to Y. */ static void mkdeltaFunc( sqlite3_context *context, int argc, sqlite3_value **argv ){
︙ | ︙ | |||
131 132 133 134 135 136 137 | SQLITE_TRANSIENT); blob_reset(&x); } /* ** Generate a binary patch file and store it into the file | | < < | 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 | SQLITE_TRANSIENT); blob_reset(&x); } /* ** Generate a binary patch file and store it into the file ** named zOut. */ void patch_create(unsigned mFlags, const char *zOut, FILE *out){ int vid; char *z; if( zOut && file_isdir(zOut, ExtFILE)!=0 ){ if( mFlags & PATCH_FORCE ){ |
︙ | ︙ | |||
163 164 165 166 167 168 169 | "PRAGMA patch.page_size=512;\n" "CREATE TABLE patch.chng(\n" " pathname TEXT,\n" /* Filename */ " origname TEXT,\n" /* Name before rename. NULL if not renamed */ " hash TEXT,\n" /* Baseline hash. NULL for new files. */ " isexe BOOL,\n" /* True if executable */ " islink BOOL,\n" /* True if is a symbolic link */ | | | 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 | "PRAGMA patch.page_size=512;\n" "CREATE TABLE patch.chng(\n" " pathname TEXT,\n" /* Filename */ " origname TEXT,\n" /* Name before rename. NULL if not renamed */ " hash TEXT,\n" /* Baseline hash. NULL for new files. */ " isexe BOOL,\n" /* True if executable */ " islink BOOL,\n" /* True if is a symbolic link */ " delta BLOB\n" /* compressed delta. NULL if deleted. ** length 0 if unchanged */ ");" "CREATE TABLE patch.cfg(\n" " key TEXT,\n" " value ANY\n" ");" ); |
︙ | ︙ | |||
197 198 199 200 201 202 203 | ";", vid, g.zLocalRoot, g.zRepositoryName, g.zLogin); z = fossil_hostname(); if( z ){ db_multi_exec( "INSERT INTO patch.cfg(key,value)VALUES('hostname',%Q)", z); fossil_free(z); } | | | 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 | ";", vid, g.zLocalRoot, g.zRepositoryName, g.zLogin); z = fossil_hostname(); if( z ){ db_multi_exec( "INSERT INTO patch.cfg(key,value)VALUES('hostname',%Q)", z); fossil_free(z); } /* New files */ db_multi_exec( "INSERT INTO patch.chng(pathname,hash,isexe,islink,delta)" " SELECT pathname, NULL, isexe, islink," " compress(read_co_file(%Q||pathname))" " FROM vfile WHERE rid==0;", g.zLocalRoot |
︙ | ︙ | |||
249 250 251 252 253 254 255 | if( pData==0 ){ fossil_fatal("out of memory"); } #ifdef _WIN32 fflush(out); _setmode(_fileno(out), _O_BINARY); #endif | | < | > < | < < < < < | 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 | if( pData==0 ){ fossil_fatal("out of memory"); } #ifdef _WIN32 fflush(out); _setmode(_fileno(out), _O_BINARY); #endif fwrite(pData, sz, 1, out); sqlite3_free(pData); fflush(out); } } /* ** Attempt to load and validate a patchfile identified by the first ** argument. */ void patch_attach(const char *zIn, FILE *in){ Stmt q; if( g.db==0 ){ sqlite3_open(":memory:", &g.db); } if( zIn==0 ){ Blob buf; int rc; int sz; unsigned char *pData; blob_init(&buf, 0, 0); #ifdef _WIN32 _setmode(_fileno(in), _O_BINARY); #endif sz = blob_read_from_channel(&buf, in, -1); pData = (unsigned char*)blob_buffer(&buf); db_multi_exec("ATTACH ':memory:' AS patch"); if( g.fSqlTrace ){ fossil_trace("-- deserialize(\"patch\", pData, %lld);\n", sz); } rc = sqlite3_deserialize(g.db, "patch", pData, sz, sz, 0); if( rc ){ fossil_fatal("cannot open patch database: %s", sqlite3_errmsg(g.db)); |
︙ | ︙ | |||
308 309 310 311 312 313 314 | } /* ** Show a summary of the content of a patch on standard output */ void patch_view(unsigned mFlags){ Stmt q; | | | | 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 | } /* ** Show a summary of the content of a patch on standard output */ void patch_view(unsigned mFlags){ Stmt q; db_prepare(&q, "WITH nmap(nkey,nm) AS (VALUES" "('baseline','BASELINE')," "('project-name','PROJECT-NAME'))" "SELECT nm, value FROM nmap, patch.cfg WHERE nkey=key;" ); while( db_step(&q)==SQLITE_ROW ){ fossil_print("%-12s %s\n", db_column_text(&q,0), db_column_text(&q,1)); } db_finalize(&q); if( mFlags & PATCH_VERBOSE ){ db_prepare(&q, "WITH nmap(nkey,nm,isDate) AS (VALUES" "('project-code','PROJECT-CODE',0)," "('date','TIMESTAMP',1)," "('user','USER',0)," "('hostname','HOSTNAME',0)," "('ckout','CHECKOUT',0)," "('repo','REPOSITORY',0))" |
︙ | ︙ | |||
440 441 442 443 444 445 446 | blob_append_escaped_arg(&cmd, g.nameOfExe, 1); if( strcmp(zType,"merge")==0 ){ blob_appendf(&cmd, " merge %s\n", db_column_text(&q,1)); }else{ blob_appendf(&cmd, " merge --%s %s\n", zType, db_column_text(&q,1)); } if( mFlags & PATCH_VERBOSE ){ | | | 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 | blob_append_escaped_arg(&cmd, g.nameOfExe, 1); if( strcmp(zType,"merge")==0 ){ blob_appendf(&cmd, " merge %s\n", db_column_text(&q,1)); }else{ blob_appendf(&cmd, " merge --%s %s\n", zType, db_column_text(&q,1)); } if( mFlags & PATCH_VERBOSE ){ fossil_print("%-10s %s\n", db_column_text(&q,2), db_column_text(&q,0)); } } db_finalize(&q); if( mFlags & PATCH_DRYRUN ){ fossil_print("%s", blob_str(&cmd)); }else{ |
︙ | ︙ | |||
568 569 570 571 572 573 574 | }else{ blob_append_escaped_arg(&cmd, g.nameOfExe, 1); blob_appendf(&cmd, " add %$\n", zPathname); if( mFlags & PATCH_VERBOSE ){ fossil_print("%-10s %s\n", "NEW", zPathname); } } | | | 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 | }else{ blob_append_escaped_arg(&cmd, g.nameOfExe, 1); blob_appendf(&cmd, " add %$\n", zPathname); if( mFlags & PATCH_VERBOSE ){ fossil_print("%-10s %s\n", "NEW", zPathname); } } if( (mFlags & PATCH_DRYRUN)==0 ){ if( isLink ){ symlink_create(blob_str(&data), zPathname); }else{ blob_write_to_file(&data, zPathname); } file_setexe(zPathname, isExe); blob_reset(&data); |
︙ | ︙ | |||
671 672 673 674 675 676 677 | static FILE *patch_remote_command( unsigned mFlags, /* flags */ const char *zThisCmd, /* "push" or "pull" */ const char *zRemoteCmd, /* "apply" or "create" */ const char *zFossilCmd, /* Name of "fossil" on remote system */ const char *zRW /* "w" or "r" */ ){ | | | | < < < > < | < < < | < < < < < < < < < < < < < < < < < < < < | 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 | static FILE *patch_remote_command( unsigned mFlags, /* flags */ const char *zThisCmd, /* "push" or "pull" */ const char *zRemoteCmd, /* "apply" or "create" */ const char *zFossilCmd, /* Name of "fossil" on remote system */ const char *zRW /* "w" or "r" */ ){ char *zRemote; char *zDir; Blob cmd; FILE *f; Blob flgs; char *zForce; blob_init(&flgs, 0, 0); if( mFlags & PATCH_FORCE ) blob_appendf(&flgs, " -f"); if( mFlags & PATCH_VERBOSE ) blob_appendf(&flgs, " -v"); if( mFlags & PATCH_DRYRUN ) blob_appendf(&flgs, " -n"); zForce = blob_size(&flgs)>0 ? blob_str(&flgs) : ""; if( g.argc!=4 ){ usage(mprintf("%s [USER@]HOST:DIRECTORY", zThisCmd)); } zRemote = fossil_strdup(g.argv[3]); zDir = (char*)file_skip_userhost(zRemote); if( zDir==0 ){ zDir = zRemote; blob_init(&cmd, 0, 0); blob_append_escaped_arg(&cmd, g.nameOfExe, 1); blob_appendf(&cmd, " patch %s%s %$ -", zRemoteCmd, zForce, zDir); }else{ Blob remote; *(char*)(zDir-1) = 0; transport_ssh_command(&cmd); blob_appendf(&cmd, " -T"); blob_append_escaped_arg(&cmd, zRemote, 0); blob_init(&remote, 0, 0); if( zFossilCmd==0 ){ blob_append_escaped_arg(&cmd, "PATH=$HOME/bin:$PATH", 0); zFossilCmd = "fossil"; } blob_appendf(&remote, "%$ patch %s%s --dir64 %z -", zFossilCmd, zRemoteCmd, zForce, encode64(zDir, -1)); blob_append_escaped_arg(&cmd, blob_str(&remote), 0); blob_reset(&remote); } fossil_print("%s\n", blob_str(&cmd)); fflush(stdout); f = popen(blob_str(&cmd), zRW); if( f==0 ){ fossil_fatal("cannot run command: %s", blob_str(&cmd)); } blob_reset(&cmd); blob_reset(&flgs); return f; } /* ** Show a diff for the patch currently loaded into database "patch". */ static void patch_diff( unsigned mFlags, /* Patch flags. only -f is allowed */ DiffConfig *pCfg /* Diff options */ ){
︙ | ︙ | |||
810 811 812 813 814 815 816 | " FROM patch.chng" " ORDER BY pathname" ); while( db_step(&q)==SQLITE_ROW ){ int rid; const char *zName; Blob a, b; | | | 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 | " FROM patch.chng" " ORDER BY pathname" ); while( db_step(&q)==SQLITE_ROW ){ int rid; const char *zName; Blob a, b; if( db_column_type(&q,0)!=SQLITE_INTEGER && db_column_type(&q,4)==SQLITE_TEXT ){ char *zUuid = fossil_strdup(db_column_text(&q,4)); char *zName = fossil_strdup(db_column_text(&q,1)); if( mFlags & PATCH_FORCE ){ fossil_print("ERROR cannot find base artifact %S for file \"%s\"\n", |
︙ | ︙ | |||
832 833 834 835 836 837 838 | fossil_fatal("base artifact %S for file \"%s\" not found", zUuid, zName); } } zName = db_column_text(&q, 1); rid = db_column_int(&q, 0); | < < < | 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 | fossil_fatal("base artifact %S for file \"%s\" not found", zUuid, zName); } } zName = db_column_text(&q, 1); rid = db_column_int(&q, 0); if( db_column_type(&q,3)==SQLITE_NULL ){ if( !bWebpage ) fossil_print("DELETE %s\n", zName); diff_print_index(zName, pCfg, 0); content_get(rid, &a); diff_file_mem(&a, &empty, zName, pCfg); }else if( rid==0 ){ db_ephemeral_blob(&q, 3, &a); blob_uncompress(&a, &a); if( !bWebpage ) fossil_print("ADDED %s\n", zName); diff_print_index(zName, pCfg, 0); diff_file_mem(&empty, &a, zName, pCfg); blob_reset(&a); }else if( db_column_bytes(&q, 3)>0 ){ Blob delta; db_ephemeral_blob(&q, 3, &delta); blob_uncompress(&delta, &delta); |
︙ | ︙ | |||
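In the patch_diff() hunks above, the single chng.delta column encodes four distinct cases: a NULL delta means the file was deleted, a zero baseline rid means it was added, an empty blob means it is unchanged, and anything else is a compressed delta to apply to the baseline. A sketch of that dispatch, using a hypothetical enum rather than Fossil's inline if/else chain:

```c
#include <assert.h>

/* Hypothetical labels for the four states of patch.chng.delta. */
typedef enum {
  CHNG_DELETED,    /* delta IS NULL */
  CHNG_ADDED,      /* no baseline artifact (rid==0) */
  CHNG_UNCHANGED,  /* zero-length delta */
  CHNG_DELTA       /* compressed delta to apply to the baseline */
} ChngKind;

static ChngKind chng_kind(int deltaIsNull, int rid, int nDeltaBytes){
  if( deltaIsNull )    return CHNG_DELETED;
  if( rid==0 )         return CHNG_ADDED;
  if( nDeltaBytes==0 ) return CHNG_UNCHANGED;
  return CHNG_DELTA;
}
```

Overloading one column this way keeps the patch database to a single small table while still distinguishing add, delete, edit, and no-op.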
932 933 934 935 936 937 938 | ** ** Command-line options: ** ** -f|--force Apply the patch even though there are unsaved ** changes in the current check-out. Unsaved ** changes will be reverted and then the patch is ** applied. | | | 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 | ** ** Command-line options: ** ** -f|--force Apply the patch even though there are unsaved ** changes in the current check-out. Unsaved ** changes will be reverted and then the patch is ** applied. ** --fossilcmd EXE Name of the "fossil" executable on the remote ** -n|--dry-run Do nothing, but print what would have happened ** -v|--verbose Extra output explaining what happens ** ** ** > fossil patch pull REMOTE-CHECKOUT ** ** Like "fossil patch push" except that the transfer is from remote |
︙ | ︙ | |||
967 968 969 970 971 972 973 | char *zIn; unsigned flags = 0; if( find_option("dry-run","n",0) ) flags |= PATCH_DRYRUN; if( find_option("verbose","v",0) ) flags |= PATCH_VERBOSE; if( find_option("force","f",0) ) flags |= PATCH_FORCE; zIn = patch_find_patch_filename("apply"); db_must_be_within_tree(); | | | 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 | char *zIn; unsigned flags = 0; if( find_option("dry-run","n",0) ) flags |= PATCH_DRYRUN; if( find_option("verbose","v",0) ) flags |= PATCH_VERBOSE; if( find_option("force","f",0) ) flags |= PATCH_FORCE; zIn = patch_find_patch_filename("apply"); db_must_be_within_tree(); patch_attach(zIn, stdin); patch_apply(flags); fossil_free(zIn); }else if( strncmp(zCmd, "create", n)==0 ){ char *zOut; unsigned flags = 0; if( find_option("force","f",0) ) flags |= PATCH_FORCE; |
︙ | ︙ | |||
996 997 998 999 1000 1001 1002 | return; } db_find_and_open_repository(0, 0); if( find_option("force","f",0) ) flags |= PATCH_FORCE; diff_options(&DCfg, zCmd[0]=='g', 0); verify_all_options(); zIn = patch_find_patch_filename("apply"); | | | < < < < < < < < < < | < < < < < < < < < | | 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 | return; } db_find_and_open_repository(0, 0); if( find_option("force","f",0) ) flags |= PATCH_FORCE; diff_options(&DCfg, zCmd[0]=='g', 0); verify_all_options(); zIn = patch_find_patch_filename("apply"); patch_attach(zIn, stdin); patch_diff(flags, &DCfg); fossil_free(zIn); }else if( strncmp(zCmd, "pull", n)==0 ){ FILE *pIn = 0; unsigned flags = 0; const char *zFossilCmd = find_option("fossilcmd",0,1); if( find_option("dry-run","n",0) ) flags |= PATCH_DRYRUN; if( find_option("verbose","v",0) ) flags |= PATCH_VERBOSE; if( find_option("force","f",0) ) flags |= PATCH_FORCE; db_must_be_within_tree(); verify_all_options(); pIn = patch_remote_command(flags & (~PATCH_FORCE), "pull", "create", zFossilCmd, "r"); if( pIn ){ patch_attach(0, pIn); pclose(pIn); patch_apply(flags); } }else if( strncmp(zCmd, "push", n)==0 ){ FILE *pOut = 0; unsigned flags = 0; const char *zFossilCmd = find_option("fossilcmd",0,1); if( find_option("dry-run","n",0) ) flags |= PATCH_DRYRUN; if( find_option("verbose","v",0) ) flags |= PATCH_VERBOSE; if( find_option("force","f",0) ) flags |= PATCH_FORCE; db_must_be_within_tree(); verify_all_options(); pOut = patch_remote_command(flags, "push", "apply", zFossilCmd, "w"); if( pOut ){ patch_create(0, 0, pOut); pclose(pOut); } }else if( strncmp(zCmd, "view", n)==0 ){ const char *zIn; unsigned int flags = 0; if( find_option("verbose","v",0) ) flags |= PATCH_VERBOSE; verify_all_options(); if( g.argc!=4 ){ usage("view FILENAME"); } zIn = g.argv[3]; if( fossil_strcmp(zIn, "-")==0 ) zIn = 0; patch_attach(zIn, stdin); patch_view(flags); }else { goto patch_usage; } }
Changes to src/pikchrshow.c.
︙ | ︙ | |||
20 21 22 23 24 25 26 | #include "config.h" #include <assert.h> #include <ctype.h> #include "pikchrshow.h" #if INTERFACE /* These are described in pikchr_process()'s docs. */ | < < < < | 20 21 22 23 24 25 26 27 28 29 30 31 32 33 | #include "config.h" #include <assert.h> #include <ctype.h> #include "pikchrshow.h" #if INTERFACE /* These are described in pikchr_process()'s docs. */ #define PIKCHR_PROCESS_PASSTHROUGH 0x0003 /* Pass through these flags */ #define PIKCHR_PROCESS_TH1 0x0004 #define PIKCHR_PROCESS_TH1_NOSVG 0x0008 #define PIKCHR_PROCESS_NONCE 0x0010 #define PIKCHR_PROCESS_ERR_PRE 0x0020 #define PIKCHR_PROCESS_SRC 0x0040 #define PIKCHR_PROCESS_DIV 0x0080 |
︙ | ︙ | |||
137 138 139 140 141 142 143 | ) & pikFlags){ pikFlags |= PIKCHR_PROCESS_DIV; } if(!(PIKCHR_PROCESS_TH1 & pikFlags) /* If any TH1_xxx flags are set, set TH1 */ && (PIKCHR_PROCESS_TH1_NOSVG & pikFlags || thFlags!=0)){ pikFlags |= PIKCHR_PROCESS_TH1; | | | 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 | ) & pikFlags){ pikFlags |= PIKCHR_PROCESS_DIV; } if(!(PIKCHR_PROCESS_TH1 & pikFlags) /* If any TH1_xxx flags are set, set TH1 */ && (PIKCHR_PROCESS_TH1_NOSVG & pikFlags || thFlags!=0)){ pikFlags |= PIKCHR_PROCESS_TH1; } if(zNonce){ blob_appendf(pOut, "%s\n", zNonce); } if(PIKCHR_PROCESS_TH1 & pikFlags){ Blob out = empty_blob; isErr = Th_RenderToBlob(zIn, &out, thFlags) ? 1 : 0; |
︙ | ︙ | |||
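The pikchrshow.c hunk above makes any TH1-specific flag (or a nonzero set of TH1 interpreter flags) imply PIKCHR_PROCESS_TH1 itself, so callers cannot request a TH1 sub-feature without TH1 processing. That rule in isolation, using the flag values from the #define block:

```c
#include <assert.h>

#define PIKCHR_PROCESS_TH1       0x0004
#define PIKCHR_PROCESS_TH1_NOSVG 0x0008

/* If any TH1_xxx flag is set, or TH1 interpreter flags were given,
** turn on TH1 processing itself. */
static unsigned imply_th1(unsigned pikFlags, unsigned thFlags){
  if( !(PIKCHR_PROCESS_TH1 & pikFlags)
   && ((PIKCHR_PROCESS_TH1_NOSVG & pikFlags) || thFlags!=0) ){
    pikFlags |= PIKCHR_PROCESS_TH1;
  }
  return pikFlags;
}
```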
548 549 550 551 552 553 554 | ** ** -div-source Set the 'source' CSS class on the div, which tells ** CSS to hide the SVG and reveal the source by default. ** ** -src Store the input pikchr's source code in the output as ** a separate element adjacent to the SVG one. Implied ** by -div-source. | | < < | 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 | ** ** -div-source Set the 'source' CSS class on the div, which tells ** CSS to hide the SVG and reveal the source by default. ** ** -src Store the input pikchr's source code in the output as ** a separate element adjacent to the SVG one. Implied ** by -div-source. ** ** ** -th Process the input using TH1 before passing it to pikchr ** ** -th-novar Disable $var and $<var> TH1 processing. Use this if the ** pikchr script uses '$' for its own purposes and that ** causes issues. This only affects parsing of '$' outside ** of TH1 script blocks. Code in such blocks is unaffected. ** ** -th-nosvg When using -th, output the post-TH1'd script ** instead of the pikchr-rendered output ** ** -th-trace Trace TH1 execution (for debugging purposes) ** ** ** The -div-indent/center/left/right flags may not be combined. ** ** TH1-related Notes and Caveats: ** ** If the -th flag is used, this command must open a fossil database ** for certain functionality to work (via a check-out or the -R REPO |
︙ | ︙ | |||
617 618 619 620 621 622 623 | } if(find_option("div-toggle",0,0)!=0){ pikFlags |= PIKCHR_PROCESS_DIV_TOGGLE; } if(find_option("div-source",0,0)!=0){ pikFlags |= PIKCHR_PROCESS_DIV_SOURCE | PIKCHR_PROCESS_SRC; } | < < < | 611 612 613 614 615 616 617 618 619 620 621 622 623 624 | } if(find_option("div-toggle",0,0)!=0){ pikFlags |= PIKCHR_PROCESS_DIV_TOGGLE; } if(find_option("div-source",0,0)!=0){ pikFlags |= PIKCHR_PROCESS_DIV_SOURCE | PIKCHR_PROCESS_SRC; } verify_all_options(); if(g.argc>4){ usage("?INFILE? ?OUTFILE?"); } if(g.argc>2){ zInfile = g.argv[2]; |
︙ | ︙ |
Changes to src/pqueue.c.
︙ | ︙ | |||
40 41 42 43 44 45 46 47 48 49 50 51 52 53 | ** Integers must be positive. */ struct PQueue { int cnt; /* Number of entries in the queue */ int sz; /* Number of slots in a[] */ struct QueueElement { int id; /* ID of the element */ double value; /* Value of element. Kept in ascending order */ } *a; }; #endif /* ** Initialize a PQueue structure | > | 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 | ** Integers must be positive. */ struct PQueue { int cnt; /* Number of entries in the queue */ int sz; /* Number of slots in a[] */ struct QueueElement { int id; /* ID of the element */ void *p; /* Content pointer */ double value; /* Value of element. Kept in ascending order */ } *a; }; #endif /* ** Initialize a PQueue structure |
︙ | ︙ | |||
71 72 73 74 75 76 77 | p->a = fossil_realloc(p->a, sizeof(p->a[0])*N); p->sz = N; } /* ** Insert element e into the queue. */ | | > | > > | 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 | p->a = fossil_realloc(p->a, sizeof(p->a[0])*N); p->sz = N; } /* ** Insert element e into the queue. */ void pqueuex_insert(PQueue *p, int e, double v, void *pData){ int i, j; if( p->cnt+1>p->sz ){ pqueuex_resize(p, p->cnt+5); } for(i=0; i<p->cnt; i++){ if( p->a[i].value>v ){ for(j=p->cnt; j>i; j--){ p->a[j] = p->a[j-1]; } break; } } p->a[i].id = e; p->a[i].p = pData; p->a[i].value = v; p->cnt++; } /* ** Extract the first element from the queue (the element with ** the smallest value) and return its ID. Return 0 if the queue ** is empty. */ int pqueuex_extract(PQueue *p, void **pp){ int e, i; if( p->cnt==0 ){ if( pp ) *pp = 0; return 0; } e = p->a[0].id; if( pp ) *pp = p->a[0].p; for(i=0; i<p->cnt-1; i++){ p->a[i] = p->a[i+1]; } p->cnt--; return e; } |
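The pqueue.c change above threads a content pointer through the sorted-array priority queue: pqueuex_insert() now stores a payload alongside each id, and pqueuex_extract() hands it back through an out-parameter. A fixed-capacity sketch of the same structure (the real code grows a[] with fossil_realloc(), and ids must be positive so that 0 can signal an empty queue):

```c
#include <assert.h>
#include <stddef.h>

/* Miniature version of Fossil's PQueue, including the new payload
** pointer.  Capacity is fixed at 16 for brevity. */
typedef struct { int id; void *p; double value; } QElem;
typedef struct { int cnt; QElem a[16]; } MiniQueue;

/* Insert id with priority v, keeping a[] sorted by ascending value. */
static void mq_insert(MiniQueue *q, int id, double v, void *pData){
  int i, j;
  if( q->cnt>=16 ) return;            /* sketch only: queue full */
  for(i=0; i<q->cnt && q->a[i].value<=v; i++){}
  for(j=q->cnt; j>i; j--) q->a[j] = q->a[j-1];
  q->a[i].id = id;
  q->a[i].p = pData;
  q->a[i].value = v;
  q->cnt++;
}

/* Return the id with the smallest value, or 0 if the queue is empty.
** The payload is returned through pp, matching pqueuex_extract(). */
static int mq_extract(MiniQueue *q, void **pp){
  int i, e;
  if( q->cnt==0 ){ if(pp) *pp = 0; return 0; }
  e = q->a[0].id;
  if( pp ) *pp = q->a[0].p;
  for(i=0; i<q->cnt-1; i++) q->a[i] = q->a[i+1];
  q->cnt--;
  return e;
}
```

A sorted array costs O(n) per insert, which is fine for the small queues Fossil uses it for; a binary heap would only pay off at much larger sizes.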
Changes to src/rebuild.c.
︙ | ︙ | |||
654 655 656 657 658 659 660 661 662 663 664 665 | ** executable in a way that changes the database schema. ** ** Options: ** --analyze Run ANALYZE on the database after rebuilding ** --cluster Compute clusters for unclustered artifacts ** --compress Strive to make the database as small as possible ** --compress-only Skip the rebuilding step. Do --compress only ** --force Force the rebuild to complete even if errors are seen ** --ifneeded Only do the rebuild if it would change the schema version ** --index Always add in the full-text search index ** --noverify Skip the verification of changes to the BLOB table ** --noindex Always omit the full-text search index | > | | 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 | ** executable in a way that changes the database schema. ** ** Options: ** --analyze Run ANALYZE on the database after rebuilding ** --cluster Compute clusters for unclustered artifacts ** --compress Strive to make the database as small as possible ** --compress-only Skip the rebuilding step. Do --compress only ** --deanalyze Remove ANALYZE tables from the database ** --force Force the rebuild to complete even if errors are seen ** --ifneeded Only do the rebuild if it would change the schema version ** --index Always add in the full-text search index ** --noverify Skip the verification of changes to the BLOB table ** --noindex Always omit the full-text search index ** --pagesize N Set the database pagesize to N. (512..65536 and power of 2) ** --quiet Only show output if there are errors ** --stats Show artifact statistics after rebuilding ** --vacuum Run VACUUM on the database after rebuilding ** --wal Set Write-Ahead-Log journalling mode on the database */ void rebuild_database(void){ int forceFlag; |
︙ | ︙ | |||
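The new --pagesize help text above says N must lie between 512 and 65536 and be a power of two (SQLite's legal page sizes). The classic bit trick for that validation, as a sketch:

```c
#include <assert.h>

/* A positive value is a power of two exactly when it has a single
** bit set, i.e. n & (n-1) clears to zero. */
static int valid_pagesize(int n){
  return n>=512 && n<=65536 && (n & (n-1))==0;
}
```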
688 689 690 691 692 693 694 | int optIfNeeded; int compressOnlyFlag; omitVerify = find_option("noverify",0,0)!=0; forceFlag = find_option("force","f",0)!=0; doClustering = find_option("cluster", 0, 0)!=0; runVacuum = find_option("vacuum",0,0)!=0; | | | 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 | int optIfNeeded; int compressOnlyFlag; omitVerify = find_option("noverify",0,0)!=0; forceFlag = find_option("force","f",0)!=0; doClustering = find_option("cluster", 0, 0)!=0; runVacuum = find_option("vacuum",0,0)!=0; runDeanalyze = find_option("deanalyze",0,0)!=0; runAnalyze = find_option("analyze",0,0)!=0; runCompress = find_option("compress",0,0)!=0; zPagesize = find_option("pagesize",0,1); showStats = find_option("stats",0,0)!=0; optIndex = find_option("index",0,0)!=0; optNoIndex = find_option("noindex",0,0)!=0; optIfNeeded = find_option("ifneeded",0,0)!=0; |
︙ | ︙ | |||
1393 1394 1395 1396 1397 1398 1399 | */ verify_cancel(); db_end_transaction(0); fossil_print("project-id: %s\n", db_get("project-code", 0)); fossil_print("server-id: %s\n", db_get("server-code", 0)); zPassword = db_text(0, "SELECT pw FROM user WHERE login=%Q", g.zLogin); | | < | 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 1404 1405 1406 1407 1408 | */ verify_cancel(); db_end_transaction(0); fossil_print("project-id: %s\n", db_get("project-code", 0)); fossil_print("server-id: %s\n", db_get("server-code", 0)); zPassword = db_text(0, "SELECT pw FROM user WHERE login=%Q", g.zLogin); fossil_print("admin-user: %s (initial password is \"%s\")\n", g.zLogin, zPassword); hash_user_password(g.zLogin); } /* ** COMMAND: deconstruct* ** ** Usage %fossil deconstruct ?OPTIONS? DESTINATION |
︙ | ︙ |
Changes to src/report.c.
︙ | ︙ | |||
1124 1125 1126 1127 1128 1129 1130 | char *zClrKey; char *zDesc; char *zMimetype; int tabs; Stmt q; char *zErr1 = 0; char *zErr2 = 0; | | | 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 | char *zClrKey; char *zDesc; char *zMimetype; int tabs; Stmt q; char *zErr1 = 0; char *zErr2 = 0; login_check_credentials(); if( !g.perm.RdTkt ){ login_needed(g.anon.RdTkt); return; } report_update_reportfmt_table(); rn = report_number(); tabs = P("tablist")!=0; db_prepare(&q, "SELECT title, sqlcode, owner, cols, rn, jx->>'desc', jx->>'descmt'" |
︙ | ︙ | |||
1366 1367 1368 1369 1370 1371 1372 | Stmt q; char *zSql; char *zErr1 = 0; char *zErr2 = 0; int count = 0; int rn; | | < | 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380 | Stmt q; char *zSql; char *zErr1 = 0; char *zErr2 = 0; int count = 0; int rn; if( !zRep || !strcmp(zRep,zFullTicketRptRn) || !strcmp(zRep,zFullTicketRptTitle) ){ zSql = "SELECT * FROM ticket"; }else{ rn = atoi(zRep); if( rn ){ db_prepare(&q, "SELECT sqlcode FROM reportfmt WHERE rn=%d", rn); }else{ |
︙ | ︙ |
Changes to src/rss.c.
︙ | ︙ | |||
141 142 143 144 145 146 147 | blob_append_sql( &bSQL, " ORDER BY event.mtime DESC" ); cgi_set_content_type("application/rss+xml"); zProjectName = db_get("project-name", 0); if( zProjectName==0 ){ | | | | 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 | blob_append_sql( &bSQL, " ORDER BY event.mtime DESC" ); cgi_set_content_type("application/rss+xml"); zProjectName = db_get("project-name", 0); if( zProjectName==0 ){ zFreeProjectName = zProjectName = mprintf("Fossil source repository for: %s", g.zBaseURL); } zProjectDescr = db_get("project-description", 0); if( zProjectDescr==0 ){ zProjectDescr = zProjectName; } zPubDate = cgi_rfc822_datestamp(time(NULL)); |
︙ | ︙ | |||
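The rss.c hunk above stamps the feed with `zPubDate = cgi_rfc822_datestamp(time(NULL))` for the channel's `<pubDate>` element. As a rough illustration of the RFC 822 date format that RSS expects — this sketch is not Fossil's internal `cgi_rfc822_datestamp()` helper, whose implementation is not shown here — a UTC timestamp can be formatted with `strftime`:

```c
#include <stddef.h>
#include <time.h>
#include <string.h>
#include <assert.h>

/* Format an RFC 822 datestamp in UTC, suitable for an RSS <pubDate>.
** Illustrative sketch only; not Fossil's cgi_rfc822_datestamp(). */
static void rfc822_datestamp(time_t t, char *zBuf, size_t nBuf){
  struct tm *pTm = gmtime(&t);
  /* "+0000" is hard-coded because gmtime() already converted to UTC. */
  strftime(zBuf, nBuf, "%a, %d %b %Y %H:%M:%S +0000", pTm);
}
```

Because the broken-down time comes from `gmtime()`, the zone is fixed at `+0000` rather than derived from the local timezone.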
256 257 258 259 260 261 262 | ** The default is "URL-PLACEHOLDER" (without quotes). */ void cmd_timeline_rss(void){ Stmt q; int nLine=0; char *zPubDate, *zProjectName, *zProjectDescr, *zFreeProjectName=0; Blob bSQL; | | | 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 | ** The default is "URL-PLACEHOLDER" (without quotes). */ void cmd_timeline_rss(void){ Stmt q; int nLine=0; char *zPubDate, *zProjectName, *zProjectDescr, *zFreeProjectName=0; Blob bSQL; const char *zType = find_option("type","y",1); /* Type of events. All if NULL */ const char *zTicketUuid = find_option("tkt",NULL,1); const char *zTag = find_option("tag",NULL,1); const char *zFilename = find_option("name",NULL,1); const char *zWiki = find_option("wiki",NULL,1); const char *zLimit = find_option("limit", "n",1); const char *zBaseURL = find_option("url", NULL, 1); int nLimit = atoi( (zLimit && *zLimit) ? zLimit : "20" ); |
︙ | ︙ | |||
330 331 332 333 334 335 336 | }else if( nTagId!=0 ){ blob_append_sql(&bSQL, " AND (EXISTS(SELECT 1 FROM tagxref" " WHERE tagid=%d AND tagtype>0 AND rid=blob.rid))", nTagId); } if( zFilename ){ blob_append_sql(&bSQL, | | < | | | < | 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 | }else if( nTagId!=0 ){ blob_append_sql(&bSQL, " AND (EXISTS(SELECT 1 FROM tagxref" " WHERE tagid=%d AND tagtype>0 AND rid=blob.rid))", nTagId); } if( zFilename ){ blob_append_sql(&bSQL, " AND (SELECT mlink.fnid FROM mlink WHERE event.objid=mlink.mid) IN (SELECT fnid FROM filename WHERE name=%Q %s)", zFilename, filename_collation() ); } blob_append( &bSQL, " ORDER BY event.mtime DESC", -1 ); zProjectName = db_get("project-name", 0); if( zProjectName==0 ){ zFreeProjectName = zProjectName = mprintf("Fossil source repository for: %s", zBaseURL); } zProjectDescr = db_get("project-description", 0); if( zProjectDescr==0 ){ zProjectDescr = zProjectName; } zPubDate = cgi_rfc822_datestamp(time(NULL)); fossil_print("<?xml version=\"1.0\"?>"); fossil_print("<rss xmlns:dc=\"http://purl.org/dc/elements/1.1/\" version=\"2.0\">"); fossil_print("<channel>\n"); fossil_print("<title>%h</title>\n", zProjectName); fossil_print("<link>%s</link>\n", zBaseURL); fossil_print("<description>%h</description>\n", zProjectDescr); fossil_print("<pubDate>%s</pubDate>\n", zPubDate); fossil_print("<generator>Fossil version %s %s</generator>\n", MANIFEST_VERSION, MANIFEST_DATE); |
︙ | ︙ |
Changes to src/search.c.
︙ | ︙ | |||
581 582 583 584 585 586 587 | ** option can be used to output all matches, regardless of their search ** score. The -limit option can be used to limit the number of entries ** returned. The -width option can be used to set the output width used ** when printing matches. ** ** Options: ** -a|--all Output all matches, not just best matches | < < < < | > > < < | > < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | | | | | | | | | | | | | | | | | | | | | | | | | | < | 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 | ** option can be used to output all matches, regardless of their search ** score. The -limit option can be used to limit the number of entries ** returned. The -width option can be used to set the output width used ** when printing matches. ** ** Options: ** -a|--all Output all matches, not just best matches ** -n|--limit N Limit output to N matches ** -W|--width WIDTH Set display width to WIDTH columns, 0 for ** unlimited. Defaults the terminal's width. */ void search_cmd(void){ Blob pattern; int i; Blob sql = empty_blob; Stmt q; int iBest; char fAll = NULL != find_option("all", "a", 0); /* If set, do not lop off the end of the results. */ const char *zLimit = find_option("limit","n",1); const char *zWidth = find_option("width","W",1); int nLimit = zLimit ? atoi(zLimit) : -1000; /* Max number of matching lines/entries to list */ int width; if( zWidth ){ width = atoi(zWidth); if( (width!=0) && (width<=20) ){ fossil_fatal("-W|--width value must be >20 or 0"); } }else{ width = -1; } db_find_and_open_repository(0, 0); if( g.argc<3 ) return; blob_init(&pattern, g.argv[2], -1); for(i=3; i<g.argc; i++){ blob_appendf(&pattern, " %s", g.argv[i]); } (void)search_init(blob_str(&pattern),"*","*","...",SRCHFLG_STATIC); blob_reset(&pattern); search_sql_setup(g.db); db_multi_exec( "CREATE TEMP TABLE srch(rid,uuid,date,comment,x);" "CREATE INDEX srch_idx1 ON srch(x);" "INSERT INTO srch(rid,uuid,date,comment,x)" " SELECT blob.rid, uuid, datetime(event.mtime,toLocal())," " coalesce(ecomment,comment)," " search_score()" " FROM event, blob" " WHERE blob.rid=event.objid" " AND search_match(coalesce(ecomment,comment));" ); iBest = db_int(0, "SELECT max(x) FROM srch"); blob_append(&sql, "SELECT rid, uuid, date, comment, 0, 0 FROM srch " "WHERE 1 ", -1); if(!fAll){ blob_append_sql(&sql,"AND x>%d ", iBest/3); } blob_append(&sql, "ORDER BY x DESC, date DESC ", -1); db_prepare(&q, "%s", blob_sql_text(&sql)); blob_reset(&sql); print_timeline(&q, nLimit, width, 0, 0); db_finalize(&q); } #if INTERFACE /* What to search for */ #define SRCH_CKIN 0x0001 /* Search over check-in comments */ #define SRCH_DOC 0x0002 /* Search over embedded documents */ #define SRCH_TKT 0x0004 /* Search over tickets */
︙ | ︙ | |||
782 783 784 785 786 787 788 | ** snip: A snippet for the match ** ** And the srchFlags parameter has been validated. This routine ** fills the X table with search results using a full-scan search. ** ** The companion indexed search routine is search_indexed(). */ | | | 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 | ** snip: A snippet for the match ** ** And the srchFlags parameter has been validated. This routine ** fills the X table with search results using a full-scan search. ** ** The companion indexed search routine is search_indexed(). */ static void search_fullscan( const char *zPattern, /* The query pattern */ unsigned int srchFlags /* What to search over */ ){ search_init(zPattern, "<mark>", "</mark>", " ... ", SRCHFLG_STATIC|SRCHFLG_HTML); if( (srchFlags & SRCH_DOC)!=0 ){ char *zDocGlob = db_get("doc-glob",""); |
︙ | ︙ | |||
990 991 992 993 994 995 996 | ** snip: A snippet for the match ** ** And the srchFlags parameter has been validated. This routine ** fills the X table with search results using FTS indexed search. ** ** The companion full-scan search routine is search_fullscan(). */ | | < < < < < | 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 | ** snip: A snippet for the match ** ** And the srchFlags parameter has been validated. This routine ** fills the X table with search results using FTS indexed search. ** ** The companion full-scan search routine is search_fullscan(). */ static void search_indexed( const char *zPattern, /* The query pattern */ unsigned int srchFlags /* What to search over */ ){ Blob sql; char *zPat = mprintf("%s",zPattern); int i; static const char *zSnippetCall; if( srchFlags==0 ) return; sqlite3_create_function(g.db, "rank", 1, SQLITE_UTF8|SQLITE_INNOCUOUS, 0, search_rank_sqlfunc, 0, 0); for(i=0; zPat[i]; i++){ if( (zPat[i]&0x80)==0 && !fossil_isalnum(zPat[i]) ) zPat[i] = ' '; } blob_init(&sql, 0, 0); if( search_index_type(0)==4 ){ /* If this repo is still using the legacy FTS4 search index, then ** the snippet() function is slightly different */ zSnippetCall = "snippet(ftsidx,'<mark>','</mark>',' ... ',-1,35)"; }else{ /* This is the common case - Using newer FTS5 search index */ |
︙ | ︙ | |||
1160 1161 1162 1163 1164 1165 1166 | } nRow++; @ <li><p><a href='%R%s(zUrl)'>%h(zLabel)</a> if( fDebug ){ @ (%e(db_column_double(&q,3)), %s(db_column_text(&q,4)) } @ <br><span class='snippet'>%z(cleanSnippet(zSnippet)) \ | | | 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 | } nRow++; @ <li><p><a href='%R%s(zUrl)'>%h(zLabel)</a> if( fDebug ){ @ (%e(db_column_double(&q,3)), %s(db_column_text(&q,4)) } @ <br><span class='snippet'>%z(cleanSnippet(zSnippet)) \ if( zDate && zDate[0] && strstr(zLabel,zDate)==0 ){ @ <small>(%h(zDate))</small> } @ </span></li> if( nLimit && nRow>=nLimit ) break; } db_finalize(&q); if( nRow ){ |
︙ | ︙ | |||
1310 1311 1312 1313 1314 1315 1316 | } /* ** This is a helper function for search_stext(). Writing into pOut ** the search text obtained from pIn according to zMimetype. ** | < < < < < < | < < < < < < | | | | | | | | | | < | < | > | > | > > > | < > | > | 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 | } /* ** This is a helper function for search_stext(). Writing into pOut ** the search text obtained from pIn according to zMimetype. ** ** The title of the document is the first line of text. All subsequent ** lines are the body. If the document has no title, the first line ** is blank. */ static void get_stext_by_mimetype( Blob *pIn, const char *zMimetype, Blob *pOut ){ Blob html, title; blob_init(&html, 0, 0); blob_init(&title, 0, 0); if( zMimetype==0 ) zMimetype = "text/plain"; if( fossil_strcmp(zMimetype,"text/x-fossil-wiki")==0 ){ Blob tail; blob_init(&tail, 0, 0); if( wiki_find_title(pIn, &title, &tail) ){ blob_appendf(pOut, "%s\n", blob_str(&title)); wiki_convert(&tail, &html, 0); blob_reset(&tail); }else{ blob_append(pOut, "\n", 1); wiki_convert(pIn, &html, 0); } html_to_plaintext(blob_str(&html), pOut); }else if( fossil_strcmp(zMimetype,"text/x-markdown")==0 ){ markdown_to_html(pIn, &title, &html); if( blob_size(&title) ){ blob_appendf(pOut, "%s\n", blob_str(&title)); }else{ blob_append(pOut, "\n", 1); } html_to_plaintext(blob_str(&html), pOut); }else if( fossil_strcmp(zMimetype,"text/html")==0 ){ if( doc_is_embedded_html(pIn, &title) ){ blob_appendf(pOut, "%s\n", blob_str(&title)); } html_to_plaintext(blob_str(pIn), pOut); }else{ blob_append(pOut, "\n", 1); blob_append(pOut, blob_buffer(pIn), blob_size(pIn)); } blob_reset(&html); blob_reset(&title); } /* |
︙ | ︙ | |||
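The comment on `get_stext_by_mimetype()` above defines the "stext" layout used by the search code: the first line of the generated text is the document title, everything after it is the body, and a document without a title gets a blank first line. A minimal sketch of reading that layout back (a hypothetical helper for illustration; Fossil itself keeps the combined form in a `Blob`):

```c
#include <string.h>
#include <assert.h>

/* Split search text ("stext") at the first newline, per the convention
** in search.c: the first line is the title (possibly empty) and the
** remainder is the body.  Returns the title length and points *pzBody
** at the body text.  Illustrative sketch only. */
static size_t stext_split(const char *zStext, const char **pzBody){
  const char *zNl = strchr(zStext, '\n');
  if( zNl==0 ){
    *pzBody = zStext + strlen(zStext);  /* no newline: all title, empty body */
    return strlen(zStext);
  }
  *pzBody = zNl + 1;
  return (size_t)(zNl - zStext);
}
```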
1394 1395 1396 1397 1398 1399 1400 | if( fossil_strcmp(zMime,"text/plain")==0 ) zMime = 0; }else if( zMime==0 || eType!=SQLITE_TEXT ){ blob_appendf(pAccum, "%s: %s |\n", zColName, db_column_text(pQuery,i)); }else{ Blob txt; blob_init(&txt, db_column_text(pQuery,i), -1); blob_appendf(pAccum, "%s: ", zColName); | | | 1303 1304 1305 1306 1307 1308 1309 1310 1311 1312 1313 1314 1315 1316 1317 | if( fossil_strcmp(zMime,"text/plain")==0 ) zMime = 0; }else if( zMime==0 || eType!=SQLITE_TEXT ){ blob_appendf(pAccum, "%s: %s |\n", zColName, db_column_text(pQuery,i)); }else{ Blob txt; blob_init(&txt, db_column_text(pQuery,i), -1); blob_appendf(pAccum, "%s: ", zColName); get_stext_by_mimetype(&txt, zMime, pAccum); blob_append(pAccum, " |", 2); blob_reset(&txt); } } } |
︙ | ︙ | |||
1433 1434 1435 1436 1437 1438 1439 | ){ blob_init(pOut, 0, 0); switch( cType ){ case 'd': { /* Documents */ Blob doc; content_get(rid, &doc); blob_to_utf8_no_bom(&doc, 0); | | | 1342 1343 1344 1345 1346 1347 1348 1349 1350 1351 1352 1353 1354 1355 1356 | ){ blob_init(pOut, 0, 0); switch( cType ){ case 'd': { /* Documents */ Blob doc; content_get(rid, &doc); blob_to_utf8_no_bom(&doc, 0); get_stext_by_mimetype(&doc, mimetype_from_name(zName), pOut); blob_reset(&doc); break; } case 'f': /* Forum messages */ case 'e': /* Tech Notes */ case 'w': { /* Wiki */ Manifest *pWiki = manifest_get(rid, |
︙ | ︙ | |||
1455 1456 1457 1458 1459 1460 1461 | blob_appendf(&wiki, "<h1>%h</h1>\n", pWiki->zThreadTitle); } blob_appendf(&wiki, "From %s:\n\n%s", pWiki->zUser, pWiki->zWiki); }else{ blob_init(&wiki, pWiki->zWiki, -1); } get_stext_by_mimetype(&wiki, wiki_filter_mimetypes(pWiki->zMimetype), | | | 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 | blob_appendf(&wiki, "<h1>%h</h1>\n", pWiki->zThreadTitle); } blob_appendf(&wiki, "From %s:\n\n%s", pWiki->zUser, pWiki->zWiki); }else{ blob_init(&wiki, pWiki->zWiki, -1); } get_stext_by_mimetype(&wiki, wiki_filter_mimetypes(pWiki->zMimetype), pOut); blob_reset(&wiki); manifest_destroy(pWiki); break; } case 'c': { /* Check-in Comments */ static Stmt q; static int isPlainText = -1; |
︙ | ︙ | |||
1485 1486 1487 1488 1489 1490 1491 | blob_append(pOut, "\n", 1); if( isPlainText ){ db_column_blob(&q, 0, pOut); }else{ Blob x; blob_init(&x,0,0); db_column_blob(&q, 0, &x); | | | 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 1404 1405 1406 1407 1408 | blob_append(pOut, "\n", 1); if( isPlainText ){ db_column_blob(&q, 0, pOut); }else{ Blob x; blob_init(&x,0,0); db_column_blob(&q, 0, &x); get_stext_by_mimetype(&x, "text/x-fossil-wiki", pOut); blob_reset(&x); } } db_reset(&q); break; } case 't': { /* Tickets */ |
︙ | ︙ | |||
1596 1597 1598 1599 1600 1601 1602 | */ void test_convert_stext(void){ Blob in, out; db_find_and_open_repository(0,0); if( g.argc!=4 ) usage("FILENAME MIMETYPE"); blob_read_from_file(&in, g.argv[2], ExtFILE); blob_init(&out, 0, 0); | | | 1505 1506 1507 1508 1509 1510 1511 1512 1513 1514 1515 1516 1517 1518 1519 | */ void test_convert_stext(void){ Blob in, out; db_find_and_open_repository(0,0); if( g.argc!=4 ) usage("FILENAME MIMETYPE"); blob_read_from_file(&in, g.argv[2], ExtFILE); blob_init(&out, 0, 0); get_stext_by_mimetype(&in, g.argv[3], &out); fossil_print("%s\n",blob_str(&out)); blob_reset(&in); blob_reset(&out); } /* ** The schema for the full-text index. The %s part must be an empty |
︙ | ︙ | |||
2383 2384 2385 2386 2387 2388 2389 | return rc; } /* ** Argument f should be a flag accepted by matchinfo() (a valid character | | | 2292 2293 2294 2295 2296 2297 2298 2299 2300 2301 2302 2303 2304 2305 2306 | return rc; } /* ** Argument f should be a flag accepted by matchinfo() (a valid character ** in the string passed as the second argument). If it is not, -1 is ** returned. Otherwise, if f is a valid matchinfo flag, the value returned ** is the number of 32-bit integers added to the output array if the ** table has nCol columns and the query nPhrase phrases. */ static int fts5MatchinfoFlagsize(int nCol, int nPhrase, char f){ int ret = -1; switch( f ){ |
︙ | ︙ |
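The `fts5MatchinfoFlagsize()` comment above describes returning, for each `matchinfo()` format character, how many 32-bit integers that flag adds to the output array for a table with `nCol` columns and a query with `nPhrase` phrases. The per-flag sizes below follow the documented SQLite FTS4 `matchinfo()` format, which Fossil's FTS5 shim emulates; treat this as an illustration of the sizing rule, not the shim's actual code:

```c
#include <assert.h>

/* Number of 32-bit integers contributed to the matchinfo() output array
** by format-string character f, for a table with nCol columns and a
** query with nPhrase phrases.  Sizes follow the documented FTS4
** matchinfo() format; -1 means the flag is unrecognized here. */
static int matchinfo_flagsize(int nCol, int nPhrase, char f){
  switch( f ){
    case 'p': return 1;                  /* number of matchable phrases */
    case 'c': return 1;                  /* number of user columns */
    case 'x': return 3*nCol*nPhrase;     /* hits/total/docs per pair */
    case 'n': return 1;                  /* total rows in the table */
    case 'a': return nCol;               /* average tokens per column */
    case 'l': return nCol;               /* tokens per column, this row */
    case 's': return nCol;               /* longest phrase-match run */
    case 'y': return nCol*nPhrase;       /* hits per phrase/column pair */
    default:  return -1;
  }
}
```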
Changes to src/security_audit.c.
︙ | ︙ | |||
334 335 336 337 338 339 340 | } /* Anonymous users probably should not be allowed act as moderators ** for wiki or tickets. */ if( hasAnyCap(zAnonCap, "lq5") ){ @ <li><p><b>WARNING:</b> | | | | 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 | } /* Anonymous users probably should not be allowed act as moderators ** for wiki or tickets. */ if( hasAnyCap(zAnonCap, "lq5") ){ @ <li><p><b>WARNING:</b> @ Anonymous users can act as moderators for wiki, tickets, or @ forum posts. This defeats the whole purpose of moderation. @ Fix this by removing the "Mod-Wiki", "Mod-Tkt", and "Mod-Forum" @ privileges (<a href="%R/setup_ucap_list">capabilities</a> "fq5") @ from users "anonymous" and "nobody" @ on the <a href="setup_ulist">User Configuration</a> page. } /* Check to see if any TH1 scripts are configured to run on a sync */ if( db_exists("SELECT 1 FROM config WHERE name GLOB 'xfer-*-script'" " AND length(value)>0") ){ @ <li><p><b>WARNING:</b> @ TH1 scripts might be configured to run on any sync, push, pull, or @ clone operation. See the the <a href="%R/xfersetup">/xfersetup</a> @ page for more information. These TH1 scripts are a potential @ security concern and so should be carefully audited by a human. } /* The strict-manifest-syntax setting should be on. */ if( db_get_boolean("strict-manifest-syntax",1)==0 ){ @ <li><p><b>WARNING:</b> @ The "strict-manifest-syntax" flag is off. This is a security @ risk. Turn this setting on (its default) to protect the users @ of this repository. |
︙ | ︙ | |||
580 581 582 583 584 585 586 | }else { double r = atof(db_get("max-loadavg", 0)); if( r<=0.0 ){ @ <li><p> @ Load average limiting is turned off. This can cause the server @ to bog down if many requests for expensive services (such as @ large diffs or tarballs) arrive at about the same time. | | | 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 | }else { double r = atof(db_get("max-loadavg", 0)); if( r<=0.0 ){ @ <li><p> @ Load average limiting is turned off. This can cause the server @ to bog down if many requests for expensive services (such as @ large diffs or tarballs) arrive at about the same time. @ To fix this, set the @ <a href='%R/setup_access#slal'>"Server Load Average Limit"</a> on the @ <a href='%R/setup_access'>Access Control</a> page to the approximate @ the number of available cores on your server, or maybe just a little @ less. }else if( r>=8.0 ){ @ <li><p> @ The <a href='%R/setup_access#slal'>"Server Load Average Limit"</a> on |
︙ | ︙ | |||
602 603 604 605 606 607 608 | @ <li><p> @ The server error log is disabled. @ To set up an error log, if( fossil_strcmp(g.zCmdName, "cgi")==0 ){ @ make an entry like "errorlog: <i>FILENAME</i>" in the @ CGI script at %h(P("SCRIPT_FILENAME")). }else{ | | | 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 | @ <li><p> @ The server error log is disabled. @ To set up an error log, if( fossil_strcmp(g.zCmdName, "cgi")==0 ){ @ make an entry like "errorlog: <i>FILENAME</i>" in the @ CGI script at %h(P("SCRIPT_FILENAME")). }else{ @ add the "--errorlog <i>FILENAME</i>" option to the @ "%h(g.argv[0]) %h(g.zCmdName)" command that launched this server. } }else{ FILE *pTest = fossil_fopen(g.zErrlog,"a"); if( pTest==0 ){ @ <li><p> @ <b>Error:</b> |
︙ | ︙ | |||
633 634 635 636 637 638 639 | @ <li><p> CGI Extensions are enabled with a document root @ at <a href='%R/extfilelist'>%h(g.zExtRoot)</a> holding @ %d(nCgi) CGIs and %d(nFile-nCgi) static content and data files. } if( fileedit_glob()!=0 ){ @ <li><p><a href='%R/fileedit'>Online File Editing</a> is enabled | | | | 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 | @ <li><p> CGI Extensions are enabled with a document root @ at <a href='%R/extfilelist'>%h(g.zExtRoot)</a> holding @ %d(nCgi) CGIs and %d(nFile-nCgi) static content and data files. } if( fileedit_glob()!=0 ){ @ <li><p><a href='%R/fileedit'>Online File Editing</a> is enabled @ for this repository. Clear the @ <a href='%R/setup_settings'>"fileedit-glob" setting</a> to @ disable online editing.</p> } @ <li><p> User capability summary: capability_summary(); azCSP = parse_content_security_policy(); if( azCSP==0 ){ @ <li><p> WARNING: No Content Security Policy (CSP) is specified in the @ header. Though not required, a strong CSP is recommended. Fossil will @ automatically insert an appropriate CSP if you let it generate the @ HTML <tt><head></tt> element by omitting <tt><body></tt> @ from the header configuration in your customized skin. @ }else{ int ii; @ <li><p> Content Security Policy: @ <ol type="a"> for(ii=0; azCSP[ii]; ii++){ @ <li>%h(azCSP[ii]) } |
︙ | ︙ | |||
696 697 698 699 700 701 702 | } blob_init(&cmd, 0, 0); for(i=0; g.argvOrig[i]!=0; i++){ blob_append_escaped_arg(&cmd, g.argvOrig[i], 0); } @ <li><p> | < < < < < < | 696 697 698 699 700 701 702 703 704 705 706 707 708 709 | } blob_init(&cmd, 0, 0); for(i=0; g.argvOrig[i]!=0; i++){ blob_append_escaped_arg(&cmd, g.argvOrig[i], 0); } @ <li><p> @ The command that generated this page: @ <blockquote> @ <tt>%h(blob_str(&cmd))</tt> @ </blockquote></li> blob_zero(&cmd); @ </ol> |
︙ | ︙ | |||
755 756 757 758 759 760 761 | @ <input type="submit" name="cancel" value="Cancel"> @ </form> style_finish_page(); } /* | < < < < < < < < < < < < < < < < < < < < < | | < | | > > > > > > > | > > > > > > > | 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 | @ <input type="submit" name="cancel" value="Cancel"> @ </form> style_finish_page(); } /* ** The maximum number of bytes of log to show */ #define MXSHOWLOG 50000 /* ** WEBPAGE: errorlog ** ** Show the content of the error log. Only the administrator can view ** this page. */ void errorlog_page(void){ i64 szFile; FILE *in; char z[10000]; login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } style_header("Server Error Log"); style_submenu_element("Test", "%R/test-warning"); style_submenu_element("Refresh", "%R/errorlog"); style_submenu_element("Admin-Log", "admin_log"); style_submenu_element("User-Log", "access_log"); style_submenu_element("Artifact-Log", "rcvfromlist"); if( g.zErrlog==0 || fossil_strcmp(g.zErrlog,"-")==0 ){ @ <p>To create a server error log: @ <ol> @ <li><p> @ If the server is running as CGI, then create a line in the CGI file @ like this: @ <blockquote><pre> @ errorlog: <i>FILENAME</i> @ </pre></blockquote> @ <li><p> @ If the server is running using one of @ the "fossil http" or "fossil server" commands then add @ a command-line option "--errorlog <i>FILENAME</i>" to that @ command. @ </ol> style_finish_page(); return; } if( P("truncate1") && cgi_csrf_safe(2) ){ fclose(fopen(g.zErrlog,"w")); } if( P("download") ){ |
︙ | ︙ | |||
828 829 830 831 832 833 834 | @ <p>Confirm that you want to truncate the %,lld(szFile)-byte error log: @ <input type="submit" name="truncate1" value="Confirm"> @ <input type="submit" name="cancel" value="Cancel"> @ </form> style_finish_page(); return; } | < | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 | @ <p>Confirm that you want to truncate the %,lld(szFile)-byte error log: @ <input type="submit" name="truncate1" value="Confirm"> @ <input type="submit" name="cancel" value="Cancel"> @ </form> style_finish_page(); return; } @ <p>The server error log at "%h(g.zErrlog)" is %,lld(szFile) bytes in size. style_submenu_element("Download", "%R/errorlog?download"); style_submenu_element("Truncate", "%R/errorlog?truncate"); in = fossil_fopen(g.zErrlog, "rb"); if( in==0 ){ @ <p class='generalError'>Unable to open that file for reading!</p> style_finish_page(); return; } if( szFile>MXSHOWLOG && P("all")==0 ){ @ <form action="%R/errorlog" method="POST"> @ <p>Only the last %,d(MXSHOWLOG) bytes are shown. @ <input type="submit" name="all" value="Show All"> @ </form> fseek(in, -MXSHOWLOG, SEEK_END); } @ <hr> @ <pre> while( fgets(z, sizeof(z), in) ){ @ %h(z)\ } fclose(in); @ </pre> style_finish_page(); } |
Changes to src/setup.c.
︙ | ︙ | |||
47 48 49 50 51 52 53 | void setup_menu_entry( const char *zTitle, const char *zLink, const char *zDesc ){ @ <tr><td valign="top" align="right"> if( zLink && zLink[0] ){ | | | 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 | void setup_menu_entry( const char *zTitle, const char *zLink, const char *zDesc ){ @ <tr><td valign="top" align="right"> if( zLink && zLink[0] ){ @ <a href="%s(zLink)">%h(zTitle)</a> }else{ @ %h(zTitle) } @ </td><td width="5"></td><td valign="top">%h(zDesc)</td></tr> } |
︙ | ︙ | |||
141 142 143 144 145 146 147 | "Configure URL aliases"); if( setup_user ){ setup_menu_entry("Notification", "setup_notification", "Automatic notifications of changes via outbound email"); setup_menu_entry("Transfers", "xfersetup", "Configure the transfer system for this repository"); } | > > | > > | > > < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 | "Configure URL aliases"); if( setup_user ){ setup_menu_entry("Notification", "setup_notification", "Automatic notifications of changes via outbound email"); setup_menu_entry("Transfers", "xfersetup", "Configure the transfer system for this repository"); } setup_menu_entry("Skins", "setup_skin", "Select and/or modify the web interface \"skins\""); setup_menu_entry("Moderation", "setup_modreq", "Enable/Disable requiring moderator approval of Wiki and/or Ticket" " changes and attachments."); setup_menu_entry("Ad-Unit", "setup_adunit", "Edit HTML text for an ad unit inserted after the menu bar"); setup_menu_entry("URLs & Checkouts", "urllist", "Show URLs used to access this repo and known check-outs"); if( setup_user ){ setup_menu_entry("Web-Cache", "cachestat", "View the status of the expensive-page cache"); } setup_menu_entry("Logo", "setup_logo", "Change the logo and background images for the server"); setup_menu_entry("Shunned", "shun", "Show artifacts that are shunned by this repository"); setup_menu_entry("Artifact Receipts Log", "rcvfromlist", "A record of received artifacts and their sources"); setup_menu_entry("User Log", "access_log", "A record of login attempts"); setup_menu_entry("Administrative Log", "admin_log", "View the admin_log entries"); setup_menu_entry("Error Log", "errorlog", "View the Fossil server error log"); setup_menu_entry("Unversioned Files", "uvlist?byage=1", "Show all unversioned files held"); setup_menu_entry("Stats", "stat", "Repository Status Reports"); setup_menu_entry("Sitemap", "sitemap", "Links to miscellaneous pages"); if( setup_user ){ setup_menu_entry("SQL", "admin_sql", "Enter raw SQL commands"); setup_menu_entry("TH1", "admin_th1", "Enter raw TH1 commands"); } @ </table> style_finish_page(); } /* ** Generate a checkbox for an attribute. */ void onoff_attribute(
︙ | ︙ | |||
641 642 643 644 645 646 647 | @ for users who are not logged in. (Property: "require-captcha")</p> @ <hr> entry_attribute("Public pages", 30, "public-pages", "pubpage", "", 0); @ <p>A comma-separated list of glob patterns for pages that are accessible @ without needing a login and using the privileges given by the | | | 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 | @ for users who are not logged in. (Property: "require-captcha")</p> @ <hr> entry_attribute("Public pages", 30, "public-pages", "pubpage", "", 0); @ <p>A comma-separated list of glob patterns for pages that are accessible @ without needing a login and using the privileges given by the @ "Default privileges" setting below. @ @ <p>Example use case: Set this field to "/doc/trunk/www/*" and set @ the "Default privileges" to include the "o" privilege @ to give anonymous users read-only permission to the @ latest version of the embedded documentation in the www/ folder without @ allowing them to see the rest of the source code. @ (Property: "public-pages") |
︙ | ︙ | |||
1254 1255 1256 1257 1258 1259 1260 | @ choices (such as the hamburger button) to the menu that are not shown @ on this list. (Property: mainmenu) @ <p> if(P("resetMenu")!=0){ db_unset("mainmenu", 0); cgi_delete_parameter("mmenu"); } | | | 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 | @ choices (such as the hamburger button) to the menu that are not shown @ on this list. (Property: mainmenu) @ <p> if(P("resetMenu")!=0){ db_unset("mainmenu", 0); cgi_delete_parameter("mmenu"); } textarea_attribute("Main Menu", 12, 80, "mainmenu", "mmenu", style_default_mainmenu(), 0); @ </p> @ <p><input type='checkbox' id='cbResetMenu' name='resetMenu' value='1'> @ <label for='cbResetMenu'>Reset menu to default value</label> @ </p> @ <hr> @ <p>Extra links to appear on the <a href="%R/sitemap">/sitemap</a> page, |
︙ | ︙ | |||
1282 1283 1284 1285 1286 1287 1288 | @ If capexpr evaluates to true, then the entry is shown. If not, @ the entry is omitted. "*" is always true. @ </ol> @ @ <p>The default value is blank, meaning no added entries. @ (Property: sitemap-extra) @ <p> | | | 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 | @ If capexpr evaluates to true, then the entry is shown. If not, @ the entry is omitted. "*" is always true. @ </ol> @ @ <p>The default value is blank, meaning no added entries. @ (Property: sitemap-extra) @ <p> textarea_attribute("Custom Sitemap Entries", 8, 80, "sitemap-extra", "smextra", "", 0); @ <hr> @ <p><input type="submit" name="submit" value="Apply Changes"></p> @ </div></form> db_end_transaction(0); style_finish_page(); } |
︙ | ︙ | |||
2015 2016 2017 2018 2019 2020 2021 | login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } style_set_current_feature("setup"); style_header("Admin Log"); | | > > | 1960 1961 1962 1963 1964 1965 1966 1967 1968 1969 1970 1971 1972 1973 1974 1975 1976 | login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } style_set_current_feature("setup"); style_header("Admin Log"); style_submenu_element("User-Log", "access_log"); style_submenu_element("Artifact-Log", "rcvfromlist"); style_submenu_element("Error-Log", "errorlog"); create_admin_log_table(); limit = atoi(PD("n","200")); ofst = atoi(PD("x","0")); fLogEnabled = db_get_boolean("admin-log", 0); @ <div>Admin logging is %s(fLogEnabled?"on":"off"). @ (Change this on the <a href="setup_settings">settings</a> page.)</div> |
︙ | ︙ |
Changes to src/setupuser.c.
︙ | ︙ | |||
808 809 810 811 812 813 814 | @ subscript suffix @ indicates the privileges of <span class="usertype">anonymous</span> that @ are inherited by all logged-in users. @ </p></li> @ @ <li><p> @ The "<span class="ueditInheritDeveloper"><sub>D</sub></span>" | | | 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 | @ subscript suffix @ indicates the privileges of <span class="usertype">anonymous</span> that @ are inherited by all logged-in users. @ </p></li> @ @ <li><p> @ The "<span class="ueditInheritDeveloper"><sub>D</sub></span>" @ subscript suffix indicates the privileges of @ <span class="usertype">developer</span> that @ are inherited by all users with the @ <span class="capability">Developer</span> privilege. @ </p></li> @ @ <li><p> @ The "<span class="ueditInheritReader"><sub>R</sub></span>" subscript suffix |
︙ | ︙ |
Changes to src/sha1.c.
︙ | ︙ | |||
30 31 32 33 34 35 36 | ** ** Downloaded on 2017-03-01 then repackaged to work with Fossil ** and makeheaders. */ #if FOSSIL_HARDENED_SHA1 #if INTERFACE | | < | 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 | ** ** Downloaded on 2017-03-01 then repackaged to work with Fossil ** and makeheaders. */ #if FOSSIL_HARDENED_SHA1 #if INTERFACE typedef void(*collision_block_callback)(uint64_t, const uint32_t*, const uint32_t*, const uint32_t*, const uint32_t*); struct SHA1_CTX { uint64_t total; uint32_t ihv[5]; unsigned char buffer[64]; int bigendian; int found_collision; int safe_hash; |
︙ | ︙ |
Changes to src/sha1hard.c.
︙ | ︙ | |||
71 72 73 74 75 76 77 | void sha1_message_expansion(uint32_t W[80]); void sha1_compression(uint32_t ihv[5], const uint32_t m[16]); void sha1_compression_W(uint32_t ihv[5], const uint32_t W[80]); void sha1_compression_states(uint32_t ihv[5], const uint32_t W[80], uint32_t states[80][5]); extern sha1_recompression_type sha1_recompression_step[80]; typedef void(*collision_block_callback)(uint64_t, const uint32_t*, const uint32_t*, const uint32_t*, const uint32_t*); typedef struct { | | | | | | | | | | | | | | | | | 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 | void sha1_message_expansion(uint32_t W[80]); void sha1_compression(uint32_t ihv[5], const uint32_t m[16]); void sha1_compression_W(uint32_t ihv[5], const uint32_t W[80]); void sha1_compression_states(uint32_t ihv[5], const uint32_t W[80], uint32_t states[80][5]); extern sha1_recompression_type sha1_recompression_step[80]; typedef void(*collision_block_callback)(uint64_t, const uint32_t*, const uint32_t*, const uint32_t*, const uint32_t*); typedef struct { uint64_t total; uint32_t ihv[5]; unsigned char buffer[64]; int bigendian; int found_collision; int safe_hash; int detect_coll; int ubc_check; int reduced_round_coll; collision_block_callback callback; uint32_t ihv1[5]; uint32_t ihv2[5]; uint32_t m1[80]; uint32_t m2[80]; uint32_t states[80][5]; } SHA1_CTX; /******************** File: lib/ubc_check.c **************************/ /*** * Copyright 2017 Marc Stevens <marc@marc-stevens.nl>, Dan Shumow <danshu@microsoft.com> * Distributed under the MIT Software License. * See accompanying file LICENSE.txt or copy at |
︙ | ︙ |
Changes to src/sha3.c.
︙ | ︙ | |||
414 415 416 417 418 419 420 | static void SHA3Update( SHA3Context *p, const unsigned char *aData, unsigned int nData ){ unsigned int i = 0; #if SHA3_BYTEORDER==1234 | | | 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 | static void SHA3Update( SHA3Context *p, const unsigned char *aData, unsigned int nData ){ unsigned int i = 0; #if SHA3_BYTEORDER==1234 if( (p->nLoaded % 8)==0 && ((aData - (const unsigned char*)0)&7)==0 ){ for(; i+7<nData; i+=8){ p->u.s[p->nLoaded/8] ^= *(u64*)&aData[i]; p->nLoaded += 8; if( p->nLoaded>=p->nRate ){ KeccakF1600Step(p); p->nLoaded = 0; } |
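The `#if SHA3_BYTEORDER==1234` branch above only takes the eight-bytes-per-iteration XOR path when the input pointer is 8-byte aligned, testing alignment with `((aData - (const unsigned char*)0)&7)==0`. A small sketch of the same test written in the more conventional `uintptr_t` form (the buffer and helper names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* A uint64_t array is 8-byte aligned on typical platforms, so it can
** serve to demonstrate both the aligned and misaligned cases. */
static uint64_t demo_buf[2];

/* Return nonzero if p lies on an 8-byte boundary.  This is the
** uintptr_t spelling of the pointer-arithmetic check in SHA3Update. */
static int is_aligned8(const void *p){
  return ((uintptr_t)p & 7) == 0;
}
```

When the check passes, the sponge state can absorb input via whole `u64` loads instead of byte-at-a-time XORs, which is the point of the fast path.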
︙ | ︙ |
Changes to src/shun.c.
︙ | ︙ | |||
45 46 47 48 49 50 51 | void shun_page(void){ Stmt q; int cnt = 0; const char *zUuid = P("uuid"); const char *zShun = P("shun"); const char *zAccept = P("accept"); const char *zRcvid = P("rcvid"); | < | 45 46 47 48 49 50 51 52 53 54 55 56 57 58 | void shun_page(void){ Stmt q; int cnt = 0; const char *zUuid = P("uuid"); const char *zShun = P("shun"); const char *zAccept = P("accept"); const char *zRcvid = P("rcvid"); int nRcvid = 0; int numRows = 3; char *zCanonical = 0; login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); |
︙ | ︙ | |||
84 85 86 87 88 89 90 | } i++; } zCanonical[j+1] = zCanonical[j] = 0; p = zCanonical; while( *p ){ int nUuid = strlen(p); | | | 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 | } i++; } zCanonical[j+1] = zCanonical[j] = 0; p = zCanonical; while( *p ){ int nUuid = strlen(p); if( !hname_validate(p, nUuid) ){ @ <p class="generalError">Error: Bad artifact IDs.</p> fossil_free(zCanonical); zCanonical = 0; break; }else{ canonical16(p, nUuid); p += nUuid+1; |
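The shun_page() hunk above builds `zCanonical` as a list of NUL-terminated artifact IDs closed by a double NUL (`zCanonical[j+1] = zCanonical[j] = 0`), then walks it with `p += nUuid+1`. A self-contained sketch of iterating that layout — the helper name is hypothetical:

```c
#include <assert.h>
#include <string.h>

/* Count entries in a double-NUL-terminated string list, the same
** layout shun_page() builds in zCanonical: "a\0bb\0ccc\0\0".
** Iteration stops when *p is the second, terminating NUL. */
static int count_nul_list(const char *p){
  int n = 0;
  while( *p ){
    n++;
    p += strlen(p) + 1;   /* skip this entry and its NUL terminator */
  }
  return n;
}
```

The double-NUL terminator is what makes the `while( *p )` loop safe: after the last entry's NUL, `*p` is the extra NUL and the walk ends without a separate length count.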
︙ | ︙ | |||
154 155 156 157 158 159 160 | for( p = zUuid ; *p ; p += strlen(p)+1 ){ @ <a href="%R/artifact/%s(p)">%s(p)</a><br> } @ have been shunned. They will no longer be pushed. @ They will be removed from the repository the next time the repository @ is rebuilt using the <b>fossil rebuild</b> command-line</p> } | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | 153 154 155 156 157 158 159 160 161 162 163 164 165 166 | for( p = zUuid ; *p ; p += strlen(p)+1 ){ @ <a href="%R/artifact/%s(p)">%s(p)</a><br> } @ have been shunned. They will no longer be pushed. @ They will be removed from the repository the next time the repository @ is rebuilt using the <b>fossil rebuild</b> command-line</p> } if( zRcvid ){ nRcvid = atoi(zRcvid); numRows = db_int(0, "SELECT min(count(), 10) FROM blob WHERE rcvid=%d", nRcvid); } @ <p>A shunned artifact will not be pushed nor accepted in a pull and the @ artifact content will be purged from the repository the next time the |
︙ | ︙ | |||
248 249 250 251 252 253 254 | }else if( nRcvid ){ db_prepare(&q, "SELECT uuid FROM blob WHERE rcvid=%d", nRcvid); while( db_step(&q)==SQLITE_ROW ){ @ %s(db_column_text(&q, 0)) } db_finalize(&q); } | < < < < < < | 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 | }else if( nRcvid ){ db_prepare(&q, "SELECT uuid FROM blob WHERE rcvid=%d", nRcvid); while( db_step(&q)==SQLITE_ROW ){ @ %s(db_column_text(&q, 0)) } db_finalize(&q); } } @ </textarea> @ <input type="submit" name="add" value="Shun"> @ </div></form> @ </blockquote> @ @ <a name="delshun"></a> @ <p>Enter the UUIDs of previously shunned artifacts to cause them to be @ accepted again in the repository. The artifacts content is not @ restored because the content is unknown. The only change is that |
︙ | ︙ | |||
372 373 374 375 376 377 378 | login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } style_header("Artifact Receipts"); | | > > | 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 | login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } style_header("Artifact Receipts"); style_submenu_element("Admin-Log", "admin_log"); style_submenu_element("User-Log", "access_log"); style_submenu_element("Error-Log", "errorlog"); if( showAll ){ ofst = 0; }else{ style_submenu_element("All", "rcvfromlist?all=1"); } if( ofst>0 ){ style_submenu_element("Newer", "rcvfromlist?ofst=%d", |
︙ | ︙ |
Changes to src/sitemap.c.
︙ | ︙ | |||
79 80 81 82 83 84 85 | g.jsHref = 0; } srchFlags = search_restrict(SRCH_ALL); if( !isPopup ){ style_header("Site Map"); style_adunit_config(ADUNIT_RIGHT_OK); } | | | 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 | g.jsHref = 0; } srchFlags = search_restrict(SRCH_ALL); if( !isPopup ){ style_header("Site Map"); style_adunit_config(ADUNIT_RIGHT_OK); } @ <ul id="sitemap" class="columns" style="column-width:20em"> if( (e&1)==0 ){ @ <li>%z(href("%R/home"))Home Page</a> } #if 0 /* Removed 2021-01-26 */ for(i=0; i<sizeof(aExtra)/sizeof(aExtra[0]); i++){ |
︙ | ︙ | |||
150 151 152 153 154 155 156 | } @ <li>%z(href("%R/docsrch"))Documentation Search</a></li> } #endif if( inSublist ){ @ </ul> | | | 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 | } @ <li>%z(href("%R/docsrch"))Documentation Search</a></li> } #endif if( inSublist ){ @ </ul> inSublist = 0; } @ </li> if( g.perm.Read ){ const char *zEditGlob = db_get("fileedit-glob",""); @ <li>%z(href("%R/tree"))File Browser</a> @ <ul> @ <li>%z(href("%R/tree?type=tree&ci=trunk"))Tree-view, |
︙ | ︙ |
Changes to src/skins.c.
︙ | ︙ | |||
17 18 19 20 21 22 23 | ** ** Implementation of the Setup page for "skins". */ #include "config.h" #include <assert.h> #include "skins.h" | < < < < < < < | 17 18 19 20 21 22 23 24 25 26 27 28 29 30 | ** ** Implementation of the Setup page for "skins". */ #include "config.h" #include <assert.h> #include "skins.h" /* ** An array of available built-in skins. ** ** To add new built-in skins: ** ** 1. Pick a name for the new skin. (Here we use "xyzzy"). ** |
︙ | ︙ | |||
50 51 52 53 54 55 56 | } aBuiltinSkin[] = { { "Default", "default", 0 }, { "Ardoise", "ardoise", 0 }, { "Black & White", "black_and_white", 0 }, { "Blitz", "blitz", 0 }, { "Dark Mode", "darkmode", 0 }, { "Eagle", "eagle", 0 }, | < | 43 44 45 46 47 48 49 50 51 52 53 54 55 56 | } aBuiltinSkin[] = { { "Default", "default", 0 }, { "Ardoise", "ardoise", 0 }, { "Black & White", "black_and_white", 0 }, { "Blitz", "blitz", 0 }, { "Dark Mode", "darkmode", 0 }, { "Eagle", "eagle", 0 }, { "Khaki", "khaki", 0 }, { "Original", "original", 0 }, { "Plain Gray", "plain_gray", 0 }, { "Xekri", "xekri", 0 }, }; /* |
︙ | ︙ | |||
81 82 83 84 85 86 87 | static char *zAltSkinDir = 0; static int iDraftSkin = 0; /* ** Used by skin_use_alternative() to store the current skin rank skin ** so that the /skins page can, if warranted, warn the user that skin ** changes won't have any effect. */ | | < < < < < < < < < < < < < < < < < | 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 | static char *zAltSkinDir = 0; static int iDraftSkin = 0; /* ** Used by skin_use_alternative() to store the current skin rank skin ** so that the /skins page can, if warranted, warn the user that skin ** changes won't have any effect. */ static int nSkinRank = 5; /* ** Skin details are a set of key/value pairs that define display ** attributes of the skin that cannot be easily specified using CSS ** or that need to be known on the server-side. ** ** The following array holds the value for all known skin details. |
︙ | ︙ | |||
147 148 149 150 151 152 153 | ** preferred ranking, making it otherwise more invasive to tell the ** internals "the --skin flag ranks higher than a URL parameter" (the ** former gets initialized before both URL parameters and the /draft ** path determination). ** ** The rankings were initially defined in ** https://fossil-scm.org/forum/forumpost/caf8c9a8bb | | | < | < | | | < < < < | | < < | < < < < < < < < < < < < | < | 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 | ** preferred ranking, making it otherwise more invasive to tell the ** internals "the --skin flag ranks higher than a URL parameter" (the ** former gets initialized before both URL parameters and the /draft ** path determination). ** ** The rankings were initially defined in ** https://fossil-scm.org/forum/forumpost/caf8c9a8bb ** and are: ** ** 0) A skin name matching the glob draft[1-9] trumps everything else. ** ** 1) The --skin flag or skin: CGI config setting. ** ** 2) The "skin" display setting cookie or URL argument, in that ** order. If the "skin" URL argument is provided and refers to a legal ** skin then that will update the display cookie. If the skin name is ** illegal it is silently ignored. ** ** 3) Skin properties from the CONFIG db table ** ** 4) Default skin. ** ** As a special case, a NULL or empty name resets zAltSkinDir and ** pAltSkin to 0 to indicate that the current config-side skin should ** be used (rank 3, above), then returns 0. 
*/ char *skin_use_alternative(const char *zName, int rank){ int i; Blob err = BLOB_INITIALIZER; if(rank > nSkinRank) return 0; nSkinRank = rank; if( zName && 1==rank && strchr(zName, '/')!=0 ){ zAltSkinDir = fossil_strdup(zName); return 0; } if( zName && sqlite3_strglob("draft[1-9]", zName)==0 ){ skin_use_draft(zName[5] - '0'); return 0; } if(!zName || !*zName){ pAltSkin = 0; zAltSkinDir = 0; return 0; } for(i=0; i<count(aBuiltinSkin); i++){ if( fossil_strcmp(aBuiltinSkin[i].zLabel, zName)==0 ){ pAltSkin = &aBuiltinSkin[i]; return 0; } } blob_appendf(&err, "available skins: %s", aBuiltinSkin[0].zLabel); for(i=1; i<count(aBuiltinSkin); i++){ blob_append(&err, " ", 1); blob_append(&err, aBuiltinSkin[i].zLabel, -1); } return blob_str(&err); } /* ** Look for the --skin command-line option and process it. Or ** call fossil_fatal() if an unknown skin is specified. */ void skin_override(void){ const char *zSkin = find_option("skin",0,1); if( zSkin ){ char *zErr = skin_use_alternative(zSkin, 1); if( zErr ) fossil_fatal("%s", zErr); } } /* ** Use one of the draft skins. */ void skin_use_draft(int i){ iDraftSkin = i; } /* ** The following routines return the various components of the skin ** that should be used for the current run. ** ** zWhat is one of: "css", "header", "footer", "details", "js" |
︙ | ︙ | |||
263 264 265 266 267 268 269 | Blob x; blob_read_from_file(&x, z, ExtFILE); fossil_free(z); return blob_str(&x); } fossil_free(z); } | < < < < < < < < < < < < < < < < | 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 | Blob x; blob_read_from_file(&x, z, ExtFILE); fossil_free(z); return blob_str(&x); } fossil_free(z); } if( pAltSkin ){ z = mprintf("skins/%s/%s.txt", pAltSkin->zLabel, zWhat); zOut = builtin_text(z); fossil_free(z); }else{ zOut = db_get(zWhat, 0); if( zOut==0 ){ z = mprintf("skins/default/%s.txt", zWhat); zOut = builtin_text(z); fossil_free(z); } } return zOut; } /* ** Return the command-line option used to set the skin, or return NULL |
︙ | ︙ | |||
563 564 565 566 567 568 569 | "VALUES('skin:%q',%Q,now())", zNewName, zCurrent ); db_protect_pop(); return 0; } | < < < < < < < < < < > < < < < | | < < < < | < < < < < < < < < < < < < < < < < < < | | | | | | | | | | | | | | | | | | < < < < < | < < < < < < | > > > > | < < < | | | < < < | | < | | | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | > | > > > > > | | > > > > > > > > > > > | > < < < | | | | | | < | < | 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 | "VALUES('skin:%q',%Q,now())", zNewName, zCurrent ); db_protect_pop(); return 0; } /* ** WEBPAGE: setup_skin_admin ** ** Administrative actions on skins. For administrators only. 
*/ void setup_skin_admin(void){ const char *z; char *zName; char *zErr = 0; const char *zCurrent = 0; /* Current skin */ int i; /* Loop counter */ Stmt q; int seenCurrent = 0; int once; login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } db_begin_transaction(); zCurrent = getSkin(0); for(i=0; i<count(aBuiltinSkin); i++){ aBuiltinSkin[i].zSQL = getSkin(aBuiltinSkin[i].zLabel); } style_set_current_feature("skins"); if( cgi_csrf_safe(2) ){ /* Process requests to delete a user-defined skin */ if( P("del1") && (zName = skinVarName(P("sn"), 1))!=0 ){ style_header("Confirm Custom Skin Delete"); @ <form action="%R/setup_skin_admin" method="post"><div> @ <p>Deletion of a custom skin is a permanent action that cannot @ be undone. Please confirm that this is what you want to do:</p> @ <input type="hidden" name="sn" value="%h(P("sn"))"> @ <input type="submit" name="del2" value="Confirm - Delete The Skin"> @ <input type="submit" name="cancel" value="Cancel - Do Not Delete"> login_insert_csrf_secret(); @ </div></form> style_finish_page(); db_end_transaction(1); return; } if( P("del2")!=0 && (zName = skinVarName(P("sn"), 1))!=0 ){ db_unprotect(PROTECT_CONFIG); db_multi_exec("DELETE FROM config WHERE name=%Q", zName); db_protect_pop(); } if( P("draftdel")!=0 ){ const char *zDraft = P("name"); if( sqlite3_strglob("draft[1-9]",zDraft)==0 ){ db_unprotect(PROTECT_CONFIG); db_multi_exec("DELETE FROM config WHERE name GLOB '%q-*'", zDraft); db_protect_pop(); } } if( skinRename() || skinSave(zCurrent) ){ db_end_transaction(0); return; } /* The user pressed one of the "Install" buttons. */ if( P("load") && (z = P("sn"))!=0 && z[0] ){ int seen = 0; /* Check to see if the current skin is already saved. 
If it is, there ** is no need to create a backup */ zCurrent = getSkin(0); for(i=0; i<count(aBuiltinSkin); i++){ if( fossil_strcmp(aBuiltinSkin[i].zSQL, zCurrent)==0 ){ seen = 1; break; } } if( !seen ){ seen = db_exists("SELECT 1 FROM config WHERE name GLOB 'skin:*'" " AND value=%Q", zCurrent); if( !seen ){ db_unprotect(PROTECT_CONFIG); db_multi_exec( "INSERT INTO config(name,value,mtime) VALUES(" " strftime('skin:Backup On %%Y-%%m-%%d %%H:%%M:%%S')," " %Q,now())", zCurrent ); db_protect_pop(); } } seen = 0; for(i=0; i<count(aBuiltinSkin); i++){ if( fossil_strcmp(aBuiltinSkin[i].zDesc, z)==0 ){ seen = 1; zCurrent = aBuiltinSkin[i].zSQL; db_unprotect(PROTECT_CONFIG); db_multi_exec("%s", zCurrent/*safe-for-%s*/); db_protect_pop(); break; } } if( !seen ){ zName = skinVarName(z,0); zCurrent = db_get(zName, 0); db_unprotect(PROTECT_CONFIG); db_multi_exec("%s", zCurrent/*safe-for-%s*/); db_protect_pop(); } } } style_header("Skins"); if( zErr ){ @ <p style="color:red">%h(zErr)</p> } @ <table border="0"> @ <tr><td colspan=4><h2>Built-in Skins:</h2></td></th> for(i=0; i<count(aBuiltinSkin); i++){ z = aBuiltinSkin[i].zDesc; @ <tr><td>%d(i+1).<td>%h(z)<td> <td> if( fossil_strcmp(aBuiltinSkin[i].zSQL, zCurrent)==0 ){ @ (Currently In Use) seenCurrent = 1; }else{ @ <form action="%R/setup_skin_admin" method="post"> @ <input type="hidden" name="sn" value="%h(z)"> @ <input type="submit" name="load" value="Install"> login_insert_csrf_secret(); if( pAltSkin==&aBuiltinSkin[i] ){ @ (Current override) } @ </form> } @ </tr> } db_prepare(&q, "SELECT substr(name, 6), value FROM config" " WHERE name GLOB 'skin:*'" " ORDER BY name" ); once = 1; while( db_step(&q)==SQLITE_ROW ){ const char *zN = db_column_text(&q, 0); const char *zV = db_column_text(&q, 1); i++; if( once ){ once = 0; @ <tr><td colspan=4><h2>Skins saved as "skin:*' entries \ @ in the CONFIG table:</h2></td></tr> } @ <tr><td>%d(i).<td>%h(zN)<td> <td> @ <form action="%R/setup_skin_admin" method="post"> 
login_insert_csrf_secret(); if( fossil_strcmp(zV, zCurrent)==0 ){ @ (Currently In Use) seenCurrent = 1; }else{ @ <input type="submit" name="load" value="Install"> @ <input type="submit" name="del1" value="Delete"> } @ <input type="submit" name="rename" value="Rename"> @ <input type="hidden" name="sn" value="%h(zN)"> @ </form></tr> } db_finalize(&q); if( !seenCurrent ){ i++; @ <tr><td colspan=4><h2>Current skin in css/header/footer/details entries \ @ in the CONFIG table:</h2></td></tr> @ <tr><td>%d(i).<td><i>Current</i><td> <td> @ <form action="%R/setup_skin_admin" method="post"> @ <input type="submit" name="save" value="Backup"> login_insert_csrf_secret(); @ </form> } db_prepare(&q, "SELECT DISTINCT substr(name, 1, 6) FROM config" " WHERE name GLOB 'draft[1-9]-*'" " ORDER BY name" ); once = 1; while( db_step(&q)==SQLITE_ROW ){ const char *zN = db_column_text(&q, 0); i++; if( once ){ once = 0; @ <tr><td colspan=4><h2>Draft skins stored as "draft[1-9]-*' entries \ @ in the CONFIG table:</h2></td></tr> } @ <tr><td>%d(i).<td>%h(zN)<td> <td> @ <form action="%R/setup_skin_admin" method="post"> login_insert_csrf_secret(); @ <input type="submit" name="draftdel" value="Delete"> @ <input type="hidden" name="name" value="%h(zN)"> @ </form></tr> } db_finalize(&q); @ </table> style_finish_page(); db_end_transaction(0); } /* ** Generate HTML for a <select> that lists all the available skin names, ** except for zExcept if zExcept!=NULL. 
*/ static void skin_emit_skin_selector( const char *zVarName, /* Variable name for the <select> */ const char *zDefault, /* The default value, if not NULL */ const char *zExcept /* Omit this skin if not NULL */ ){ int i; @ <select size='1' name='%s(zVarName)'> if( fossil_strcmp(zExcept, "current")!=0 ){ @ <option value='current'>Currently In Use</option> } for(i=0; i<count(aBuiltinSkin); i++){ const char *zName = aBuiltinSkin[i].zLabel; if( fossil_strcmp(zName, zExcept)==0 ) continue; if( fossil_strcmp(zDefault, zName)==0 ){ @ <option value='%s(zName)' selected>\ @ %h(aBuiltinSkin[i].zDesc) (built-in)</option> }else{ @ <option value='%s(zName)'>\ @ %h(aBuiltinSkin[i].zDesc) (built-in)</option> } } for(i=1; i<=9; i++){ char zName[20]; sqlite3_snprintf(sizeof(zName), zName, "draft%d", i); if( fossil_strcmp(zName, zExcept)==0 ) continue; if( fossil_strcmp(zDefault, zName)==0 ){ @ <option value='%s(zName)' selected>%s(zName)</option> }else{ @ <option value='%s(zName)'>%s(zName)</option> } } @ </select> } /* ** Return the text of one of the skin files. */ static const char *skin_file_content(const char *zLabel, const char *zFile){ |
︙ | ︙ | |||
1032 1033 1034 1035 1036 1037 1038 | DiffConfig DCfg; construct_diff_flags(1, &DCfg); DCfg.diffFlags |= DIFF_STRIP_EOLCR; if( P("sbsdiff")!=0 ) DCfg.diffFlags |= DIFF_SIDEBYSIDE; blob_init(&to, zContent, -1); blob_init(&from, skin_file_content(zBasis, zFile), -1); blob_zero(&out); | | | 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 | DiffConfig DCfg; construct_diff_flags(1, &DCfg); DCfg.diffFlags |= DIFF_STRIP_EOLCR; if( P("sbsdiff")!=0 ) DCfg.diffFlags |= DIFF_SIDEBYSIDE; blob_init(&to, zContent, -1); blob_init(&from, skin_file_content(zBasis, zFile), -1); blob_zero(&out); DCfg.diffFlags |= DIFF_HTML | DIFF_NOTTOOBIG; if( DCfg.diffFlags & DIFF_SIDEBYSIDE ){ text_diff(&from, &to, &out, &DCfg); @ %s(blob_str(&out)) }else{ DCfg.diffFlags |= DIFF_LINENO; text_diff(&from, &to, &out, &DCfg); @ <pre class="udiff"> |
︙ | ︙ | |||
1103 1104 1105 1106 1107 1108 1109 | } /* Publish draft iSkin */ for(i=0; i<count(azSkinFile); i++){ char *zNew = db_get_mprintf("", "draft%d-%s", iSkin, azSkinFile[i]); db_set(azSkinFile[i]/*works-like:"x"*/, zNew, 0); } | < | < | 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 | } /* Publish draft iSkin */ for(i=0; i<count(azSkinFile); i++){ char *zNew = db_get_mprintf("", "draft%d-%s", iSkin, azSkinFile[i]); db_set(azSkinFile[i]/*works-like:"x"*/, zNew, 0); } } /* ** WEBPAGE: setup_skin ** ** Generate a page showing the steps needed to customize a skin. */ void setup_skin(void){ int i; /* Loop counter */ int iSkin; /* Which draft skin is being edited */ int isSetup; /* True for an administrator */ int isEditor; /* Others authorized to make edits */ char *zAllowedEditors; /* Who may edit the draft skin */ |
︙ | ︙ | |||
1169 1170 1171 1172 1173 1174 1175 | /* Publish the draft skin */ if( P("pub7")!=0 && PB("pub7ck1") && PB("pub7ck2") ){ skin_publish(iSkin); } style_set_current_feature("skins"); style_header("Customize Skin"); | < < < | 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 | /* Publish the draft skin */ if( P("pub7")!=0 && PB("pub7ck1") && PB("pub7ck2") ){ skin_publish(iSkin); } style_set_current_feature("skins"); style_header("Customize Skin"); @ <p>Customize the look of this Fossil repository by making changes @ to the CSS, Header, Footer, and Detail Settings in one of nine "draft" @ configurations. Then, after verifying that all is working correctly, @ publish the draft to become the new main Skin. Users can select a skin @ of their choice from the built-in ones or the locally-edited one via @ <a href='%R/skins'>the /skins page</a>.</p> |
︙ | ︙ | |||
1237 1238 1239 1240 1241 1242 1243 | @ <a name='step3'></a> @ <h1>Step 3: Initialize The Draft</h1> @ if( !isEditor ){ @ <p>You are not allowed to initialize draft%d(iSkin). Contact @ the administrator for this repository for more information. }else{ | < | < | 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 | @ <a name='step3'></a> @ <h1>Step 3: Initialize The Draft</h1> @ if( !isEditor ){ @ <p>You are not allowed to initialize draft%d(iSkin). Contact @ the administrator for this repository for more information. }else{ @ <p>Initialize the draft%d(iSkin) skin to one of the built-in skins @ or a preexisting skin, to use as a baseline.</p> @ @ <form method='POST' action='%R/setup_skin#step4' id='f03'> @ <p class='skinInput'> @ <input type='hidden' name='sk' value='%d(iSkin)'> @ Initialize skin <b>draft%d(iSkin)</b> using skin_emit_skin_selector("initskin", "current", 0); @ <input type='submit' name='init3' value='Go'> @ </p> @ </form> } @ @ <a name='step4'></a> @ <h1>Step 4: Make Edits</h1> |
︙ | ︙ | |||
1350 1351 1352 1353 1354 1355 1356 | ** Show a list of all of the built-in skins, plus the responsitory skin, ** and provide the user with an opportunity to change to any of them. */ void skins_page(void){ int i; char *zBase = fossil_strdup(g.zTop); size_t nBase = strlen(zBase); | < | > | | | < > > > > > < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 | ** Show a list of all of the built-in skins, plus the responsitory skin, ** and provide the user with an opportunity to change to any of them. */ void skins_page(void){ int i; char *zBase = fossil_strdup(g.zTop); size_t nBase = strlen(zBase); if( iDraftSkin && sqlite3_strglob("*/draft?", zBase)==0 ){ nBase -= 7; zBase[nBase] = 0; }else if( pAltSkin ){ char *zPattern = mprintf("*/skn_%s", pAltSkin->zLabel); if( sqlite3_strglob(zPattern, zBase)==0 ){ nBase -= strlen(zPattern)-1; zBase[nBase] = 0; } fossil_free(zPattern); } login_check_credentials(); style_header("Skins"); if( iDraftSkin || nSkinRank<=1 ){ @ <p class="warning">Warning: if( iDraftSkin>0 ){ @ you are using a draft skin, }else{ @ this fossil instance was started with a hard-coded skin @ value, } @ which trumps any option selected below. A skin selected @ below will be recorded in your preference cookie @ but will not be used so long as the site has a @ higher-priority skin in place. 
@ </p> } @ <p>The following skins are available for this repository:</p> @ <ul> if( pAltSkin==0 && zAltSkinDir==0 && iDraftSkin==0 ){ @ <li> Standard skin for this repository ← <i>Currently in use</i> }else{ @ <li> %z(href("%R/skins?skin="))Standard skin for this repository</a> } for(i=0; i<count(aBuiltinSkin); i++){ if( pAltSkin==&aBuiltinSkin[i] ){ @ <li> %h(aBuiltinSkin[i].zDesc) ← <i>Currently in use</i> }else{ char *zUrl = href("%R/skins?skin=%T", aBuiltinSkin[i].zLabel); @ <li> %z(zUrl)%h(aBuiltinSkin[i].zDesc)</a> } } @ </ul> style_finish_page(); fossil_free(zBase); } |
Changes to src/smtp.c.
︙ | ︙ | |||
17 18 19 20 21 22 23 | ** ** Implementation of SMTP (Simple Mail Transport Protocol) according ** to RFC 5321. */ #include "config.h" #include "smtp.h" #include <assert.h> | | | | 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 | ** ** Implementation of SMTP (Simple Mail Transport Protocol) according ** to RFC 5321. */ #include "config.h" #include "smtp.h" #include <assert.h> #if (HAVE_DN_EXPAND || HAVE___NS_NAME_UNCOMPRESS || HAVE_NS_NAME_UNCOMPRESS) && \ (HAVE_NS_PARSERR || HAVE___NS_PARSERR) && !defined(FOSSIL_OMIT_DNS) # include <sys/types.h> # include <netinet/in.h> # if defined(HAVE_BIND_RESOLV_H) # include <bind/resolv.h> # include <bind/arpa/nameser_compat.h> # else # include <arpa/nameser.h> |
︙ | ︙ |
Changes to src/sqlcmd.c.
︙ | ︙ | |||
382 383 384 385 386 387 388 | ** files_of_checkin(X) A table-valued function that returns info on ** all files contained in check-in X. Example: ** ** SELECT * FROM files_of_checkin('trunk'); ** ** helptext A virtual table with one row for each command, ** webpage, and setting together with the built-in | | | 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 | ** files_of_checkin(X) A table-valued function that returns info on ** all files contained in check-in X. Example: ** ** SELECT * FROM files_of_checkin('trunk'); ** ** helptext A virtual table with one row for each command, ** webpage, and setting together with the built-in ** help text. ** ** now() Return the number of seconds since 1970. ** ** obscure(T) Obfuscate the text password T so that its ** original value is not readily visible. Fossil ** uses this same algorithm when storing passwords ** of remote URLs. |
︙ | ︙ |
Changes to src/stash.c.
︙ | ︙ | |||
425 426 427 428 429 430 431 | int rid = db_column_int(&q, 0); int isRemoved = db_column_int(&q, 1); int isLink = db_column_int(&q, 3); const char *zOrig = db_column_text(&q, 4); const char *zNew = db_column_text(&q, 5); char *zOPath = mprintf("%s%s", g.zLocalRoot, zOrig); Blob a, b; | < < < | 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 | int rid = db_column_int(&q, 0); int isRemoved = db_column_int(&q, 1); int isLink = db_column_int(&q, 3); const char *zOrig = db_column_text(&q, 4); const char *zNew = db_column_text(&q, 5); char *zOPath = mprintf("%s%s", g.zLocalRoot, zOrig); Blob a, b; if( rid==0 ){ db_ephemeral_blob(&q, 6, &a); if( !bWebpage ) fossil_print("ADDED %s\n", zNew); diff_print_index(zNew, pCfg, 0); diff_file_mem(&empty, &a, zNew, pCfg); }else if( isRemoved ){ if( !bWebpage) fossil_print("DELETE %s\n", zOrig); diff_print_index(zNew, pCfg, 0); if( fBaseline ){ content_get(rid, &a); diff_file_mem(&a, &empty, zOrig, pCfg); } }else{ Blob delta; |
︙ | ︙ | |||
570 571 572 573 574 575 576 | stash_tables_exist_and_current(); if( g.argc<=2 ){ zCmd = "save"; }else{ zCmd = g.argv[2]; } nCmd = strlen(zCmd); | | | 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 | stash_tables_exist_and_current(); if( g.argc<=2 ){ zCmd = "save"; }else{ zCmd = g.argv[2]; } nCmd = strlen(zCmd); if( memcmp(zCmd, "save", nCmd)==0 ){ if( unsaved_changes(0)==0 ){ fossil_fatal("nothing to stash"); } stashid = stash_create(); undo_disable(); if( g.argc>=2 ){ int nFile = db_int(0, "SELECT count(*) FROM stashfile WHERE stashid=%d", |
︙ | ︙ | |||
601 602 603 604 605 606 607 | ** we have a copy of the changes before deleting them. */ db_commit_transaction(); g.argv[1] = "revert"; revert_cmd(); fossil_print("stash %d saved\n", stashid); return; }else | | | | 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 | ** we have a copy of the changes before deleting them. */ db_commit_transaction(); g.argv[1] = "revert"; revert_cmd(); fossil_print("stash %d saved\n", stashid); return; }else if( memcmp(zCmd, "snapshot", nCmd)==0 ){ stash_create(); }else if( memcmp(zCmd, "list", nCmd)==0 || memcmp(zCmd, "ls", nCmd)==0 ){ Stmt q, q2; int n = 0, width; int verboseFlag = find_option("verbose","v",0)!=0; const char *zWidth = find_option("width","W",1); if( zWidth ){ width = atoi(zWidth); |
︙ | ︙ | |||
668 669 670 671 672 673 674 | db_reset(&q2); } } db_finalize(&q); if( verboseFlag ) db_finalize(&q2); if( n==0 ) fossil_print("empty stash\n"); }else | | | 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 | db_reset(&q2); } } db_finalize(&q); if( verboseFlag ) db_finalize(&q2); if( n==0 ) fossil_print("empty stash\n"); }else if( memcmp(zCmd, "drop", nCmd)==0 || memcmp(zCmd, "rm", nCmd)==0 ){ int allFlag = find_option("all", "a", 0)!=0; if( allFlag ){ Blob ans; char cReply; prompt_user("This action is not undoable. Continue (y/N)? ", &ans); cReply = blob_str(&ans)[0]; if( cReply=='y' || cReply=='Y' ){ |
︙ | ︙ | |||
694 695 696 697 698 699 700 | }else{ undo_begin(); undo_save_stash(0); stash_drop(stashid); undo_finish(); } }else | | | 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 | }else{ undo_begin(); undo_save_stash(0); stash_drop(stashid); undo_finish(); } }else if( memcmp(zCmd, "pop", nCmd)==0 || memcmp(zCmd, "apply", nCmd)==0 ){ char *zCom = 0, *zDate = 0, *zHash = 0; int popped = *zCmd=='p'; if( popped ){ if( g.argc>3 ) usage("pop"); stashid = stash_get_id(0); }else{ if( g.argc>4 ) usage("apply STASHID"); |
︙ | ︙ | |||
723 724 725 726 727 728 729 | } fossil_free(zCom); fossil_free(zDate); fossil_free(zHash); undo_finish(); if( popped ) stash_drop(stashid); }else | | | | | | | | | | 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 | } fossil_free(zCom); fossil_free(zDate); fossil_free(zHash); undo_finish(); if( popped ) stash_drop(stashid); }else if( memcmp(zCmd, "goto", nCmd)==0 ){ int nConflict; int vid; if( g.argc>4 ) usage("apply STASHID"); stashid = stash_get_id(g.argc==4 ? g.argv[3] : 0); undo_begin(); vid = db_int(0, "SELECT blob.rid FROM stash,blob" " WHERE stashid=%d AND blob.uuid=stash.hash", stashid); nConflict = update_to(vid); stash_apply(stashid, nConflict); db_multi_exec("UPDATE vfile SET mtime=0 WHERE pathname IN " "(SELECT origname FROM stashfile WHERE stashid=%d)", stashid); undo_finish(); }else if( memcmp(zCmd, "diff", nCmd)==0 || memcmp(zCmd, "gdiff", nCmd)==0 || memcmp(zCmd, "show", nCmd)==0 || memcmp(zCmd, "gshow", nCmd)==0 || memcmp(zCmd, "cat", nCmd)==0 || memcmp(zCmd, "gcat", nCmd)==0 ){ int fBaseline = 0; DiffConfig DCfg; if( strstr(zCmd,"show")!=0 || strstr(zCmd,"cat")!=0 ){ fBaseline = 1; } if( find_option("tk",0,0)!=0 ){ db_close(0); diff_tk(fBaseline ? "stash show" : "stash diff", 3); return; } diff_options(&DCfg, zCmd[0]=='g', 0); stashid = stash_get_id(g.argc==4 ? g.argv[3] : 0); stash_diff(stashid, fBaseline, &DCfg); }else if( memcmp(zCmd, "help", nCmd)==0 ){ g.argv[1] = "help"; g.argv[2] = "stash"; g.argc = 3; help_cmd(); }else { usage("SUBCOMMAND ARGS..."); } db_end_transaction(0); } |
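The stash subcommand chain above dispatches with `memcmp(zCmd, "save", nCmd)` where `nCmd = strlen(zCmd)`, so a prefix such as `fossil stash sa` selects `save`, with the first match in declaration order winning. A sketch of that prefix-dispatch idiom — here using `strncmp`, which stops at the terminating NUL and so cannot read past a shorter candidate name (the table and helper names are illustrative, not Fossil's):

```c
#include <assert.h>
#include <string.h>

/* An illustrative subset of the stash subcommand names, in the same
** first-match-wins order the dispatch chain uses. */
static const char *azStashSub[] = { "save", "snapshot", "list", "ls", "drop" };

/* Return the first name in azCmd[] that zInput is a prefix of, or 0
** if nothing matches.  Mirrors the memcmp(zCmd, "...", nCmd) chain
** in the stash command, which takes the first hit in order. */
static const char *match_prefix(const char *zInput,
                                const char **azCmd, int nCmd){
  size_t n = strlen(zInput);
  int i;
  if( n==0 ) return 0;
  for(i=0; i<nCmd; i++){
    if( strncmp(zInput, azCmd[i], n)==0 ) return azCmd[i];
  }
  return 0;
}
```

Note the ambiguity trade-off this design accepts: `"s"` matches `save` simply because `save` is tested first, not because the prefix is unique.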
Changes to src/stat.c.
︙ | ︙ | |||
555 556 557 558 559 560 561 | }else{ @ <tr><td width='100%%'>%h(db_column_text(&q,0))</td> @ <td><nobr>%h(db_column_text(&q,1))</nobr></td></tr> } cnt++; } db_finalize(&q); | | | 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 | }else{ @ <tr><td width='100%%'>%h(db_column_text(&q,0))</td> @ <td><nobr>%h(db_column_text(&q,1))</nobr></td></tr> } cnt++; } db_finalize(&q); if( nOmitted ){ @ <tr><td><a href="urllist?all"><i>Show %d(nOmitted) more...</i></a> } if( cnt ){ @ </table> total += cnt; } |
︙
713 714 715 716 717 718 719 | void repo_schema_page(void){ Stmt q; Blob sql; const char *zArg = P("n"); login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } | < < < < < < < < < < < < < < < < | 713 714 715 716 717 718 719 720 721 722 723 724 725 726 | void repo_schema_page(void){ Stmt q; Blob sql; const char *zArg = P("n"); login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } style_set_current_feature("stat"); style_header("Repository Schema"); style_adunit_config(ADUNIT_RIGHT_OK); style_submenu_element("Stat", "stat"); style_submenu_element("URLs", "urllist"); if( sqlite3_compileoption_used("ENABLE_DBSTAT_VTAB") ){ style_submenu_element("Table Sizes", "repo-tabsize"); |
︙
773 774 775 776 777 778 779 | } @ </pre> db_finalize(&q); }else{ style_submenu_element("Stat1","repo_stat1"); } } | < < < < < < < < < < < < < < < < < < < < < < < | < < < < < | | < < < < | < < < < < < < < < < < | 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 | } @ </pre> db_finalize(&q); }else{ style_submenu_element("Stat1","repo_stat1"); } } style_finish_page(); } /* ** WEBPAGE: repo_stat1 ** ** Show the sqlite_stat1 table for the repository schema */ void repo_stat1_page(void){ login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } style_set_current_feature("stat"); style_header("Repository STAT1 Table"); style_adunit_config(ADUNIT_RIGHT_OK); style_submenu_element("Stat", "stat"); style_submenu_element("Schema", "repo_schema"); if( db_table_exists("repository","sqlite_stat1") ){ Stmt q; db_prepare(&q, "SELECT tbl, idx, stat FROM repository.sqlite_stat1" " ORDER BY tbl, idx"); @ <pre> while( db_step(&q)==SQLITE_ROW ){ const char *zTab = db_column_text(&q,0); const char *zIdx = db_column_text(&q,1); const char *zStat = db_column_text(&q,2); char *zUrl = href("%R/repo_schema?n=%t",zTab); @ INSERT INTO sqlite_stat1 VALUES('%z(zUrl)%h(zTab)</a>','%h(zIdx)','%h(zStat)'); } @ </pre> db_finalize(&q); } style_finish_page(); } /* ** WEBPAGE: repo-tabsize ** ** Show relative sizes of tables in the repository database. |
︙
932 933 934 935 936 937 938 | /* ** Gather statistics on artifact types, counts, and sizes. ** ** Only populate the artstat.atype field if the bWithTypes parameter is true. */ void gather_artifact_stats(int bWithTypes){ | | | | | | | | | | | | 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 | /* ** Gather statistics on artifact types, counts, and sizes. ** ** Only populate the artstat.atype field if the bWithTypes parameter is true. */ void gather_artifact_stats(int bWithTypes){ static const char zSql[] = @ CREATE TEMP TABLE artstat( @ id INTEGER PRIMARY KEY, -- Corresponds to BLOB.RID @ atype TEXT, -- 'data', 'manifest', 'tag', 'wiki', etc. @ isDelta BOOLEAN, -- true if stored as a delta @ szExp, -- expanded, uncompressed size @ szCmpr -- size as stored on disk @ ); @ INSERT INTO artstat(id,atype,isDelta,szExp,szCmpr) @ SELECT blob.rid, NULL, @ delta.rid IS NOT NULL, @ size, octet_length(content) @ FROM blob LEFT JOIN delta ON blob.rid=delta.rid @ WHERE content IS NOT NULL; ; static const char zSql2[] = @ UPDATE artstat SET atype='file' @ WHERE +id IN (SELECT fid FROM mlink); @ UPDATE artstat SET atype='manifest' @ WHERE id IN (SELECT objid FROM event WHERE type='ci') AND atype IS NULL; @ UPDATE artstat SET atype='forum' @ WHERE id IN (SELECT objid FROM event WHERE type='f') AND atype IS NULL; @ UPDATE artstat SET atype='cluster' @ WHERE atype IS NULL @ AND id IN (SELECT rid FROM tagxref @ WHERE tagid=(SELECT tagid FROM tag @ WHERE tagname='cluster')); @ UPDATE artstat SET atype='ticket' @ WHERE atype IS NULL @ AND id IN (SELECT rid FROM tagxref @ WHERE tagid IN (SELECT tagid FROM tag @ WHERE tagname GLOB 'tkt-*')); @ UPDATE artstat SET atype='wiki' @ WHERE atype IS NULL @ AND id IN (SELECT rid FROM tagxref @ WHERE tagid IN (SELECT tagid FROM tag @ WHERE 
tagname GLOB 'wiki-*')); @ UPDATE artstat SET atype='technote' @ WHERE atype IS NULL @ AND id IN (SELECT rid FROM tagxref @ WHERE tagid IN (SELECT tagid FROM tag @ WHERE tagname GLOB 'event-*')); @ UPDATE artstat SET atype='attachment' @ WHERE atype IS NULL @ AND id IN (SELECT attachid FROM attachment UNION @ SELECT blob.rid FROM attachment JOIN blob ON uuid=src); @ UPDATE artstat SET atype='tag' @ WHERE atype IS NULL @ AND id IN (SELECT srcid FROM tagxref); @ UPDATE artstat SET atype='tag' @ WHERE atype IS NULL @ AND id IN (SELECT objid FROM event WHERE type='g'); @ UPDATE artstat SET atype='unused' WHERE atype IS NULL; ; db_multi_exec("%s", zSql/*safe-for-%s*/); if( bWithTypes ){ db_multi_exec("%s", zSql2/*safe-for-%s*/); } |
︙
Changes to src/statrep.c.
︙
128 129 130 131 132 133 134 | const char *zNot = rc=='n' ? "NOT" : ""; statsReportTimelineYFlag = "ci"; db_multi_exec( "CREATE TEMP VIEW v_reports AS " "SELECT * FROM event WHERE type='ci' AND %s" " AND objid %s IN (SELECT cid FROM plink WHERE NOT isprim)", zTimeSpan/*safe-for-%s*/, zNot/*safe-for-%s*/ | | | 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 | const char *zNot = rc=='n' ? "NOT" : ""; statsReportTimelineYFlag = "ci"; db_multi_exec( "CREATE TEMP VIEW v_reports AS " "SELECT * FROM event WHERE type='ci' AND %s" " AND objid %s IN (SELECT cid FROM plink WHERE NOT isprim)", zTimeSpan/*safe-for-%s*/, zNot/*safe-for-%s*/ ); } return statsReportType = rc; } /* ** Returns a string suitable (for a given value of suitable) for ** use in a label with the header of the /reports pages, dependent |
︙
307 308 309 310 311 312 313 | zTimeframe, (char)statsReportType); if( zUserName ){ cgi_printf("&u=%t", zUserName); } cgi_printf("'>%s</a>", zTimeframe); } @ </td><td>%d(nCount)</td> | | | 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 | zTimeframe, (char)statsReportType); if( zUserName ){ cgi_printf("&u=%t", zUserName); } cgi_printf("'>%s</a>", zTimeframe); } @ </td><td>%d(nCount)</td> @ <td> if( strcmp(zTimeframe, zCurrentTF)==0 && rNowFraction>0.05 && nCount>0 && nMaxEvents>0 ){ /* If the timespan covered by this row contains "now", then project ** the number of changes until the completion of the timespan and |
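The report row above projects a final event count for the timeframe containing "now" once more than 5% of it has elapsed (`rNowFraction>0.05`). The arithmetic can be sketched as follows (the function name is hypothetical; the real code works on the in-progress row directly, and its exact rounding may differ):

```c
/* Estimate the end-of-timeframe event count from the events seen so
** far.  rNowFraction is the elapsed fraction of the timeframe
** (0.0..1.0).  Below the 5% threshold the estimate is refused, since
** dividing by a tiny fraction produces a meaningless projection. */
static int project_count(int nCount, double rNowFraction){
  if( rNowFraction<=0.05 ) return -1;   /* too early to project */
  return (int)(nCount/rNowFraction + 0.5);
}
```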
︙
738 739 740 741 742 743 744 | statsReportTimelineYFlag); if( zUserName ){ cgi_printf("&u=%t",zUserName); } cgi_printf("'>%s</a></td>",zWeek); cgi_printf("<td>%d</td>",nCount); | | | 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 | statsReportTimelineYFlag); if( zUserName ){ cgi_printf("&u=%t",zUserName); } cgi_printf("'>%s</a></td>",zWeek); cgi_printf("<td>%d</td>",nCount); cgi_printf("<td>"); if( nCount ){ if( zCurrentWeek!=0 && strcmp(zWeek, zCurrentWeek)==0 && rNowFraction>0.05 && nMaxEvents>0 ){ /* If the timespan covered by this row contains "now", then project
︙
Changes to src/style.c.
︙
450 451 452 453 454 455 456 | ** or after any updates to the CSS files */ blob_appendf(&url, "?id=%x", skin_id("css")); if( P("once")!=0 && P("skin")!=0 ){ blob_appendf(&url, "&skin=%s&once", skin_in_use()); } | | | | 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 | ** or after any updates to the CSS files */ blob_appendf(&url, "?id=%x", skin_id("css")); if( P("once")!=0 && P("skin")!=0 ){ blob_appendf(&url, "&skin=%s&once", skin_in_use()); } /* Generate the CSS URL variable */ Th_Store("stylesheet_url", blob_str(&url)); blob_reset(&url); } /* ** Create a TH1 variable containing the URL for the specified image. ** The resulting variable name will be of the form $[zImageName]_image_url. ** The value will be a URL that includes an id= query parameter that ** changes if the underlying resource changes or if a different skin ** is selected. */ static void image_url_var(const char *zImageName){ char *zVarName; /* Name of the new TH1 variable */ char *zResource; /* Name of CONFIG entry holding content */ char *zUrl; /* The URL */ zResource = mprintf("%s-image", zImageName); zUrl = mprintf("%R/%s?id=%x", zImageName, skin_id(zResource)); free(zResource); zVarName = mprintf("%s_image_url", zImageName); Th_Store(zVarName, zUrl); free(zVarName); free(zUrl); } /* ** Output TEXT with a click-to-copy button next to it. Loads the copybtn.js |
︙
595 596 597 598 599 600 601 | ** The text '$nonce' is replaced by style_nonce() if and wherever it ** occurs in the input string. ** ** The string returned is obtained from fossil_malloc() and ** should be released by the caller. */ char *style_csp(int toHeader){ | | | 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 | ** The text '$nonce' is replaced by style_nonce() if and wherever it ** occurs in the input string. ** ** The string returned is obtained from fossil_malloc() and ** should be released by the caller. */ char *style_csp(int toHeader){ static const char zBackupCSP[] = "default-src 'self' data:; " "script-src 'self' 'nonce-$nonce'; " "style-src 'self' 'unsafe-inline'; " "img-src * data:"; const char *zFormat; Blob csp; char *zNonce;
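The zBackupCSP template above carries a `$nonce` placeholder that style_csp() later replaces with the per-request nonce. A self-contained sketch of that substitution, using plain malloc instead of Fossil's Blob API (the function name is hypothetical):

```c
#include <stdlib.h>
#include <string.h>

/* Replace every occurrence of "$nonce" in zTemplate with zNonce,
** returning a malloc'd string that the caller must free.  style_csp()
** does the equivalent with Fossil's Blob routines. */
static char *csp_expand(const char *zTemplate, const char *zNonce){
  size_t nNonce = strlen(zNonce);
  size_t nOut = strlen(zTemplate) + 1;
  const char *z = zTemplate;
  char *zOut, *p;
  /* First pass: compute a sufficient output size */
  while( (z = strstr(z, "$nonce"))!=0 ){ nOut += nNonce; z += 6; }
  zOut = p = malloc(nOut);
  if( zOut==0 ) return 0;
  /* Second pass: copy text, expanding each placeholder */
  z = zTemplate;
  for(;;){
    const char *zHit = strstr(z, "$nonce");
    if( zHit==0 ){ strcpy(p, z); break; }
    memcpy(p, z, (size_t)(zHit-z));  p += zHit-z;
    memcpy(p, zNonce, nNonce);       p += nNonce;
    z = zHit + 6;   /* skip past "$nonce" */
  }
  return zOut;
}
```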
︙
631 632 633 634 635 636 637 | return zCsp; } /* ** Disable content security policy for the current page. ** WARNING: Do not do this lightly! ** | | | | 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 | return zCsp; } /* ** Disable content security policy for the current page. ** WARNING: Do not do this lightly! ** ** This routine must be called before the CSP is used by ** style_header(). */ void style_disable_csp(void){ disableCSP = 1; } /* ** Default HTML page header text through <body>. If the repository-specific ** header template lacks a <body> tag, then all of the following is ** prepended. */ static const char zDfltHeader[] = @ <html> @ <head> @ <meta charset="UTF-8"> @ <base href="$baseurl/$current_page"> @ <meta http-equiv="Content-Security-Policy" content="$default_csp"> @ <meta name="viewport" content="width=device-width, initial-scale=1.0"> @ <title>$<project_name>: $<title></title>
︙
668 669 670 671 672 673 674 | const char *get_default_header(){ return zDfltHeader; } /* ** The default TCL list that defines the main menu. */ | | | 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 | const char *get_default_header(){ return zDfltHeader; } /* ** The default TCL list that defines the main menu. */ static const char zDfltMainMenu[] = @ Home /home * {} @ Timeline /timeline {o r j} {} @ Files /dir?ci=tip oh desktoponly @ Branches /brlist o wideonly @ Tags /taglist o wideonly @ Forum /forum {@2 3 4 5 6} wideonly @ Chat /chat C wideonly |
︙
793 794 795 796 797 798 799 | if( !login_is_nobody() ){ Th_Store("login", g.zLogin); } Th_MaybeStore("current_feature", feature_from_page_path(local_zCurrentPage) ); if( g.ftntsIssues[0] || g.ftntsIssues[1] || g.ftntsIssues[2] || g.ftntsIssues[3] ){ char buf[80]; | | | | 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 | if( !login_is_nobody() ){ Th_Store("login", g.zLogin); } Th_MaybeStore("current_feature", feature_from_page_path(local_zCurrentPage) ); if( g.ftntsIssues[0] || g.ftntsIssues[1] || g.ftntsIssues[2] || g.ftntsIssues[3] ){ char buf[80]; sqlite3_snprintf(sizeof(buf),buf,"%i %i %i %i",g.ftntsIssues[0],g.ftntsIssues[1], g.ftntsIssues[2],g.ftntsIssues[3]); Th_Store("footnotes_issues_counters", buf); } } /* ** Draw the header. */ |
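The footnote-counter change above formats four counters with sqlite3_snprintf(), which takes the buffer size as its first argument and always NUL-terminates. C99 snprintf() gives the same bounded-write guarantee for this call, as this sketch shows (the helper name is hypothetical):

```c
#include <stdio.h>
#include <string.h>

/* Format four footnote-issue counters into a bounded buffer, the way
** the header code fills $footnotes_issues_counters.  snprintf() never
** writes more than nBuf bytes and always NUL-terminates; note that
** sqlite3_snprintf() takes (size, buffer, format, ...) instead. */
static void format_counters(char *zBuf, size_t nBuf, const int aCnt[4]){
  snprintf(zBuf, nBuf, "%i %i %i %i",
           aCnt[0], aCnt[1], aCnt[2], aCnt[3]);
}
```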
︙
1283 1284 1285 1286 1287 1288 1289 | ** * $basename ** * $secureurl ** * $home ** * $logo ** * $background ** ** The output from TH1 becomes the style sheet. Fossil always reports | | | 1283 1284 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 | ** * $basename ** * $secureurl ** * $home ** * $logo ** * $background ** ** The output from TH1 becomes the style sheet. Fossil always reports ** that the style sheet is cacheable. */ void page_style_css(void){ Blob css = empty_blob; int i; const char * zDefaults; const char *zSkin; |
︙
1323 1324 1325 1326 1327 1328 1329 | /* Tell CGI that the content returned by this page is considered cacheable */ g.isConst = 1; } /* ** All possible capabilities */ | | | 1323 1324 1325 1326 1327 1328 1329 1330 1331 1332 1333 1334 1335 1336 1337 | /* Tell CGI that the content returned by this page is considered cacheable */ g.isConst = 1; } /* ** All possible capabilities */ static const char allCap[] = "abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKL"; /* ** Compute the current login capabilities */ static char *find_capabilities(char *zCap){ int i, j; |
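The allCap string above enumerates every capability letter, and find_capabilities() tests membership character by character. Since Fossil capabilities are single characters, the core test is just a strchr() lookup, sketched here (the helper is hypothetical):

```c
#include <string.h>

/* True if capability letter c is present in the capability string
** zCap.  The c!=0 guard matters: strchr() would otherwise "find" the
** terminating NUL and report every capability as granted. */
static int has_capability(const char *zCap, char c){
  return c!=0 && strchr(zCap, c)!=0;
}
```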
︙
1396 1397 1398 1399 1400 1401 1402 | ** For administrators, or if the test_env_enable setting is true, then ** details of the request environment are displayed. Otherwise, just ** the error message is shown. ** ** If zFormat is an empty string, then this is the /test_env page. */ void webpage_error(const char *zFormat, ...){ | | | 1396 1397 1398 1399 1400 1401 1402 1403 1404 1405 1406 1407 1408 1409 1410 | ** For administrators, or if the test_env_enable setting is true, then ** details of the request environment are displayed. Otherwise, just ** the error message is shown. ** ** If zFormat is an empty string, then this is the /test_env page. */ void webpage_error(const char *zFormat, ...){ int showAll; char *zErr = 0; int isAuth = 0; char zCap[100]; login_check_credentials(); if( g.perm.Admin || g.perm.Setup || db_get_boolean("test_env_enable",0) ){ isAuth = 1;
︙
1477 1478 1479 1480 1481 1482 1483 | break; } default: { @ CSRF safety = unsafe<br> break; } } | | < < < < | 1477 1478 1479 1480 1481 1482 1483 1484 1485 1486 1487 1488 1489 1490 1491 1492 1493 1494 1495 1496 1497 1498 1499 1500 1501 1502 1503 1504 1505 1506 | break; } default: { @ CSRF safety = unsafe<br> break; } } @ fossil_exe_id() = %h(fossil_exe_id())<br> if( g.perm.Admin ){ int k; for(k=0; g.argvOrig[k]; k++){ Blob t; blob_init(&t, 0, 0); blob_append_escaped_arg(&t, g.argvOrig[k], 0); @ argv[%d(k)] = %h(blob_str(&t))<br> blob_zero(&t); } } @ <hr> P("HTTP_USER_AGENT"); P("SERVER_SOFTWARE"); cgi_print_all(showAll, 0, 0); if( showAll && blob_size(&g.httpHeader)>0 ){ @ <hr> @ <pre> @ %h(blob_str(&g.httpHeader)) @ </pre> } } |
︙
1653 1654 1655 1656 1657 1658 1659 | ** Example: ** ** style_select_list_int("my-grapes", "my_grapes", "Grapes", ** "Select the number of grapes", ** atoi(PD("my_field","0")), ** "", 1, "2", 2, "Three", 3, ** NULL); | | | 1649 1650 1651 1652 1653 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 | ** Example: ** ** style_select_list_int("my-grapes", "my_grapes", "Grapes", ** "Select the number of grapes", ** atoi(PD("my_field","0")), ** "", 1, "2", 2, "Three", 3, ** NULL); ** */ void style_select_list_int(const char * zWrapperId, const char *zFieldName, const char * zLabel, const char * zToolTip, int selectedVal, ... ){ char * zLabelID = style_next_input_id(); va_list vargs; |
︙
1777 1778 1779 1780 1781 1782 1783 | if( z[0]=='/' || z[0]=='\\' ){ zOrigin = z+1; } } CX("<script nonce='%s'>/* %s:%d */\n", style_nonce(), zOrigin, iLine); } | | | 1773 1774 1775 1776 1777 1778 1779 1780 1781 1782 1783 1784 1785 1786 1787 | if( z[0]=='/' || z[0]=='\\' ){ zOrigin = z+1; } } CX("<script nonce='%s'>/* %s:%d */\n", style_nonce(), zOrigin, iLine); } /* Generate the closing </script> tag */ void style_script_end(void){ CX("</script>\n"); } /* ** Emits a NOSCRIPT tag with an error message stating that JS is |
︙
Changes to src/style.fileedit.css.
︙
75 76 77 78 79 80 81 | } body.fileedit #fileedit-tab-preview-wrapper > pre { margin: 0; } body.fileedit #fileedit-tab-fileselect > h1 { margin: 0; } | < < < | 75 76 77 78 79 80 81 82 83 84 85 86 87 88 | } body.fileedit #fileedit-tab-preview-wrapper > pre { margin: 0; } body.fileedit #fileedit-tab-fileselect > h1 { margin: 0; } body.fileedit .fileedit-options.commit-message > div { display: flex; flex-direction: column; align-items: stretch; font-family: monospace; } body.fileedit .fileedit-options.commit-message > div > * { |
︙
106 107 108 109 110 111 112 | margin: 0.5em; } body.fileedit .tab-container > .tabs > .tab-panel > .fileedit-options > input { vertical-align: middle; margin: 0.5em; } body.fileedit .tab-container > .tabs > .tab-panel > .fileedit-options > .input-with-label { | > | | 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 | margin: 0.5em; } body.fileedit .tab-container > .tabs > .tab-panel > .fileedit-options > input { vertical-align: middle; margin: 0.5em; } body.fileedit .tab-container > .tabs > .tab-panel > .fileedit-options > .input-with-label { vertical-align: middle; margin: 0.5em; } body.fileedit .fileedit-options > div > * { margin: 0.25em; } body.fileedit .fileedit-options.flex-container.flex-row { align-items: first baseline; } |
︙
Changes to src/style.wikiedit.css.
︙
41 42 43 44 45 46 47 48 49 50 51 52 53 54 | margin: 0.5em; } body.wikiedit .tab-container > .tabs > .tab-panel > .wikiedit-options > input { vertical-align: middle; margin: 0.5em; } body.wikiedit .tab-container > .tabs > .tab-panel > .wikiedit-options > .input-with-label { margin: 0 0.5em 0.25em 0.5em; } body.wikiedit label { display: inline; /* some skins set label display to block! */ } body.wikiedit .wikiedit-options > div > * { margin: 0.25em; | > | 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 | margin: 0.5em; } body.wikiedit .tab-container > .tabs > .tab-panel > .wikiedit-options > input { vertical-align: middle; margin: 0.5em; } body.wikiedit .tab-container > .tabs > .tab-panel > .wikiedit-options > .input-with-label { vertical-align: middle; margin: 0 0.5em 0.25em 0.5em; } body.wikiedit label { display: inline; /* some skins set label display to block! */ } body.wikiedit .wikiedit-options > div > * { margin: 0.25em; |
︙
Changes to src/sync.c.
︙
50 51 52 53 54 55 56 | */ static int client_sync_all_urls( unsigned syncFlags, /* Mask of SYNC_* flags */ unsigned configRcvMask, /* Receive these configuration items */ unsigned configSendMask, /* Send these configuration items */ const char *zAltPCode /* Alternative project code (usually NULL) */ ){ | | | | | < < < < | < < < | | | < < < < < < < < < | | < < | < < < | < | | < < < < < < | < < < | | | | | | | | | < < < < < < < < < < < | 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 | */ static int client_sync_all_urls( unsigned syncFlags, /* Mask of SYNC_* flags */ unsigned configRcvMask, /* Receive these configuration items */ unsigned configSendMask, /* Send these configuration items */ const char *zAltPCode /* Alternative project code (usually NULL) */ ){ int nErr; int nOther; char **azOther; int i; Stmt q; sync_explain(syncFlags); nErr = client_sync(syncFlags, configRcvMask, configSendMask, zAltPCode); if( nErr==0 ) url_remember(); if( (syncFlags & SYNC_ALLURL)==0 ) return nErr; nOther = 0; azOther = 0; db_prepare(&q, "SELECT substr(name,10) FROM config" " WHERE name glob 'sync-url:*'" " AND value<>(SELECT value FROM config WHERE name='last-sync-url')" ); while( db_step(&q)==SQLITE_ROW ){ const char *zUrl = db_column_text(&q, 0); azOther = fossil_realloc(azOther, sizeof(*azOther)*(nOther+1)); azOther[nOther++] = fossil_strdup(zUrl); } db_finalize(&q); for(i=0; i<nOther; i++){ int rc; url_unparse(&g.url); url_parse(azOther[i], URL_PROMPT_PW|URL_ASK_REMEMBER_PW|URL_USE_CONFIG); sync_explain(syncFlags); rc = client_sync(syncFlags, configRcvMask, configSendMask, zAltPCode); nErr += rc; if( (g.url.flags & URL_REMEMBER_PW)!=0 && rc==0 ){ char *zKey = mprintf("sync-pw:%s", azOther[i]); char *zPw = obscure(g.url.passwd); if( zPw && zPw[0] ){ db_set(zKey/*works-like:""*/, zPw, 0); } fossil_free(zPw); fossil_free(zKey); } 
fossil_free(azOther[i]); azOther[i] = 0; } fossil_free(azOther); return nErr; } /* ** If the repository is configured for autosyncing, then do an ** autosync. Bits of the "flags" parameter determine details of behavior: |
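client_sync_all_urls() above collects the extra sync-url entries by repeatedly growing an array with fossil_realloc() and duplicating each URL string. The same append pattern with the standard allocator (the helper name and example URLs are hypothetical; fossil_realloc() aborts on out-of-memory, whereas plain realloc() here can fail and must be checked):

```c
#include <stdlib.h>
#include <string.h>

/* Append a copy of zUrl to the string array *pazList holding *pnList
** entries, growing the array by one slot.  Returns 1 on success, 0 on
** allocation failure, leaving the original array intact. */
static int url_list_append(char ***pazList, int *pnList, const char *zUrl){
  char **azNew = realloc(*pazList, sizeof(char*)*(*pnList+1));
  char *zCopy;
  if( azNew==0 ) return 0;
  *pazList = azNew;                 /* keep the grown array either way */
  zCopy = malloc(strlen(zUrl)+1);
  if( zCopy==0 ) return 0;
  strcpy(zCopy, zUrl);
  azNew[(*pnList)++] = zCopy;
  return 1;
}
```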
︙
168 169 170 171 172 173 174 | int configSync = 0; /* configuration changes transferred */ if( g.fNoSync ){ return 0; } zAutosync = db_get_for_subsystem("autosync", zSubsys); if( zAutosync==0 ) zAutosync = "on"; /* defend against misconfig */ if( is_false(zAutosync) ) return 0; | | | | 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 | int configSync = 0; /* configuration changes transferred */ if( g.fNoSync ){ return 0; } zAutosync = db_get_for_subsystem("autosync", zSubsys); if( zAutosync==0 ) zAutosync = "on"; /* defend against misconfig */ if( is_false(zAutosync) ) return 0; if( db_get_boolean("dont-push",0) || sqlite3_strglob("*pull*", zAutosync)==0 ){ flags &= ~SYNC_CKIN_LOCK; if( flags & SYNC_PUSH ) return 0; } if( find_option("verbose","v",0)!=0 ) flags |= SYNC_VERBOSE; url_parse(0, URL_REMEMBER|URL_USE_CONFIG); if( g.url.protocol==0 ) return 0; if( g.url.user!=0 && g.url.passwd==0 ){ g.url.passwd = unobscure(db_get("last-sync-pw", 0)); g.url.flags |= URL_PROMPT_PW; url_prompt_for_password(); } g.zHttpAuth = get_httpauth(); if( sqlite3_strglob("*all*", zAutosync)==0 ){ rc = client_sync_all_urls(flags|SYNC_ALLURL, configSync, 0, 0); }else{ url_remember(); sync_explain(flags); url_enable_proxy("via proxy: "); rc = client_sync(flags, configSync, 0, 0); } return rc; } /* ** This routine will try a number of times to perform autosync with a ** 0.5 second sleep between attempts. The number of attempts is determined |
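The autosync logic above matches the setting value against the globs `*pull*` and `*all*` with sqlite3_strglob(). For a pattern whose only wildcards are the surrounding stars, that is equivalent to a substring test, as this sketch shows (the helper is hypothetical):

```c
#include <string.h>

/* Decide whether an "autosync" setting value requests pull-only
** operation.  The real code calls sqlite3_strglob("*pull*", z)==0;
** since the pattern has no wildcards other than the enclosing '*',
** a strstr() substring test behaves identically. */
static int autosync_is_pullonly(const char *zAutosync){
  return zAutosync!=0 && strstr(zAutosync, "pull")!=0;
}
```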
︙
274 275 276 277 278 279 280 | } } if( find_option("private",0,0)!=0 ){ *pSyncFlags |= SYNC_PRIVATE; } if( find_option("verbose","v",0)!=0 ){ *pSyncFlags |= SYNC_VERBOSE; | < < < | 232 233 234 235 236 237 238 239 240 241 242 243 244 245 | } } if( find_option("private",0,0)!=0 ){ *pSyncFlags |= SYNC_PRIVATE; } if( find_option("verbose","v",0)!=0 ){ *pSyncFlags |= SYNC_VERBOSE; } if( find_option("no-http-compression",0,0)!=0 ){ *pSyncFlags |= SYNC_NOHTTPCOMPRESS; } if( find_option("all",0,0)!=0 ){ *pSyncFlags |= SYNC_ALLURL; } |
︙
343 344 345 346 347 348 349 | if( g.url.protocol==0 ){ if( urlOptional ) fossil_exit(0); usage("URL"); } user_select(); url_enable_proxy("via proxy: "); *pConfigFlags |= configSync; | < < < < < < | 298 299 300 301 302 303 304 305 306 307 308 309 310 311 | if( g.url.protocol==0 ){ if( urlOptional ) fossil_exit(0); usage("URL"); } user_select(); url_enable_proxy("via proxy: "); *pConfigFlags |= configSync; } /* ** COMMAND: pull ** ** Usage: %fossil pull ?URL? ?options? |
︙
383 384 385 386 387 388 389 | ** --project-code CODE Use CODE as the project code ** --proxy PROXY Use the specified HTTP proxy ** -R|--repository REPO Local repository to pull into ** --ssl-identity FILE Local SSL credentials, if requested by remote ** --ssh-command SSH Use SSH as the "ssh" command ** --transport-command CMD Use external command CMD to move messages ** between client and server | | < | 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 | ** --project-code CODE Use CODE as the project code ** --proxy PROXY Use the specified HTTP proxy ** -R|--repository REPO Local repository to pull into ** --ssl-identity FILE Local SSL credentials, if requested by remote ** --ssh-command SSH Use SSH as the "ssh" command ** --transport-command CMD Use external command CMD to move messages ** between client and server ** -v|--verbose Additional (debugging) output ** --verily Exchange extra information with the remote ** to ensure no content is overlooked ** ** See also: [[clone]], [[config]], [[push]], [[remote]], [[sync]] */ void pull_cmd(void){ unsigned configFlags = 0; |
︙
436 437 438 439 440 441 442 | ** --proxy PROXY Use the specified HTTP proxy ** --private Push private branches too ** -R|--repository REPO Local repository to push from ** --ssl-identity FILE Local SSL credentials, if requested by remote ** --ssh-command SSH Use SSH as the "ssh" command ** --transport-command CMD Use external command CMD to communicate with ** the server | | < | 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 | ** --proxy PROXY Use the specified HTTP proxy ** --private Push private branches too ** -R|--repository REPO Local repository to push from ** --ssl-identity FILE Local SSL credentials, if requested by remote ** --ssh-command SSH Use SSH as the "ssh" command ** --transport-command CMD Use external command CMD to communicate with ** the server ** -v|--verbose Additional (debugging) output ** --verily Exchange extra information with the remote ** to ensure no content is overlooked ** ** See also: [[clone]], [[config]], [[pull]], [[remote]], [[sync]] */ void push_cmd(void){ unsigned configFlags = 0; |
︙
486 487 488 489 490 491 492 | ** --private Sync private branches too ** -R|--repository REPO Local repository to sync with ** --ssl-identity FILE Local SSL credentials, if requested by remote ** --ssh-command SSH Use SSH as the "ssh" command ** --transport-command CMD Use external command CMD to move messages ** between the client and the server ** -u|--unversioned Also sync unversioned content | | < | 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 | ** --private Sync private branches too ** -R|--repository REPO Local repository to sync with ** --ssl-identity FILE Local SSL credentials, if requested by remote ** --ssh-command SSH Use SSH as the "ssh" command ** --transport-command CMD Use external command CMD to move messages ** between the client and the server ** -u|--unversioned Also sync unversioned content ** -v|--verbose Additional (debugging) output ** --verily Exchange extra information with the remote ** to ensure no content is overlooked ** ** See also: [[clone]], [[pull]], [[push]], [[remote]] */ void sync_cmd(void){ unsigned configFlags = 0;
︙
520 521 522 523 524 525 526 | ** commands. */ void sync_unversioned(unsigned syncFlags){ unsigned configFlags = 0; (void)find_option("uv-noop",0,0); process_sync_args(&configFlags, &syncFlags, 1, 0); verify_all_options(); | | | 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 | ** commands. */ void sync_unversioned(unsigned syncFlags){ unsigned configFlags = 0; (void)find_option("uv-noop",0,0); process_sync_args(&configFlags, &syncFlags, 1, 0); verify_all_options(); client_sync(syncFlags, 0, 0, 0); } /* ** COMMAND: remote ** COMMAND: remote-url* ** ** Usage: %fossil remote ?SUBCOMMAND ...? |
︙
573 574 575 576 577 578 579 | ** ** > fossil remote list|ls ** ** Show all remote repository URLs. ** ** > fossil remote off ** | | | 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 | ** ** > fossil remote list|ls ** ** Show all remote repository URLs. ** ** > fossil remote off ** ** Forget the default URL. This disables autosync. ** ** This is a convenient way to enter "airplane mode". To enter ** airplane mode, first save the current default URL, then turn the ** default off. Perhaps like this: ** ** fossil remote add main default ** fossil remote off |
︙
635 636 637 638 639 640 641 | ** ** The last-sync-url is called "default" for the display list. ** ** The last-sync-url might be duplicated into one of the sync-url:NAME ** entries. Thus, when doing a "fossil sync --all" or an autosync with ** autosync=all, each sync-url:NAME entry is checked to see if it is the ** same as last-sync-url and if it is then that entry is skipped. | | | 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 | ** ** The last-sync-url is called "default" for the display list. ** ** The last-sync-url might be duplicated into one of the sync-url:NAME ** entries. Thus, when doing a "fossil sync --all" or an autosync with ** autosync=all, each sync-url:NAME entry is checked to see if it is the ** same as last-sync-url and if it is then that entry is skipped. */ if( g.argc==2 ){ /* "fossil remote" with no arguments: Show the last sync URL. */ zUrl = db_get("last-sync-url", 0); if( zUrl==0 ){ fossil_print("off\n"); }else{ |
︙
Changes to src/tag.c.
︙
42 43 44 45 46 47 48 | PQueue queue; /* Queue of check-ins to be tagged */ Stmt s; /* Query the children of :pid to which to propagate */ Stmt ins; /* INSERT INTO tagxref */ Stmt eventupdate; /* UPDATE event */ assert( tagType==0 || tagType==2 ); pqueuex_init(&queue); | | | 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 | PQueue queue; /* Queue of check-ins to be tagged */ Stmt s; /* Query the children of :pid to which to propagate */ Stmt ins; /* INSERT INTO tagxref */ Stmt eventupdate; /* UPDATE event */ assert( tagType==0 || tagType==2 ); pqueuex_init(&queue); pqueuex_insert(&queue, pid, 0.0, 0); /* Query for children of :pid to which to propagate the tag. ** Three returns: (1) rid of the child. (2) timestamp of child. ** (3) True to propagate or false to block. */ db_prepare(&s, "SELECT cid, plink.mtime," |
︙
77 78 79 80 81 82 83 | ); } if( tagid==TAG_BGCOLOR ){ db_prepare(&eventupdate, "UPDATE event SET bgcolor=%Q WHERE objid=:rid", zValue ); } | | | | 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 | ); } if( tagid==TAG_BGCOLOR ){ db_prepare(&eventupdate, "UPDATE event SET bgcolor=%Q WHERE objid=:rid", zValue ); } while( (pid = pqueuex_extract(&queue, 0))!=0 ){ db_bind_int(&s, ":pid", pid); while( db_step(&s)==SQLITE_ROW ){ int doit = db_column_int(&s, 2); if( doit ){ int cid = db_column_int(&s, 0); double mtime = db_column_double(&s, 1); pqueuex_insert(&queue, cid, mtime, 0); db_bind_int(&ins, ":rid", cid); db_step(&ins); db_reset(&ins); if( tagid==TAG_BGCOLOR ){ db_bind_int(&eventupdate, ":rid", cid); db_step(&eventupdate); db_reset(&eventupdate); |
︙
400 401 402 403 404 405 406 | ** ARTIFACT-ID. For check-ins, the tag will be usable instead ** of a CHECK-IN in commands such as update and merge. If the ** --propagate flag is present and ARTIFACT-ID refers to a ** wiki page, forum post, technote, or check-in, the tag ** propagates to all descendants of that artifact. ** ** Options: | | < | | < > > < < < > > > < > > > > < < < | 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 | ** ARTIFACT-ID. For check-ins, the tag will be usable instead ** of a CHECK-IN in commands such as update and merge. If the ** --propagate flag is present and ARTIFACT-ID refers to a ** wiki page, forum post, technote, or check-in, the tag ** propagates to all descendants of that artifact. ** ** Options: ** --raw Raw tag name. Ignored for ** non-CHECK-IN artifacts. ** --propagate Propagating tag ** --date-override DATETIME Set date and time added ** --user-override USER Name USER when adding the tag ** -n|--dry-run Display the tag text, but do not ** actually insert it into the database ** ** The --date-override and --user-override options support ** importing history from other SCM systems. DATETIME has ** the form 'YYYY-MMM-DD HH:MM:SS'. ** ** Note that fossil uses some tag prefixes internally and this ** command will reject tags with these prefixes to avoid ** causing problems or confusion: "wiki-", "tkt-", "event-". ** ** > fossil tag cancel ?--raw? TAGNAME ARTIFACT-ID ** ** Remove the tag TAGNAME from the artifact referenced by ** ARTIFACT-ID, and also remove the propagation of the tag to ** any descendants. Use the the -n|--dry-run option to see ** what would have happened. Certain tag name prefixes are ** forbidden, as documented for the 'add' subcommand. 
** ** Options: ** --raw Raw tag name. Ignored for ** non-CHECK-IN artifacts. ** --date-override DATETIME Set date and time deleted ** --user-override USER Name USER when deleting the tag ** -n|--dry-run Display the control artifact, but do ** not insert it into the database ** ** > fossil tag find ?OPTIONS? TAGNAME ** ** List all objects that use TAGNAME. ** ** Options: ** --raw Interprets tag as a raw name instead of a ** branch name and matches any type of artifact. ** Changes the output to include only the ** hashes of matching objects. ** -t|--type TYPE One of: ci (check-in), w (wiki), ** e (event/technote), f (forum post), ** t (ticket). Default is all types. Ignored ** if --raw is used. ** -n|--limit N Limit to N results ** ** > fossil tag list|ls ?OPTIONS? ?ARTIFACT-ID? ** ** List all tags or, if ARTIFACT-ID is supplied, all tags and ** their values for that artifact. The tagtype option accepts ** one of: propagated, singleton, cancel. For historical ** scripting compatibility, the internal tag types "wiki-", ** "tkt-", and "event-" (technote) are elided by default ** unless the --raw or --prefix options are used. ** ** Options: ** --raw List raw names of tags ** --tagtype TYPE List only tags of type TYPE, which must ** be one of: cancel, singleton, propagated ** -v|--inverse Invert the meaning of --tagtype TYPE ** --prefix List only tags with the given prefix ** Fossil-internal prefixes include "sym-" ** (branch name), "wiki-", "event-" ** (technote), and "tkt-" (ticket). The ** prefix is stripped from the resulting ** list unless --raw is provided. Ignored if ** ARTIFACT-ID is provided. ** ** The option --raw allows the manipulation of all types of tags ** used for various internal purposes in fossil. It also shows ** "cancel" tags for the "find" and "list" subcommands. You should ** not use this option to make changes unless you are sure what ** you are doing. ** |
︙ | ︙ | |||
638 639 640 641 642 643 644 | const char *zTagPrefix = find_option("prefix","",1); int nTagType = fRaw ? -1 : 0; if( zTagType!=0 ){ int l = strlen(zTagType); if( strncmp(zTagType,"cancel",l)==0 ){ nTagType = 0; | | | | 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 | const char *zTagPrefix = find_option("prefix","",1); int nTagType = fRaw ? -1 : 0; if( zTagType!=0 ){ int l = strlen(zTagType); if( strncmp(zTagType,"cancel",l)==0 ){ nTagType = 0; }else if( strncmp(zTagType,"singleton",l)==0 ){ nTagType = 1; }else if( strncmp(zTagType,"propagated",l)==0 ){ nTagType = 2; }else{ fossil_fatal("unrecognized tag type"); } } if( g.argc==3 ){ const int nTagPrefix = zTagPrefix ? (int)strlen(zTagPrefix) : 0; |
︙ | ︙ |
Changes to src/tar.c.
︙ | ︙ | |||
242 243 244 245 246 247 248 | n /= 10; } /* adding the length extended the length field? */ if(blen > next10){ blen++; } /* build the string */ | | < | 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 | n /= 10; } /* adding the length extended the length field? */ if(blen > next10){ blen++; } /* build the string */ blob_appendf(&tball.pax, "%d %s=%*.*s\n", blen, zField, nValue, nValue, zValue); /* this _must_ be right */ if((int)blob_size(&tball.pax) != blen){ fossil_panic("internal error: PAX tar header has bad length"); } } |
︙ | ︙ |
Changes to src/terminal.c.
︙ | ︙ | |||
20 21 22 23 24 25 26 | #include "config.h" #include "terminal.h" #include <assert.h> #ifdef _WIN32 # include <windows.h> #else | < < < | 20 21 22 23 24 25 26 27 28 29 30 31 32 33 | #include "config.h" #include "terminal.h" #include <assert.h> #ifdef _WIN32 # include <windows.h> #else #include <sys/ioctl.h> #include <stdio.h> #include <unistd.h> #endif |
︙ | ︙ |
Changes to src/th.c.
︙ | ︙ | |||
2868 2869 2870 2871 2872 2873 2874 | /* ** Set the result of the interpreter to the th1 representation of ** the integer iVal and return TH_OK. */ int Th_SetResultInt(Th_Interp *interp, int iVal){ int isNegative = 0; | < | | | | | 2868 2869 2870 2871 2872 2873 2874 2875 2876 2877 2878 2879 2880 2881 2882 2883 2884 2885 2886 2887 2888 2889 2890 2891 2892 | /* ** Set the result of the interpreter to the th1 representation of ** the integer iVal and return TH_OK. */ int Th_SetResultInt(Th_Interp *interp, int iVal){ int isNegative = 0; char zBuf[32]; char *z = &zBuf[32]; if( iVal<0 ){ isNegative = 1; iVal = iVal * -1; } *(--z) = '\0'; *(--z) = (char)(48+((unsigned)iVal%10)); while( (iVal = ((unsigned)iVal/10))>0 ){ *(--z) = (char)(48+((unsigned)iVal%10)); assert(z>zBuf); } if( isNegative ){ *(--z) = '-'; } return Th_SetResult(interp, z, -1); |
︙ | ︙ |
Changes to src/th_main.c.
︙ | ︙ | |||
29 30 31 32 33 34 35 | */ #define TH_INIT_NONE ((u32)0x00000000) /* No flags. */ #define TH_INIT_NEED_CONFIG ((u32)0x00000001) /* Open configuration first? */ #define TH_INIT_FORCE_TCL ((u32)0x00000002) /* Force Tcl to be enabled? */ #define TH_INIT_FORCE_RESET ((u32)0x00000004) /* Force TH1 commands re-added? */ #define TH_INIT_FORCE_SETUP ((u32)0x00000008) /* Force eval of setup script? */ #define TH_INIT_NO_REPO ((u32)0x00000010) /* Skip opening repository. */ | | < | 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 | */ #define TH_INIT_NONE ((u32)0x00000000) /* No flags. */ #define TH_INIT_NEED_CONFIG ((u32)0x00000001) /* Open configuration first? */ #define TH_INIT_FORCE_TCL ((u32)0x00000002) /* Force Tcl to be enabled? */ #define TH_INIT_FORCE_RESET ((u32)0x00000004) /* Force TH1 commands re-added? */ #define TH_INIT_FORCE_SETUP ((u32)0x00000008) /* Force eval of setup script? */ #define TH_INIT_NO_REPO ((u32)0x00000010) /* Skip opening repository. */ #define TH_INIT_NO_ENCODE ((u32)0x00000020) /* Do not html-encode sendText() output. */ #define TH_INIT_MASK ((u32)0x0000003F) /* All possible init flags. */ /* ** Useful and/or "well-known" combinations of flag values. */ #define TH_INIT_DEFAULT (TH_INIT_NONE) /* Default flags. */ #define TH_INIT_HOOK (TH_INIT_NEED_CONFIG | TH_INIT_FORCE_SETUP) |
︙ | ︙ |
Changes to src/th_tcl.c.
︙ | ︙ | |||
1162 1163 1164 1165 1166 1167 1168 | Tcl_DeleteInterp(tclInterp); /* TODO: Redundant? */ tclInterp = 0; return TH_ERROR; } tclContext->interp = tclInterp; if( Tcl_Init(tclInterp)!=TCL_OK ){ Th_ErrorMessage(interp, | | < | < | 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 | Tcl_DeleteInterp(tclInterp); /* TODO: Redundant? */ tclInterp = 0; return TH_ERROR; } tclContext->interp = tclInterp; if( Tcl_Init(tclInterp)!=TCL_OK ){ Th_ErrorMessage(interp, "Tcl initialization error:", Tcl_GetString(Tcl_GetObjResult(tclInterp)), -1); Tcl_DeleteInterp(tclInterp); tclContext->interp = tclInterp = 0; return TH_ERROR; } if( setTclArguments(tclInterp, argc, argv)!=TCL_OK ){ Th_ErrorMessage(interp, "Tcl error setting arguments:", Tcl_GetString(Tcl_GetObjResult(tclInterp)), -1); Tcl_DeleteInterp(tclInterp); tclContext->interp = tclInterp = 0; return TH_ERROR; } /* ** Determine (and cache) if an objProc can be called directly for a Tcl ** command invoked via the tclInvoke TH1 command. |
︙ | ︙ | |||
1194 1195 1196 1197 1198 1199 1200 | Tcl_CallWhenDeleted(tclInterp, Th1DeleteProc, interp); Tcl_CreateObjCommand(tclInterp, "th1Eval", Th1EvalObjCmd, interp, NULL); Tcl_CreateObjCommand(tclInterp, "th1Expr", Th1ExprObjCmd, interp, NULL); /* If necessary, evaluate the custom Tcl setup script. */ setup = tclContext->setup; if( setup && Tcl_EvalEx(tclInterp, setup, -1, 0)!=TCL_OK ){ Th_ErrorMessage(interp, | | < | 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 | Tcl_CallWhenDeleted(tclInterp, Th1DeleteProc, interp); Tcl_CreateObjCommand(tclInterp, "th1Eval", Th1EvalObjCmd, interp, NULL); Tcl_CreateObjCommand(tclInterp, "th1Expr", Th1ExprObjCmd, interp, NULL); /* If necessary, evaluate the custom Tcl setup script. */ setup = tclContext->setup; if( setup && Tcl_EvalEx(tclInterp, setup, -1, 0)!=TCL_OK ){ Th_ErrorMessage(interp, "Tcl setup script error:", Tcl_GetString(Tcl_GetObjResult(tclInterp)), -1); Tcl_DeleteInterp(tclInterp); tclContext->interp = tclInterp = 0; return TH_ERROR; } return TH_OK; } |
︙ | ︙ |
Changes to src/timeline.c.
︙ | ︙ | |||
33 34 35 36 37 38 39 | */ #define TIMELINE_MODE_NONE 0 #define TIMELINE_MODE_BEFORE 1 #define TIMELINE_MODE_AFTER 2 #define TIMELINE_MODE_CHILDREN 3 #define TIMELINE_MODE_PARENTS 4 | < < < < < < < | 33 34 35 36 37 38 39 40 41 42 43 44 45 46 | */ #define TIMELINE_MODE_NONE 0 #define TIMELINE_MODE_BEFORE 1 #define TIMELINE_MODE_AFTER 2 #define TIMELINE_MODE_CHILDREN 3 #define TIMELINE_MODE_PARENTS 4 /* ** Add an appropriate tag to the output if "rid" is unpublished (private) */ #define UNPUB_TAG "<em>(unpublished)</em>" void tag_private_status(int rid){ if( content_is_private(rid) ){ cgi_printf(" %s", UNPUB_TAG); |
︙ | ︙ | |||
151 152 153 154 155 156 157 | db_bind_int(&q, "$rid", rid); res = db_step(&q)==SQLITE_ROW; db_reset(&q); return res; } /* | | | 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 | db_bind_int(&q, "$rid", rid); res = db_step(&q)==SQLITE_ROW; db_reset(&q); return res; } /* ** Return the text of the unformatted ** forum post given by the RID in the argument. */ static void forum_post_content_function( sqlite3_context *context, int argc, sqlite3_value **argv ){ |
︙ | ︙ | |||
364 365 366 367 368 369 370 | int isClosed = 0; if( is_ticket(zTktid, &isClosed) && isClosed ){ zExtraClass = " tktTlClosed"; }else{ zExtraClass = " tktTlOpen"; } fossil_free(zTktid); | | | 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 | int isClosed = 0; if( is_ticket(zTktid, &isClosed) && isClosed ){ zExtraClass = " tktTlClosed"; }else{ zExtraClass = " tktTlOpen"; } fossil_free(zTktid); } } if( zType[0]=='e' && tagid ){ if( bTimestampLinksToInfo ){ char *zId; zId = db_text(0, "SELECT substr(tagname, 7) FROM tag WHERE tagid=%d", tagid); zDateLink = href("%R/technote/%s",zId); |
︙ | ︙ | |||
674 675 676 677 678 679 680 | cgi_printf(" tags: %h", zTagList); } } if( tmFlags & TIMELINE_SHOWRID ){ int srcId = delta_source_rid(rid); if( srcId ){ | | | 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 | cgi_printf(" tags: %h", zTagList); } } if( tmFlags & TIMELINE_SHOWRID ){ int srcId = delta_source_rid(rid); if( srcId ){ cgi_printf(" id: %z%d←%d</a>", href("%R/deltachain/%d",rid), rid, srcId); }else{ cgi_printf(" id: %z%d</a>", href("%R/deltachain/%d",rid), rid); } } tag_private_status(rid); |
︙ | ︙ | |||
1422 1423 1424 1425 1426 1427 1428 | zIntro = "regular expression "; }else/* if( matchStyle==MS_BRLIST )*/{ zStart = "tagname IN ('sym-"; zDelimiter = "','sym-"; zEnd = "')"; zPrefix = ""; zSuffix = ""; | | | 1415 1416 1417 1418 1419 1420 1421 1422 1423 1424 1425 1426 1427 1428 1429 | zIntro = "regular expression "; }else/* if( matchStyle==MS_BRLIST )*/{ zStart = "tagname IN ('sym-"; zDelimiter = "','sym-"; zEnd = "')"; zPrefix = ""; zSuffix = ""; zIntro = "any of "; } /* Convert the list of matches into an SQL expression and text description. */ blob_zero(&expr); blob_zero(&desc); blob_zero(&err); while( 1 ){ |
︙ | ︙ | |||
1566 1567 1568 1569 1570 1571 1572 | } zEDate[j] = 0; /* It looks like this may be a date. Return it with punctuation added. */ return zEDate; } | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | | | < | < < < < < < | 1559 1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 1602 1603 1604 1605 1606 1607 1608 1609 1610 1611 1612 1613 1614 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 1626 1627 | } zEDate[j] = 0; /* It looks like this may be a date. Return it with punctuation added. */ return zEDate; } /* ** WEBPAGE: timeline ** ** Query parameters: ** ** a=TIMEORTAG Show events after TIMEORTAG ** b=TIMEORTAG Show events before TIMEORTAG ** c=TIMEORTAG Show events that happen "circa" TIMEORTAG ** cf=FILEHASH Show events around the time of the first use of ** the file with FILEHASH ** m=TIMEORTAG Highlight the event at TIMEORTAG, or the closest available ** event if TIMEORTAG is not part of the timeline. If ** the t= or r= is used, the m event is added to the timeline ** if it isn't there already. ** sel1=TIMEORTAG Highlight the check-in at TIMEORTAG if it is part of ** the timeline. Similar to m= except TIMEORTAG must ** match a check-in that is already in the timeline. ** sel2=TIMEORTAG Like sel1= but use the secondary highlight. ** n=COUNT Maximum number of events. "all" for no limit ** n1=COUNT Same as "n" but doesn't set the display-preference cookie ** Use "n1=COUNT" for a one-time display change ** p=CHECKIN Parents and ancestors of CHECKIN ** bt=PRIOR ... going back to PRIOR ** d=CHECKIN Children and descendants of CHECKIN ** ft=DESCENDANT ... 
going forward to DESCENDANT ** dp=CHECKIN Same as 'd=CHECKIN&p=CHECKIN' ** df=CHECKIN Same as 'd=CHECKIN&n1=all&nd'. Mnemonic: "Derived From" ** bt=CHECKIN In conjunction with p=CX, this means show all ** ancestors of CX going back to the time of CHECKIN. ** All qualifying check-ins are shown unless there ** is also an n= or n1= query parameter. ** t=TAG Show only check-ins with the given TAG ** r=TAG Show check-ins related to TAG, equivalent to t=TAG&rel ** rel Show related check-ins as well as those matching t=TAG ** mionly Limit rel to show ancestors but not descendants ** nowiki Do not show wiki associated with branch or tag ** ms=MATCHSTYLE Set tag match style to EXACT, GLOB, LIKE, REGEXP ** u=USER Only show items associated with USER ** y=TYPE 'ci', 'w', 't', 'n', 'e', 'f', or 'all'. ** ss=VIEWSTYLE c: "Compact", v: "Verbose", m: "Modern", j: "Columnar", ** x: "Classic". ** advm Use the "Advanced" or "Busy" menu design. ** ng No Graph. ** ncp Omit cherrypick merges ** nd Do not highlight the focus check-in ** nsm Omit the submenu ** nc Omit all graph colors other than highlights ** v Show details of files changed ** vfx Show complete text of forum messages ** f=CHECKIN Show family (immediate parents and children) of CHECKIN ** from=CHECKIN Path from... ** to=CHECKIN ... to this ** shortest ... show only the shortest path ** rel ... also show related check-ins ** uf=FILE_HASH Show only check-ins that contain the given file version ** All qualifying check-ins are shown unless there is ** also an n= or n1= query parameter. ** chng=GLOBLIST Show only check-ins that involve changes to a file whose ** name matches one of the comma-separated GLOBLIST ** brbg Background color determined by branch name ** ubg Background color determined by user
︙ | ︙ | |||
1836 1837 1838 1839 1840 1841 1842 | const char *zBisect = P("bid"); /* Bisect description */ int cpOnly = PB("cherrypicks"); /* Show all cherrypick checkins */ int tmFlags = 0; /* Timeline flags */ const char *zThisTag = 0; /* Suppress links to this tag */ const char *zThisUser = 0; /* Suppress links to this user */ HQuery url; /* URL for various branch links */ int from_rid = name_to_typed_rid(P("from"),"ci"); /* from= for paths */ | < | < | 1696 1697 1698 1699 1700 1701 1702 1703 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713 1714 1715 1716 1717 1718 1719 1720 1721 1722 1723 1724 1725 1726 1727 | const char *zBisect = P("bid"); /* Bisect description */ int cpOnly = PB("cherrypicks"); /* Show all cherrypick checkins */ int tmFlags = 0; /* Timeline flags */ const char *zThisTag = 0; /* Suppress links to this tag */ const char *zThisUser = 0; /* Suppress links to this user */ HQuery url; /* URL for various branch links */ int from_rid = name_to_typed_rid(P("from"),"ci"); /* from= for paths */ int to_rid = name_to_typed_rid(P("to"),"ci"); /* to= for path timelines */ int noMerge = P("shortest")==0; /* Follow merge links if shorter */ int me_rid = name_to_typed_rid(P("me"),"ci"); /* me= for common ancestry */ int you_rid = name_to_typed_rid(P("you"),"ci");/* you= for common ancestry */ int pd_rid; double rBefore, rAfter, rCirca; /* Boundary times */ const char *z; char *zOlderButton = 0; /* URL for Older button at the bottom */ char *zOlderButtonLabel = 0; /* Label for the Older Button */ char *zNewerButton = 0; /* URL for Newer button at the top */ char *zNewerButtonLabel = 0; /* Label for the Newer button */ int selectedRid = 0; /* Show a highlight on this RID */ int secondaryRid = 0; /* Show secondary highlight */ int disableY = 0; /* Disable type selector on submenu */ int advancedMenu = 0; /* Use the advanced menu design */ char *zPlural; /* Ending for plural forms */ int showCherrypicks = 1; /* True to show cherrypick merges */ int haveParameterN; /* True if
n= query parameter present */ url_initialize(&url, "timeline"); cgi_query_parameters_to_url(&url); (void)P_NoBot("ss") /* "ss" is processed via the udc but at least one spider likes to ** try to SQL inject via this argument, so let's catch that. */; |
︙ | ︙ | |||
1918 1919 1920 1921 1922 1923 1924 | } /* Undocumented query parameter to set JS mode */ builtin_set_js_delivery_mode(P("jsmode"),1); secondaryRid = name_to_typed_rid(P("sel2"),"ci"); selectedRid = name_to_typed_rid(P("sel1"),"ci"); | < < < < | 1776 1777 1778 1779 1780 1781 1782 1783 1784 1785 1786 1787 1788 1789 | } /* Undocumented query parameter to set JS mode */ builtin_set_js_delivery_mode(P("jsmode"),1); secondaryRid = name_to_typed_rid(P("sel2"),"ci"); selectedRid = name_to_typed_rid(P("sel1"),"ci"); tmFlags |= timeline_ss_submenu(); cookie_link_parameter("advm","advm","0"); advancedMenu = atoi(PD("advm","0")); /* Omit all cherry-pick merge lines if the "ncp" query parameter is ** present or if this repository lacks a "cherrypick" table. */ if( PB("ncp") || !db_table_exists("repository","cherrypick") ){ |
︙ | ︙ | |||
1975 1976 1977 1978 1979 1980 1981 | " WHERE mlink.fid=(SELECT rid FROM blob WHERE uuid LIKE '%q%%')" " AND event.objid=mlink.mid" " ORDER BY event.mtime LIMIT 1", P("cf") ); } | < < < < < < < < < < < < < < | 1829 1830 1831 1832 1833 1834 1835 1836 1837 1838 1839 1840 1841 1842 | " WHERE mlink.fid=(SELECT rid FROM blob WHERE uuid LIKE '%q%%')" " AND event.objid=mlink.mid" " ORDER BY event.mtime LIMIT 1", P("cf") ); } /* Convert r=TAG to t=TAG&rel in order to populate the UI style widgets. */ if( zBrName && !related ){ cgi_delete_query_parameter("r"); cgi_set_query_parameter("t", zBrName); (void)P("t"); cgi_set_query_parameter("rel", "1"); zTagName = zBrName; related = 1; |
︙ | ︙ | |||
2173 2174 2175 2176 2177 2178 2179 | if( (tmFlags & TIMELINE_UNHIDE)==0 ){ blob_append_sql(&sql, " AND NOT EXISTS(SELECT 1 FROM tagxref" " WHERE tagid=%d AND tagtype>0 AND rid=blob.rid)\n", TAG_HIDDEN ); } | < < < < < < < < < < < < < < < < < < < < < | < < < < < | | 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 2025 2026 2027 2028 2029 2030 2031 2032 2033 2034 2035 2036 2037 2038 2039 | if( (tmFlags & TIMELINE_UNHIDE)==0 ){ blob_append_sql(&sql, " AND NOT EXISTS(SELECT 1 FROM tagxref" " WHERE tagid=%d AND tagtype>0 AND rid=blob.rid)\n", TAG_HIDDEN ); } if( ((from_rid && to_rid) || (me_rid && you_rid)) && g.perm.Read ){ /* If from= and to= are present, display all nodes on a path connecting ** the two */ PathNode *p = 0; const char *zFrom = 0; const char *zTo = 0; Blob ins; int nNodeOnPath = 0; if( from_rid && to_rid ){ p = path_shortest(from_rid, to_rid, noMerge, 0, 0); zFrom = P("from"); zTo = P("to"); }else{ if( path_common_ancestor(me_rid, you_rid) ){ p = path_first(); } zFrom = P("me"); zTo = P("you"); } |
︙ | ︙ | |||
2277 2278 2279 2280 2281 2282 2283 | } tmFlags |= TIMELINE_XMERGE | TIMELINE_FILLGAPS; db_multi_exec("%s", blob_sql_text(&sql)); if( advancedMenu ){ style_submenu_checkbox("v", "Files", (zType[0]!='a' && zType[0]!='c'),0); } nNodeOnPath = db_int(0, "SELECT count(*) FROM temp.pathnode"); | < < < < < | < < < < < < < < | < < < | < | | | | | < | 2091 2092 2093 2094 2095 2096 2097 2098 2099 2100 2101 2102 2103 2104 2105 2106 2107 2108 2109 2110 2111 2112 2113 | } tmFlags |= TIMELINE_XMERGE | TIMELINE_FILLGAPS; db_multi_exec("%s", blob_sql_text(&sql)); if( advancedMenu ){ style_submenu_checkbox("v", "Files", (zType[0]!='a' && zType[0]!='c'),0); } nNodeOnPath = db_int(0, "SELECT count(*) FROM temp.pathnode"); blob_appendf(&desc, "%d check-ins going from ", nNodeOnPath); blob_appendf(&desc, "%z%h</a>", href("%R/info/%h", zFrom), zFrom); blob_append(&desc, " to ", -1); blob_appendf(&desc, "%z%h</a>", href("%R/info/%h",zTo), zTo); if( related ){ int nRelated = db_int(0, "SELECT count(*) FROM timeline") - nNodeOnPath; if( nRelated>0 ){ blob_appendf(&desc, " and %d related check-in%s", nRelated, nRelated>1 ? "s" : ""); } } addFileGlobDescription(zChng, &desc); }else if( (p_rid || d_rid) && g.perm.Read && zTagSql==0 ){ /* If p= or d= is present, ignore all other parameters other than n= */ char *zUuid; const char *zCiName; |
︙ | ︙ | |||
2399 2400 2401 2402 2403 2404 2405 | } blob_appendf(&desc, " of %z%h</a>", href("%R/info?name=%h", zCiName), zCiName); if( ridBackTo ){ if( np==0 ){ blob_reset(&desc); | | | | 2195 2196 2197 2198 2199 2200 2201 2202 2203 2204 2205 2206 2207 2208 2209 2210 2211 2212 2213 2214 2215 2216 2217 2218 2219 2220 2221 2222 2223 2224 | } blob_appendf(&desc, " of %z%h</a>", href("%R/info?name=%h", zCiName), zCiName); if( ridBackTo ){ if( np==0 ){ blob_reset(&desc); blob_appendf(&desc, "Check-in %z%h</a> only (%z%h</a> is not an ancestor)", href("%R/info?name=%h",zCiName), zCiName, href("%R/info?name=%h",zBackTo), zBackTo); }else{ blob_appendf(&desc, " back to %z%h</a>", href("%R/info?name=%h",zBackTo), zBackTo); if( ridFwdTo && zFwdTo ){ blob_appendf(&desc, " and up to %z%h</a>", href("%R/info?name=%h",zFwdTo), zFwdTo); } } }else if( ridFwdTo ){ if( nd==0 ){ blob_reset(&desc); blob_appendf(&desc, "Check-in %z%h</a> only (%z%h</a> is not a descendant)", href("%R/info?name=%h",zCiName), zCiName, href("%R/info?name=%h",zFwdTo), zFwdTo); }else{ blob_appendf(&desc, " up to %z%h</a>", href("%R/info?name=%h",zFwdTo), zFwdTo); }
︙ | ︙ | |||
2668 2669 2670 2671 2672 2673 2674 | if( zMark ){ /* If the t=release option is used with m=UUID, then also ** include the UUID check-in in the display list */ int ridMark = name_to_rid(zMark); db_multi_exec( "INSERT OR IGNORE INTO selected_nodes(rid) VALUES(%d)", ridMark); } | < < < < < < < < < < < < < < < < < | 2464 2465 2466 2467 2468 2469 2470 2471 2472 2473 2474 2475 2476 2477 | if( zMark ){ /* If the t=release option is used with m=UUID, then also ** include the UUID check-in in the display list */ int ridMark = name_to_rid(zMark); db_multi_exec( "INSERT OR IGNORE INTO selected_nodes(rid) VALUES(%d)", ridMark); } if( !related ){ blob_append_sql(&cond, " AND blob.rid IN selected_nodes"); }else{ db_multi_exec( "CREATE TEMP TABLE related_nodes(rid INTEGER PRIMARY KEY);" "INSERT INTO related_nodes SELECT rid FROM selected_nodes;" ); |
︙ | ︙ | |||
2853 2854 2855 2856 2857 2858 2859 | } if( PB("showsql") ){ @ <pre>%h(blob_sql_text(&sql2))</pre> } db_multi_exec("%s", blob_sql_text(&sql2)); if( nEntry>0 ){ nEntry -= db_int(0,"select count(*) from timeline"); | < | 2632 2633 2634 2635 2636 2637 2638 2639 2640 2641 2642 2643 2644 2645 | } if( PB("showsql") ){ @ <pre>%h(blob_sql_text(&sql2))</pre> } db_multi_exec("%s", blob_sql_text(&sql2)); if( nEntry>0 ){ nEntry -= db_int(0,"select count(*) from timeline"); } blob_reset(&sql2); blob_append_sql(&sql, " AND event.mtime<=%f ORDER BY event.mtime DESC", rCirca ); if( zMark==0 ) zMark = zCirca; |
︙ | ︙ | |||
2912 2913 2914 2915 2916 2917 2918 | tmFlags |= TIMELINE_CHPICK|TIMELINE_DISJOINT; } if( zUser ){ blob_appendf(&desc, " by user %h", zUser); tmFlags |= TIMELINE_XMERGE | TIMELINE_FILLGAPS; } if( zTagSql ){ | | | 2690 2691 2692 2693 2694 2695 2696 2697 2698 2699 2700 2701 2702 2703 2704 | tmFlags |= TIMELINE_CHPICK|TIMELINE_DISJOINT; } if( zUser ){ blob_appendf(&desc, " by user %h", zUser); tmFlags |= TIMELINE_XMERGE | TIMELINE_FILLGAPS; } if( zTagSql ){ if( matchStyle==MS_EXACT ){ if( related ){ blob_appendf(&desc, " related to %h", zMatchDesc); }else{ blob_appendf(&desc, " tagged with %h", zMatchDesc); } }else{ if( related ){ |
︙ | ︙ | |||
3195 3196 3197 3198 3199 3200 3201 | ** 6. mtime ** 7. branch ** 8. event-type: 'ci', 'w', 't', 'f', and so forth. ** 9. comment ** 10. user ** 11. tags */ | | < < < < < < < < | 2973 2974 2975 2976 2977 2978 2979 2980 2981 2982 2983 2984 2985 2986 2987 2988 2989 2990 2991 2992 2993 2994 2995 | ** 6. mtime ** 7. branch ** 8. event-type: 'ci', 'w', 't', 'f', and so forth. ** 9. comment ** 10. user ** 11. tags */ void print_timeline(Stmt *q, int nLimit, int width, const char *zFormat, int verboseFlag){ int nAbsLimit = (nLimit >= 0) ? nLimit : -nLimit; int nLine = 0; int nEntry = 0; char zPrevDate[20]; const char *zCurrentUuid = 0; int fchngQueryInit = 0; /* True if fchngQuery is initialized */ Stmt fchngQuery; /* Query for file changes on check-ins */ int rc; zPrevDate[0] = 0; if( g.localOpen ){ int rid = db_lget_int("checkout", 0); zCurrentUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); } |
︙ | ︙ | |||
3303 3304 3305 3306 3307 3308 3309 | if( zFormat ){ char *zEntry; int nEntryLine = 0; if( nChild==0 ){ sqlite3_snprintf(sizeof(zPrefix)-n, &zPrefix[n], "*LEAF* "); } | | | < | 3073 3074 3075 3076 3077 3078 3079 3080 3081 3082 3083 3084 3085 3086 3087 3088 | if( zFormat ){ char *zEntry; int nEntryLine = 0; if( nChild==0 ){ sqlite3_snprintf(sizeof(zPrefix)-n, &zPrefix[n], "*LEAF* "); } zEntry = timeline_entry_subst(zFormat, &nEntryLine, zId, zDate, zUserShort, zComShort, zBranch, zTags, zPrefix); nLine += nEntryLine; fossil_print("%s\n", zEntry); fossil_free(zEntry); } else{ /* record another X lines */ nLine += comment_print(zFree, zCom, 9, width, get_comment_format()); |
︙ | ︙ | |||
3345 3346 3347 3348 3349 3350 3351 | fossil_print(" DELETED %s\n",zFilename); }else{ fossil_print(" EDITED %s\n", zFilename); } nLine++; /* record another line */ } db_reset(&fchngQuery); | < < < > > > > | | 3114 3115 3116 3117 3118 3119 3120 3121 3122 3123 3124 3125 3126 3127 3128 3129 3130 3131 3132 3133 | fossil_print(" DELETED %s\n",zFilename); }else{ fossil_print(" EDITED %s\n", zFilename); } nLine++; /* record another line */ } db_reset(&fchngQuery); } /* With special formatting (except for "oneline") and --verbose, ** print a newline after the file listing */ if( zFormat!=0 && (fossil_strcmp(zFormat, "%h %c")!=0) ){ fossil_print("\n"); } nEntry++; /* record another complete entry */ } if( rc==SQLITE_DONE ){ /* Did the underlying query actually have all entries? */ if( nAbsLimit==0 ){ fossil_print("+++ end of timeline (%d) +++\n", nEntry); }else{ |
︙ | ︙ | |||
3393 3394 3395 3396 3397 3398 3399 | @ event.type @ , coalesce(ecomment,comment) AS comment0 @ , coalesce(euser,user,'?') AS user0 @ , (SELECT case when length(x)>0 then x else '' end @ FROM (SELECT group_concat(substr(tagname,5), ', ') AS x @ FROM tag, tagxref @ WHERE tagname GLOB 'sym-*' AND tag.tagid=tagxref.tagid | | | 3163 3164 3165 3166 3167 3168 3169 3170 3171 3172 3173 3174 3175 3176 3177 | @ event.type @ , coalesce(ecomment,comment) AS comment0 @ , coalesce(euser,user,'?') AS user0 @ , (SELECT case when length(x)>0 then x else '' end @ FROM (SELECT group_concat(substr(tagname,5), ', ') AS x @ FROM tag, tagxref @ WHERE tagname GLOB 'sym-*' AND tag.tagid=tagxref.tagid @ AND tagxref.rid=blob.rid AND tagxref.tagtype>0)) AS tags @ FROM tag CROSS JOIN event CROSS JOIN blob @ LEFT JOIN tagxref ON tagxref.tagid=tag.tagid @ AND tagxref.tagtype>0 @ AND tagxref.rid=blob.rid @ WHERE blob.rid=event.objid @ AND tag.tagname='branch' ; |
︙ | ︙ | |||
3454 3455 3456 3457 3458 3459 3460 | ** means UTC. ** ** ** Options: ** -b|--branch BRANCH Show only items on the branch named BRANCH ** -c|--current-branch Show only items on the current branch ** -F|--format Entry format. Values "oneline", "medium", and "full" | | | | 3224 3225 3226 3227 3228 3229 3230 3231 3232 3233 3234 3235 3236 3237 3238 3239 3240 3241 3242 3243 3244 3245 3246 | ** means UTC. ** ** ** Options: ** -b|--branch BRANCH Show only items on the branch named BRANCH ** -c|--current-branch Show only items on the current branch ** -F|--format Entry format. Values "oneline", "medium", and "full" ** get mapped to the full options below. Otherwise a ** string which can contain these placeholders: ** %n newline ** %% a raw % ** %H commit hash ** %h abbreviated commit hash ** %a author name ** %d date ** %c comment (NL, TAB replaced by space, LF deleted) ** %b branch ** %t tags ** %p phase: zero or more of *CURRENT*, *MERGE*, ** *FORK*, *UNPUBLISHED*, *LEAF*, *BRANCH* ** --oneline Show only short hash and comment for each entry ** --medium Medium-verbose entry formatting ** --full Extra verbose entry formatting |
︙ | ︙ | |||
3534 3535 3536 3537 3538 3539 3540 | fossil_fatal("not within an open check-out"); }else{ int vid = db_lget_int("checkout", 0); zBr = db_text(0, "SELECT value FROM tagxref WHERE rid=%d AND tagid=%d", vid, TAG_BRANCH); } } | | | < | | < | | < > | 3304 3305 3306 3307 3308 3309 3310 3311 3312 3313 3314 3315 3316 3317 3318 3319 3320 3321 3322 3323 3324 | fossil_fatal("not within an open check-out"); }else{ int vid = db_lget_int("checkout", 0); zBr = db_text(0, "SELECT value FROM tagxref WHERE rid=%d AND tagid=%d", vid, TAG_BRANCH); } } if( find_option("oneline",0,0)!= 0 || fossil_strcmp(zFormat,"oneline")==0 ) zFormat = "%h %c"; if( find_option("medium",0,0)!= 0 || fossil_strcmp(zFormat,"medium")==0 ) zFormat = "Commit: %h%nDate: %d%nAuthor: %a%nComment: %c"; if( find_option("full",0,0)!= 0 || fossil_strcmp(zFormat,"full")==0 ) zFormat = "Commit: %H%nDate: %d%nAuthor: %a%nComment: %c%n" "Branch: %b%nTags: %t%nPhase: %p"; showSql = find_option("sql",0,0)!=0; if( !zLimit ){ zLimit = find_option("count",0,1); } if( zLimit ){ n = atoi(zLimit); |
︙ | ︙ |
Changes to src/tkt.c.
︙ | ︙ | |||
555 556 557 558 559 560 561 | case SQLITE_CREATE_VIEW: case SQLITE_CREATE_TABLE: { if( sqlite3_stricmp(z2,"main")!=0 && sqlite3_stricmp(z2,"repository")!=0 ){ goto ticket_schema_error; } | | | 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 | case SQLITE_CREATE_VIEW: case SQLITE_CREATE_TABLE: { if( sqlite3_stricmp(z2,"main")!=0 && sqlite3_stricmp(z2,"repository")!=0 ){ goto ticket_schema_error; } if( sqlite3_strnicmp(z0,"ticket",6)!=0 && sqlite3_strnicmp(z0,"fx_",3)!=0 ){ goto ticket_schema_error; } break; } case SQLITE_DROP_INDEX: |
︙ | ︙ | |||
1211 1212 1213 1214 1215 1216 1217 | } /* ** WEBPAGE: tkttimeline ** URL: /tkttimeline/TICKETUUID ** ** Show the change history for a single ticket in timeline format. | | | 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 | } /* ** WEBPAGE: tkttimeline ** URL: /tkttimeline/TICKETUUID ** ** Show the change history for a single ticket in timeline format. ** ** Query parameters: ** ** y=ci Show only check-ins associated with the ticket */ void tkttimeline_page(void){ char *zTitle; const char *zUuid; |
︙ | ︙ |
Changes to src/unicode.c.
︙ | ︙ | |||
238 239 240 241 242 243 244 | iLo = iTest+1; }else{ iHi = iTest-1; } } assert( key>=aDia[iRes] ); if( bComplex==0 && (aChar[iRes] & 0x80) ) return c; | | < | 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 | iLo = iTest+1; }else{ iHi = iTest-1; } } assert( key>=aDia[iRes] ); if( bComplex==0 && (aChar[iRes] & 0x80) ) return c; return (c > (aDia[iRes]>>3) + (aDia[iRes]&0x07)) ? c : ((int)aChar[iRes] & 0x7F); } /* ** Return true if the argument interpreted as a unicode codepoint ** is a diacritical modifier character. */ |
︙ | ︙ |
Changes to src/unversioned.c.
︙ | ︙ | |||
306 307 308 309 310 311 312 | nCmd = (int)strlen(zCmd); if( zMtime==0 ){ mtime = time(0); }else{ mtime = db_int(0, "SELECT strftime('%%s',%Q)", zMtime); if( mtime<=0 ) fossil_fatal("bad timestamp: %Q", zMtime); } | | | 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 | nCmd = (int)strlen(zCmd); if( zMtime==0 ){ mtime = time(0); }else{ mtime = db_int(0, "SELECT strftime('%%s',%Q)", zMtime); if( mtime<=0 ) fossil_fatal("bad timestamp: %Q", zMtime); } if( memcmp(zCmd, "add", nCmd)==0 ){ const char *zError = 0; const char *zIn; const char *zAs; Blob file; int i; zAs = find_option("as",0,1); |
︙ | ︙ | |||
338 339 340 341 342 343 344 | } blob_init(&file,0,0); blob_read_from_file(&file, g.argv[i], ExtFILE); unversioned_write(zIn, &file, mtime); blob_reset(&file); } db_end_transaction(0); | | | | 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 | } blob_init(&file,0,0); blob_read_from_file(&file, g.argv[i], ExtFILE); unversioned_write(zIn, &file, mtime); blob_reset(&file); } db_end_transaction(0); }else if( memcmp(zCmd, "cat", nCmd)==0 ){ int i; verify_all_options(); db_begin_transaction(); for(i=3; i<g.argc; i++){ Blob content; if( unversioned_content(g.argv[i], &content)!=0 ){ blob_write_to_file(&content, "-"); } blob_reset(&content); } db_end_transaction(0); }else if( memcmp(zCmd, "edit", nCmd)==0 ){ const char *zEditor; /* Name of the text-editor command */ const char *zTFile; /* Temporary file */ const char *zUVFile; /* Name of the unversioned file */ char *zCmd; /* Command to run the text editor */ Blob content; /* Content of the unversioned file */ verify_all_options(); |
︙ | ︙ | |||
393 394 395 396 397 398 399 | blob_to_lf_only(&content); #endif file_delete(zTFile); if( zMtime==0 ) mtime = time(0); unversioned_write(zUVFile, &content, mtime); db_end_transaction(0); blob_reset(&content); | | | | | 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 | blob_to_lf_only(&content); #endif file_delete(zTFile); if( zMtime==0 ) mtime = time(0); unversioned_write(zUVFile, &content, mtime); db_end_transaction(0); blob_reset(&content); }else if( memcmp(zCmd, "export", nCmd)==0 ){ Blob content; verify_all_options(); if( g.argc!=5 ) usage("export UVFILE OUTPUT"); if( unversioned_content(g.argv[3], &content)==0 ){ fossil_fatal("no such uv-file: %Q", g.argv[3]); } blob_write_to_file(&content, g.argv[4]); blob_reset(&content); }else if( memcmp(zCmd, "hash", nCmd)==0 ){ /* undocumented */ /* Show the hash value used during uv sync */ int debugFlag = find_option("debug",0,0)!=0; fossil_print("%s\n", unversioned_content_hash(debugFlag)); }else if( memcmp(zCmd, "list", nCmd)==0 || memcmp(zCmd, "ls", nCmd)==0 ){ Stmt q; int allFlag = find_option("all","a",0)!=0; int longFlag = find_option("l",0,0)!=0 || (nCmd>1 && zCmd[1]=='i'); char *zPattern = sqlite3_mprintf("true"); const char *zGlob; zGlob = find_option("glob",0,1); if( zGlob ){ |
︙ | ︙ | |||
462 463 464 465 466 467 468 | db_column_text(&q,4), zNoContent ); } } db_finalize(&q); sqlite3_free(zPattern); | | | | | 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 | db_column_text(&q,4), zNoContent ); } } db_finalize(&q); sqlite3_free(zPattern); }else if( memcmp(zCmd, "revert", nCmd)==0 ){ unsigned syncFlags = unversioned_sync_flags(SYNC_UNVERSIONED|SYNC_UV_REVERT); g.argv[1] = "sync"; g.argv[2] = "--uv-noop"; sync_unversioned(syncFlags); }else if( memcmp(zCmd, "remove", nCmd)==0 || memcmp(zCmd, "rm", nCmd)==0 || memcmp(zCmd, "delete", nCmd)==0 ){ int i; const char *zGlob; db_begin_transaction(); while( (zGlob = find_option("glob",0,1))!=0 ){ db_multi_exec( "UPDATE unversioned" " SET hash=NULL, content=NULL, mtime=%lld, sz=0 WHERE name GLOB %Q", |
︙ | ︙ | |||
497 498 499 500 501 502 503 | "UPDATE unversioned" " SET hash=NULL, content=NULL, mtime=%lld, sz=0 WHERE name=%Q", mtime, g.argv[i] ); } db_unset("uv-hash", 0); db_end_transaction(0); | | | | 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 | "UPDATE unversioned" " SET hash=NULL, content=NULL, mtime=%lld, sz=0 WHERE name=%Q", mtime, g.argv[i] ); } db_unset("uv-hash", 0); db_end_transaction(0); }else if( memcmp(zCmd,"sync",nCmd)==0 ){ unsigned syncFlags = unversioned_sync_flags(SYNC_UNVERSIONED); g.argv[1] = "sync"; g.argv[2] = "--uv-noop"; sync_unversioned(syncFlags); }else if( memcmp(zCmd, "touch", nCmd)==0 ){ int i; verify_all_options(); db_begin_transaction(); for(i=3; i<g.argc; i++){ db_multi_exec( "UPDATE unversioned SET mtime=%lld WHERE name=%Q", mtime, g.argv[i] |
︙ | ︙ | |||
569 570 571 572 573 574 575 | ); iNow = db_int64(0, "SELECT strftime('%%s','now');"); while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); sqlite3_int64 mtime = db_column_int(&q, 1); const char *zHash = db_column_text(&q, 2); int isDeleted = zHash==0; | < < < < | 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 | ); iNow = db_int64(0, "SELECT strftime('%%s','now');"); while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); sqlite3_int64 mtime = db_column_int(&q, 1); const char *zHash = db_column_text(&q, 2); int isDeleted = zHash==0; int fullSize = db_column_int(&q, 3); char *zAge = human_readable_age((iNow - mtime)/86400.0); const char *zLogin = db_column_text(&q, 4); int rcvid = db_column_int(&q,5); if( zLogin==0 ) zLogin = ""; if( (n++)==0 ){ style_table_sorter(); @ <div class="uvlist"> @ <table cellpadding="2" cellspacing="0" border="1" class='sortable' \ @ data-column-types='tkKttn' data-init-sort='1'> @ <thead><tr> @ <th> Name @ <th> Age @ <th> Size @ <th> User @ <th> Hash if( g.perm.Admin ){ @ <th> rcvid } @ </tr></thead> @ <tbody> } @ <tr> |
︙ | ︙ | |||
610 611 612 613 614 615 616 | iTotalSz += fullSize; cnt++; @ <td> <a href='%R/uv/%T(zName)'>%h(zName)</a> </td> } @ <td data-sortkey='%016llx(-mtime)'> %s(zAge) </td> @ <td data-sortkey='%08x(fullSize)'> %s(zSzName) </td> @ <td> %h(zLogin) </td> | | < < | 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 | iTotalSz += fullSize; cnt++; @ <td> <a href='%R/uv/%T(zName)'>%h(zName)</a> </td> } @ <td data-sortkey='%016llx(-mtime)'> %s(zAge) </td> @ <td data-sortkey='%08x(fullSize)'> %s(zSzName) </td> @ <td> %h(zLogin) </td> @ <td> %h(zHash) </td> if( g.perm.Admin ){ if( rcvid ){ @ <td> <a href="%R/rcvfrom?rcvid=%d(rcvid)">%d(rcvid)</a> }else{ @ <td> } } @ </tr> fossil_free(zAge); } db_finalize(&q); if( n ){ approxSizeName(sizeof(zSzName), zSzName, iTotalSz); @ </tbody> @ <tfoot><tr><td><b>Total for %d(cnt) files</b><td><td>%s(zSzName) @ <td><td> if( g.perm.Admin ){ @ <td> } @ </tfoot> @ </table></div> }else{ @ No unversioned files on this server. } style_finish_page(); } |
︙ | ︙ |
Changes to src/update.c.
︙ | ︙ | |||
565 566 567 568 569 570 571 | db_finalize(&q); db_finalize(&mtimeXfer); fossil_print("%.79c\n",'-'); if( nUpdate==0 ){ show_common_info(tid, "checkout:", 1, 0); fossil_print("%-13s None. Already up-to-date\n", "changes:"); }else{ | | | 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 | db_finalize(&q); db_finalize(&mtimeXfer); fossil_print("%.79c\n",'-'); if( nUpdate==0 ){ show_common_info(tid, "checkout:", 1, 0); fossil_print("%-13s None. Already up-to-date\n", "changes:"); }else{ fossil_print("%-13s %.40s %s\n", "updated-from:", rid_to_uuid(vid), db_text("", "SELECT datetime(mtime) || ' UTC' FROM event " " WHERE objid=%d", vid)); show_common_info(tid, "updated-to:", 1, 0); fossil_print("%-13s %d file%s modified.\n", "changes:", nUpdate, nUpdate>1 ? "s" : ""); } |
︙ | ︙ |
Changes to src/url.c.
︙ | ︙ | |||
31 32 33 34 35 36 37 | #endif #endif #if INTERFACE /* ** Flags for url_parse() */ | | | | | | | | | < < < | 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 | #endif #endif #if INTERFACE /* ** Flags for url_parse() */ #define URL_PROMPT_PW 0x001 /* Prompt for password if needed */ #define URL_REMEMBER 0x002 /* Remember the url for later reuse */ #define URL_ASK_REMEMBER_PW 0x004 /* Ask whether to remember prompted pw */ #define URL_REMEMBER_PW 0x008 /* Should remember pw */ #define URL_PROMPTED 0x010 /* Prompted for PW already */ #define URL_OMIT_USER 0x020 /* Omit the user name from URL */ #define URL_USE_CONFIG 0x040 /* Use remembered URLs from CONFIG table */ #define URL_USE_PARENT 0x080 /* Use the URL of the parent project */ /* ** The URL related data used with this subsystem. */ struct UrlData { int isFile; /* True if a "file:" url */ int isHttps; /* True if a "https:" url */ |
︙ | ︙ | |||
90 91 92 93 94 95 96 | ** path Path name for HTTP or HTTPS. ** user Userid. ** passwd Password. ** hostname HOST:PORT or just HOST if port is the default. ** canonical The URL in canonical form, omitting the password ** ** If URL_USECONFIG is set and zUrl is NULL or "default", then parse the | | | | 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 | ** path Path name for HTTP or HTTPS. ** user Userid. ** passwd Password. ** hostname HOST:PORT or just HOST if port is the default. ** canonical The URL in canonical form, omitting the password ** ** If URL_USECONFIG is set and zUrl is NULL or "default", then parse the ** URL stored in last-sync-url and last-sync-pw of the CONFIG table. Or if ** URL_USE_PARENT is also set, then use parent-project-url and ** parent-project-pw from the CONFIG table instead of last-sync-url ** and last-sync-pw. ** ** If URL_USE_CONFIG is set and zUrl is a symbolic name, then look up ** the URL in sync-url:%Q and sync-pw:%Q elements of the CONFIG table where ** %Q is the symbolic name. ** ** This routine differs from url_parse() in that this routine stores the ** results in pUrlData and does not change the values of global variables. ** The url_parse() routine puts its result in g.url. */ void url_parse_local( const char *zUrl, unsigned int urlFlags, UrlData *pUrlData ){ int i, j, c; char *zFile = 0; pUrlData->pwConfig = 0; if( urlFlags & URL_USE_CONFIG ){ if( zUrl==0 || strcmp(zUrl,"default")==0 ){ const char *zPwConfig = "last-sync-pw"; if( urlFlags & URL_USE_PARENT ){ zUrl = db_get("parent-project-url", 0); if( zUrl==0 ){ zUrl = db_get("last-sync-url",0); |
︙ | ︙ | |||
160 161 162 163 164 165 166 167 168 169 170 171 172 173 | || strncmp(zUrl, "ssh://", 6)==0 ){ int iStart; char *zLogin; char *zExe; char cQuerySep = '?'; if( zUrl[4]=='s' ){ pUrlData->isHttps = 1; pUrlData->protocol = "https"; pUrlData->dfltPort = 443; iStart = 8; }else if( zUrl[0]=='s' ){ pUrlData->isSsh = 1; | > > | 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 | || strncmp(zUrl, "ssh://", 6)==0 ){ int iStart; char *zLogin; char *zExe; char cQuerySep = '?'; pUrlData->isFile = 0; pUrlData->useProxy = 0; if( zUrl[4]=='s' ){ pUrlData->isHttps = 1; pUrlData->protocol = "https"; pUrlData->dfltPort = 443; iStart = 8; }else if( zUrl[0]=='s' ){ pUrlData->isSsh = 1; |
︙ | ︙ | |||
254 255 256 257 258 259 260 | while( pUrlData->path[i] && pUrlData->path[i]!='&' ){ i++; } } if( pUrlData->path[i] ){ pUrlData->path[i] = 0; i++; } if( fossil_strcmp(zName,"fossil")==0 ){ | < < | 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 | while( pUrlData->path[i] && pUrlData->path[i]!='&' ){ i++; } } if( pUrlData->path[i] ){ pUrlData->path[i] = 0; i++; } if( fossil_strcmp(zName,"fossil")==0 ){ pUrlData->fossil = fossil_strdup(zValue); dehttpize(pUrlData->fossil); fossil_free(zExe); zExe = mprintf("%cfossil=%T", cQuerySep, pUrlData->fossil); cQuerySep = '&'; } } dehttpize(pUrlData->path); if( pUrlData->dfltPort==pUrlData->port ){ pUrlData->canonical = mprintf( "%s://%s%T%T%z", |
︙ | ︙ | |||
318 319 320 321 322 323 324 | free(zFile); zFile = 0; pUrlData->protocol = "file"; pUrlData->path = mprintf(""); pUrlData->name = mprintf("%b", &cfile); pUrlData->canonical = mprintf("file://%T", pUrlData->name); blob_reset(&cfile); | | | 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 | free(zFile); zFile = 0; pUrlData->protocol = "file"; pUrlData->path = mprintf(""); pUrlData->name = mprintf("%b", &cfile); pUrlData->canonical = mprintf("file://%T", pUrlData->name); blob_reset(&cfile); }else if( pUrlData->user!=0 && pUrlData->passwd==0 && (urlFlags & URL_PROMPT_PW)!=0 ){ url_prompt_for_password_local(pUrlData); }else if( pUrlData->user!=0 && ( urlFlags & URL_ASK_REMEMBER_PW ) ){ if( isatty(fileno(stdin)) && ( urlFlags & URL_REMEMBER_PW )==0 ){ if( save_password_prompt(pUrlData->passwd) ){ pUrlData->flags = urlFlags |= URL_REMEMBER_PW; }else{ |
︙ | ︙ | |||
413 414 415 416 417 418 419 | fossil_free(p->user); fossil_free(p->passwd); fossil_free(p->fossil); fossil_free(p->pwConfig); memset(p, 0, sizeof(*p)); } | < < < < < < < < < | 410 411 412 413 414 415 416 417 418 419 420 421 422 423 | fossil_free(p->user); fossil_free(p->passwd); fossil_free(p->fossil); fossil_free(p->pwConfig); memset(p, 0, sizeof(*p)); } /* ** Parse the given URL, which describes a sync server. Populate variables ** in the global "g.url" structure as shown below. If zUrl is NULL, then ** parse the URL given in the last-sync-url setting, taking the password ** from last-sync-pw. ** ** g.url.isFile True if FILE:
︙ | ︙ | |||
464 465 466 467 468 469 470 | ** set to the CONFIG.NAME value from which that password is taken. Otherwise, ** g.url.pwConfig is NULL. */ void url_parse(const char *zUrl, unsigned int urlFlags){ url_parse_local(zUrl, urlFlags, &g.url); } | < < < < < < < < < < < < < < < < < < < < < < < < < < | 452 453 454 455 456 457 458 459 460 461 462 463 464 465 | ** set to the CONFIG.NAME value from which that password is taken. Otherwise, ** g.url.pwConfig is NULL. */ void url_parse(const char *zUrl, unsigned int urlFlags){ url_parse_local(zUrl, urlFlags, &g.url); } /* ** COMMAND: test-urlparser ** ** Usage: %fossil test-urlparser URL ?options? ** ** --prompt-pw Prompt for password if missing ** --remember Store results in last-sync-url |
︙ | ︙ | |||
518 519 520 521 522 523 524 | if( find_option("show-pw",0,0) ) showPw = 1; if( (fg & URL_USE_CONFIG)==0 ) showPw = 1; if( g.argc!=3 && g.argc!=4 ){ usage("URL"); } url_parse(g.argv[2], fg); for(i=0; i<2; i++){ | > > > > > > > > > > | > > > > > > > > > | 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 | if( find_option("show-pw",0,0) ) showPw = 1; if( (fg & URL_USE_CONFIG)==0 ) showPw = 1; if( g.argc!=3 && g.argc!=4 ){ usage("URL"); } url_parse(g.argv[2], fg); for(i=0; i<2; i++){ fossil_print("g.url.isFile = %d\n", g.url.isFile); fossil_print("g.url.isHttps = %d\n", g.url.isHttps); fossil_print("g.url.isSsh = %d\n", g.url.isSsh); fossil_print("g.url.protocol = %s\n", g.url.protocol); fossil_print("g.url.name = %s\n", g.url.name); fossil_print("g.url.port = %d\n", g.url.port); fossil_print("g.url.dfltPort = %d\n", g.url.dfltPort); fossil_print("g.url.hostname = %s\n", g.url.hostname); fossil_print("g.url.path = %s\n", g.url.path); fossil_print("g.url.user = %s\n", g.url.user); if( showPw || g.url.pwConfig==0 ){ fossil_print("g.url.passwd = %s\n", g.url.passwd); }else{ fossil_print("g.url.passwd = ************\n"); } fossil_print("g.url.pwConfig = %s\n", g.url.pwConfig); fossil_print("g.url.canonical = %s\n", g.url.canonical); fossil_print("g.url.fossil = %s\n", g.url.fossil); fossil_print("g.url.flags = 0x%02x\n", g.url.flags); fossil_print("url_full(g.url) = %z\n", url_full(&g.url)); if( g.url.isFile || g.url.isSsh ) break; if( i==0 ){ fossil_print("********\n"); url_enable_proxy("Using proxy: "); } url_unparse(0); } |
︙ | ︙ | |||
808 809 810 811 812 813 814 | ** Given a URL for a remote repository clone point, try to come up with a ** reasonable basename of a local clone of that repository. ** ** * If the URL has a path, use the tail of the path, with any suffix ** elided. ** ** * If the URL is just a domain name, without a path, then use the | | | 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 | ** Given a URL for a remote repository clone point, try to come up with a ** reasonable basename of a local clone of that repository. ** ** * If the URL has a path, use the tail of the path, with any suffix ** elided. ** ** * If the URL is just a domain name, without a path, then use the ** first element of the domain name, except skip over "www." if ** present and if there is a ".com" or ".org" or similar suffix. ** ** The string returned is obtained from fossil_malloc(). NULL might be ** returned if there is an error. */ char *url_to_repo_basename(const char *zUrl){ const char *zTail = 0; |
︙ | ︙ |
Changes to src/user.c.
︙ | ︙ | |||
397 398 399 400 401 402 403 | } if( g.localOpen ){ db_lset("default-user", g.argv[3]); }else{ db_set("default-user", g.argv[3], 0); } } | | < | 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 | } if( g.localOpen ){ db_lset("default-user", g.argv[3]); }else{ db_set("default-user", g.argv[3], 0); } } }else if(( n>=2 && strncmp(g.argv[2],"list",n)==0 ) || ( n>=2 && strncmp(g.argv[2],"ls",n)==0 )){ Stmt q; db_prepare(&q, "SELECT login, info FROM user ORDER BY login"); while( db_step(&q)==SQLITE_ROW ){ fossil_print("%-12s %s\n", db_column_text(&q, 0), db_column_text(&q, 1)); } db_finalize(&q); }else if( n>=2 && strncmp(g.argv[2],"password",2)==0 ){ |
︙ | ︙ | |||
665 666 667 668 669 670 671 | iVerify = atoi(g.argv[3]); prompt_for_password(g.argv[2], &answer, iVerify); fossil_print("[%s]\n", blob_str(&answer)); } /* ** WEBPAGE: access_log | < | | | | | | | > > | | | | | | | | | 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 | iVerify = atoi(g.argv[3]); prompt_for_password(g.argv[2], &answer, iVerify); fossil_print("[%s]\n", blob_str(&answer)); } /* ** WEBPAGE: access_log ** ** Show login attempts, including timestamp and IP address. ** Requires Admin privileges. ** ** Query parameters: ** ** y=N 1: success only. 2: failure only.
3: both (default: 3) ** n=N Number of entries to show (default: 200) ** o=N Skip this many entries (default: 0) */ void access_log_page(void){ int y = atoi(PD("y","3")); int n = atoi(PD("n","200")); int skip = atoi(PD("o","0")); const char *zUser = P("u"); Blob sql; Stmt q; int cnt = 0; int rc; int fLogEnabled; login_check_credentials(); if( !g.perm.Admin ){ login_needed(0); return; } create_accesslog_table(); if( P("delall") && P("delallbtn") ){ db_multi_exec("DELETE FROM accesslog"); cgi_redirectf("%R/access_log?y=%d&n=%d&o=%o", y, n, skip); return; } if( P("delanon") && P("delanonbtn") ){ db_multi_exec("DELETE FROM accesslog WHERE uname='anonymous'"); cgi_redirectf("%R/access_log?y=%d&n=%d&o=%o", y, n, skip); return; } if( P("delfail") && P("delfailbtn") ){ db_multi_exec("DELETE FROM accesslog WHERE NOT success"); cgi_redirectf("%R/access_log?y=%d&n=%d&o=%o", y, n, skip); return; } if( P("delold") && P("deloldbtn") ){ db_multi_exec("DELETE FROM accesslog WHERE rowid in" "(SELECT rowid FROM accesslog ORDER BY rowid DESC" " LIMIT -1 OFFSET 200)"); cgi_redirectf("%R/access_log?y=%d&n=%d", y, n); return; } style_header("Access Log"); style_submenu_element("Admin-Log", "admin_log"); style_submenu_element("Artifact-Log", "rcvfromlist"); style_submenu_element("Error-Log", "errorlog"); blob_zero(&sql); blob_append_sql(&sql, "SELECT uname, ipaddr, datetime(mtime,toLocal()), success" " FROM accesslog" ); if( zUser ){ blob_append_sql(&sql, " WHERE uname=%Q", zUser); n = 1000000000; skip = 0; }else if( y==1 ){ blob_append(&sql, " WHERE success", -1); }else if( y==2 ){ blob_append(&sql, " WHERE NOT success", -1); } blob_append_sql(&sql," ORDER BY rowid DESC LIMIT %d OFFSET %d", n+1, skip); if( skip ){ style_submenu_element("Newer", "%R/access_log?o=%d&n=%d&y=%d", skip>=n ? skip-n : 0, n, y); } rc = db_prepare_ignore_error(&q, "%s", blob_sql_text(&sql)); fLogEnabled = db_get_boolean("access-log", 0); @ <div align="center">Access logging is %s(fLogEnabled?"on":"off").
@ (Change this on the <a href="setup_settings">settings</a> page.)</div> @ <table border="1" cellpadding="5" class="sortable" align="center" \ @ data-column-types='Ttt' data-init-sort='1'> @ <thead><tr><th width="33%%">Date</th><th width="34%%">User</th> @ <th width="33%%">IP Address</th></tr></thead><tbody> while( rc==SQLITE_OK && db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); const char *zIP = db_column_text(&q, 1); const char *zDate = db_column_text(&q, 2); int bSuccess = db_column_int(&q, 3); cnt++; if( cnt>n ){ style_submenu_element("Older", "%R/access_log?o=%d&n=%d&y=%d", skip+n, n, y); break; } if( bSuccess ){ @ <tr> }else{ @ <tr bgcolor="#ffacc0"> } @ <td>%s(zDate)</td><td>%h(zName)</td><td>%h(zIP)</td></tr> } if( skip>0 || cnt>n ){ style_submenu_element("All", "%R/access_log?n=10000000"); } @ </tbody></table> db_finalize(&q); @ <hr> @ <form method="post" action="%R/access_log"> @ <label><input type="checkbox" name="delold"> @ Delete all but the most recent 200 entries</input></label> @ <input type="submit" name="deloldbtn" value="Delete"></input> @ </form> @ <form method="post" action="%R/access_log"> @ <label><input type="checkbox" name="delanon"> @ Delete all entries for user "anonymous"</input></label> @ <input type="submit" name="delanonbtn" value="Delete"></input> @ </form> @ <form method="post" action="%R/access_log"> @ <label><input type="checkbox" name="delfail"> @ Delete all failed login attempts</input></label> @ <input type="submit" name="delfailbtn" value="Delete"></input> @ </form> @ <form method="post" action="%R/access_log"> @ <label><input type="checkbox" name="delall"> @ Delete all entries</input></label> @ <input type="submit" name="delallbtn" value="Delete"></input> @ </form> style_table_sorter(); style_finish_page(); } |
Changes to src/vfile.c.
︙ | ︙ | |||
408 409 410 411 412 413 414 | "original", "output", }; int i, j, n; if( sqlite3_strglob("ci-comment-????????????.txt", zName)==0 ) return 1; for(; zName[0]!=0; zName++){ | < | | 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 | "original", "output", }; int i, j, n; if( sqlite3_strglob("ci-comment-????????????.txt", zName)==0 ) return 1; for(; zName[0]!=0; zName++){ if( zName[0]=='/' && sqlite3_strglob("/ci-comment-????????????.txt", zName)==0 ){ return 1; } if( zName[0]!='-' ) continue; for(i=0; i<count(azTemp); i++){ n = (int)strlen(azTemp[i]); if( memcmp(azTemp[i], zName+1, n) ) continue; if( zName[n+1]==0 ) return 1; |
︙ | ︙ | |||
753 754 755 756 757 758 759 | md5sum_step_text(" 0\n", -1); continue; } fseek(in, 0L, SEEK_END); sqlite3_snprintf(sizeof(zBuf), zBuf, " %ld\n", ftell(in)); fseek(in, 0L, SEEK_SET); md5sum_step_text(zBuf, -1); | | | 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 | md5sum_step_text(" 0\n", -1); continue; } fseek(in, 0L, SEEK_END); sqlite3_snprintf(sizeof(zBuf), zBuf, " %ld\n", ftell(in)); fseek(in, 0L, SEEK_SET); md5sum_step_text(zBuf, -1); /*printf("%s %s %s",md5sum_current_state(),zName,zBuf); fflush(stdout);*/ for(;;){ int n; n = fread(zBuf, 1, sizeof(zBuf), in); if( n<=0 ) break; md5sum_step_text(zBuf, n); } fclose(in); |
︙ | ︙ | |||
1040 1041 1042 1043 1044 1045 1046 | /* Add RID values for merged-in files */ db_multi_exec( "INSERT OR IGNORE INTO idMap(oldrid, newrid)" " SELECT vfile.mrid, blob.rid FROM vfile, blob" " WHERE blob.uuid=vfile.mhash;" ); | | | | 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 | /* Add RID values for merged-in files */ db_multi_exec( "INSERT OR IGNORE INTO idMap(oldrid, newrid)" " SELECT vfile.mrid, blob.rid FROM vfile, blob" " WHERE blob.uuid=vfile.mhash;" ); if( dryRun ){ Stmt q; db_prepare(&q, "SELECT oldrid, newrid, blob.uuid" " FROM idMap, blob WHERE blob.rid=idMap.newrid"); while( db_step(&q)==SQLITE_ROW ){ fossil_print("%8d -> %8d %.25s\n", db_column_int(&q,0), db_column_int(&q,1), db_column_text(&q,2)); } db_finalize(&q); } |
︙ | ︙ | |||
1068 1069 1070 1071 1072 1073 1074 | " UNION SELECT %d" ")" "SELECT group_concat(x,' ') FROM allrid" " WHERE x<>0 AND x NOT IN (SELECT oldrid FROM idMap);", oldVid ); if( zUnresolved[0] ){ | | < < < < < < < | 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 | " UNION SELECT %d" ")" "SELECT group_concat(x,' ') FROM allrid" " WHERE x<>0 AND x NOT IN (SELECT oldrid FROM idMap);", oldVid ); if( zUnresolved[0] ){ fossil_fatal("Unresolved RID values: %s\n", zUnresolved); } /* Make the changes to the VFILE and VMERGE tables */ if( !dryRun ){ db_multi_exec( "UPDATE vfile" " SET rid=(SELECT newrid FROM idMap WHERE oldrid=vfile.rid)" |
︙ | ︙ |
Changes to src/wiki.c.
︙ | ︙ | |||
83 84 85 86 87 88 89 | } int wiki_tagid2(const char *zPrefix, const char *zPageName){ return db_int(0, "SELECT tagid FROM tag WHERE tagname='wiki-%q/%q'", zPrefix, zPageName); } /* | | | 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 | } int wiki_tagid2(const char *zPrefix, const char *zPageName){ return db_int(0, "SELECT tagid FROM tag WHERE tagname='wiki-%q/%q'", zPrefix, zPageName); } /* ** Return the RID of the next or previous version of a wiki page. ** Return 0 if rid is the last/first version. */ int wiki_next(int tagid, double mtime){ return db_int(0, "SELECT srcid FROM tagxref" " WHERE tagid=%d AND mtime>%.16g" " ORDER BY mtime ASC LIMIT 1", |
︙ | ︙ | |||
202 203 204 205 206 207 208 | }else if( fossil_strcmp(zMimetype, "text/x-markdown")==0 ){ Blob tail = BLOB_INITIALIZER; markdown_to_html(pWiki, 0, &tail); safe_html(&tail); @ %s(blob_str(&tail)) blob_reset(&tail); }else if( fossil_strcmp(zMimetype, "text/x-pikchr")==0 ){ | < < < | < < | < | 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 | }else if( fossil_strcmp(zMimetype, "text/x-markdown")==0 ){ Blob tail = BLOB_INITIALIZER; markdown_to_html(pWiki, 0, &tail); safe_html(&tail); @ %s(blob_str(&tail)) blob_reset(&tail); }else if( fossil_strcmp(zMimetype, "text/x-pikchr")==0 ){ const char *zPikchr = blob_str(pWiki); int w, h; char *zOut = pikchr(zPikchr, "pikchr", 0, &w, &h); if( w>0 ){ @ <div class="pikchr-svg" style="max-width:%d(w)px"> @ %s(zOut) @ </div> }else{ @ <pre class='error'> @ %h(zOut) @ </pre> } free(zOut); }else{ |
︙ | ︙ | |||
417 418 419 420 421 422 423 | /* ** Figure out what type of wiki page we are dealing with. */ int wiki_page_type(const char *zPageName){ if( db_get_boolean("wiki-about",1)==0 ){ return WIKITYPE_NORMAL; }else | | | 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 | /* ** Figure out what type of wiki page we are dealing with. */ int wiki_page_type(const char *zPageName){ if( db_get_boolean("wiki-about",1)==0 ){ return WIKITYPE_NORMAL; }else if( sqlite3_strglob("checkin/*", zPageName)==0 && db_exists("SELECT 1 FROM blob WHERE uuid=%Q",zPageName+8) ){ return WIKITYPE_CHECKIN; }else if( sqlite3_strglob("branch/*", zPageName)==0 ){ return WIKITYPE_BRANCH; }else |
︙ | ︙ | |||
451 452 453 454 455 456 457 | /* ** Add an appropriate style_header() for either the /wiki or /wikiedit page ** for zPageName. zExtra is an empty string for /wiki but has the text ** "Edit: " for /wikiedit. ** ** If the page is /wiki and the page is one of the special types (check-in, | | | 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 | /* ** Add an appropriate style_header() for either the /wiki or /wikiedit page ** for zPageName. zExtra is an empty string for /wiki but has the text ** "Edit: " for /wikiedit. ** ** If the page is /wiki and the page is one of the special types (check-in, ** branch, or tag) and the "p" query parameter is omitted, then do a ** redirect to the display of the check-in, branch, or tag rather than ** continuing to the plain wiki display. */ static int wiki_page_header( int eType, /* Page type. Might be WIKITYPE_UNKNOWN */ const char *zPageName, /* Name of the page */ const char *zExtra /* Extra prefix text on the page header */
︙ | ︙ | |||
473 474 475 476 477 478 479 | } case WIKITYPE_CHECKIN: { zPageName += 8; if( zExtra[0]==0 && !P("p") ){ cgi_redirectf("%R/info/%s",zPageName); }else{ style_header("Notes About Check-in %S", zPageName); | | < | 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 | } case WIKITYPE_CHECKIN: { zPageName += 8; if( zExtra[0]==0 && !P("p") ){ cgi_redirectf("%R/info/%s",zPageName); }else{ style_header("Notes About Check-in %S", zPageName); style_submenu_element("Check-in Timeline","%R/timeline?f=%s", zPageName); style_submenu_element("Check-in Info","%R/info/%s", zPageName); } break; } case WIKITYPE_BRANCH: { zPageName += 7; if( zExtra[0]==0 && !P("p") ){ |
︙ | ︙ | |||
556 557 558 559 560 561 562 | int isPopup = P("popup")!=0; char *zBody = mprintf("%s","<i>Empty Page</i>"); int noSubmenu = P("nsm")!=0 || g.isHome; login_check_credentials(); if( !g.perm.RdWiki ){ login_needed(g.anon.RdWiki); return; } zPageName = P("name"); | < | 549 550 551 552 553 554 555 556 557 558 559 560 561 562 | int isPopup = P("popup")!=0; char *zBody = mprintf("%s","<i>Empty Page</i>"); int noSubmenu = P("nsm")!=0 || g.isHome; login_check_credentials(); if( !g.perm.RdWiki ){ login_needed(g.anon.RdWiki); return; } zPageName = P("name"); cgi_check_for_malice(); if( zPageName==0 ){ if( search_restrict(SRCH_WIKI)!=0 ){ wiki_srchpage(); }else{ wiki_helppage(); } |
︙ | ︙ | |||
749 750 751 752 753 754 755 | ** Note that the sandbox is a special case: it is a pseudo-page with ** no rid and the /wikiajax API does not allow anyone to actually save ** a sandbox page, but it is reported as writable here (with rid 0). */ static int wiki_ajax_can_write(const char *zPageName, int * pRid){ int rid = 0; const char * zErr = 0; | | | 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 | ** Note that the sandbox is a special case: it is a pseudo-page with ** no rid and the /wikiajax API does not allow anyone to actually save ** a sandbox page, but it is reported as writable here (with rid 0). */ static int wiki_ajax_can_write(const char *zPageName, int * pRid){ int rid = 0; const char * zErr = 0; if(pRid) *pRid = 0; if(!zPageName || !*zPageName || !wiki_name_is_wellformed((unsigned const char *)zPageName)){ zErr = "Invalid page name."; }else if(is_sandbox(zPageName)){ return 1; }else{ |
︙ | ︙ | |||
772 773 774 775 776 777 778 | }else if(!rid && !g.perm.NewWiki){ zErr = "Requires new-wiki permissions."; }else{ zErr = "Cannot happen! Please report this as a bug."; } } ajax_route_error(403, "%s", zErr); | | | 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 | }else if(!rid && !g.perm.NewWiki){ zErr = "Requires new-wiki permissions."; }else{ zErr = "Cannot happen! Please report this as a bug."; } } ajax_route_error(403, "%s", zErr); return 0; } /* ** Emits an array of attachment info records for the given wiki page ** artifact. ** |
︙ | ︙ | |||
1018 1019 1020 1021 1022 1023 1024 | ** ** Responds with JSON. On error, an object in the form documented by ** ajax_route_error(). On success, an object in the form documented ** for wiki_ajax_emit_page_object(). */ static void wiki_ajax_route_fetch(void){ const char * zPageName = P("page"); | | | 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 | ** ** Responds with JSON. On error, an object in the form documented by ** ajax_route_error(). On success, an object in the form documented ** for wiki_ajax_emit_page_object(). */ static void wiki_ajax_route_fetch(void){ const char * zPageName = P("page"); if( zPageName==0 || zPageName[0]==0 ){ ajax_route_error(400,"Missing page name."); return; } cgi_set_content_type("application/json"); wiki_ajax_emit_page_object(zPageName, 1); } |
︙ | ︙ | |||
1209 1210 1211 1212 1213 1214 1215 | } /* ** WEBPAGE: wikiajax hidden ** ** An internal dispatcher for wiki AJAX operations. Not for direct ** client use. All routes defined by this interface are app-internal, | | | 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 | } /* ** WEBPAGE: wikiajax hidden ** ** An internal dispatcher for wiki AJAX operations. Not for direct ** client use. All routes defined by this interface are app-internal, ** subject to change */ void wiki_ajax_page(void){ const char * zName = P("name"); AjaxRoute routeName = {0,0,0,0}; const AjaxRoute * pRoute = 0; const AjaxRoute routes[] = { /* Keep these sorted by zName (for bsearch()) */ |
︙ | ︙ | |||
1254 1255 1256 1257 1258 1259 1260 | "Referer headers is enabled for XHR " "connections)."); return; } pRoute->xCallback(); } < < < < < < < < < < < < < < < < | 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 | "Referer headers is enabled for XHR " "connections)."); return; } pRoute->xCallback(); } /* ** WEBPAGE: wikiedit ** URL: /wikiedit?name=PAGENAME ** ** The main front-end for the Ajax-based wiki editor app. Passing ** in the name of an unknown page will trigger the creation ** of a new page (which is not actually created in the database
︙ | ︙ | |||
1333 1334 1335 1336 1337 1338 1339 | "Status messages will go here.</div>\n" /* will be moved into the tab container via JS */); CX("<div id='wikiedit-edit-status'>" "<span class='name'></span>" "<span class='links'></span>" "</div>"); | | | 1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 1325 1326 1327 1328 1329 1330 1331 1332 1333 1334 1335 1336 1337 1338 1339 | "Status messages will go here.</div>\n" /* will be moved into the tab container via JS */); CX("<div id='wikiedit-edit-status'>" "<span class='name'></span>" "<span class='links'></span>" "</div>"); /* Main tab container... */ CX("<div id='wikiedit-tabs' class='tab-container'>Loading...</div>"); /* The .hidden class on the following tab elements is to help lessen the FOUC effect of the tabs before JS re-assembles them. */ /******* Page list *******/ { CX("<div id='wikiedit-tab-pages' " "data-tab-parent='wikiedit-tabs' " "data-tab-label='Wiki Page List' " "class='hidden'" ">"); CX("<div>Loading wiki pages list...</div>"); CX("</div>"/*#wikiedit-tab-pages*/); } /******* Content tab *******/ { CX("<div id='wikiedit-tab-content' " "data-tab-parent='wikiedit-tabs' " "data-tab-label='Editor' " "class='hidden'" ">");
︙ | ︙ | |||
1393 1394 1395 1396 1397 1398 1399 | "<div class='help-buttonlet'>" "Reload the file from the server, discarding " "any local edits. To help avoid accidental loss of " "edits, it requires confirmation (a second click) within " "a few seconds or it will not reload." "</div>" "</div>"); | | | 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380 1381 1382 1383 | "<div class='help-buttonlet'>" "Reload the file from the server, discarding " "any local edits. To help avoid accidental loss of " "edits, it requires confirmation (a second click) within " "a few seconds or it will not reload." "</div>" "</div>"); CX("</div>"); CX("<div class='flex-container flex-column stretch'>"); CX("<textarea name='content' id='wikiedit-content-editor' " "class='wikiedit' rows='25'>"); CX("</textarea>"); CX("</div>"/*textarea wrapper*/); CX("</div>"/*#tab-file-content*/); |
︙ | ︙ | |||
1911 1912 1913 1914 1915 1916 1917 | ** wsort Sort names by this label ** wrid rid of the most recent version of the page ** wmtime time most recent version was created ** wcnt Number of versions of this wiki page ** ** The wrid value is zero for deleted wiki pages. */ | | | 1887 1888 1889 1890 1891 1892 1893 1894 1895 1896 1897 1898 1899 1900 1901 | ** wsort Sort names by this label ** wrid rid of the most recent version of the page ** wmtime time most recent version was created ** wcnt Number of versions of this wiki page ** ** The wrid value is zero for deleted wiki pages. */ static const char listAllWikiPages[] = @ SELECT @ substr(tag.tagname, 6) AS wname, @ lower(substr(tag.tagname, 6)) AS sortname, @ tagxref.value+0 AS wrid, @ max(tagxref.mtime) AS wmtime, @ count(*) AS wcnt @ FROM |
︙ | ︙ | |||
2139 2140 2141 2142 2143 2144 2145 | if( !rid ) { /* ** At present, technote tags are prefixed with 'sym-', which shouldn't ** be the case, so we check for both with and without the prefix until ** such time as tags have the errant prefix dropped. */ rid = db_int(0, "SELECT e.objid" | | | | | | | | | 2115 2116 2117 2118 2119 2120 2121 2122 2123 2124 2125 2126 2127 2128 2129 2130 2131 2132 2133 2134 2135 2136 2137 2138 2139 2140 2141 | if( !rid ) { /* ** At present, technote tags are prefixed with 'sym-', which shouldn't ** be the case, so we check for both with and without the prefix until ** such time as tags have the errant prefix dropped. */ rid = db_int(0, "SELECT e.objid" " FROM event e, tag t, tagxref tx" " WHERE e.type='e'" " AND e.tagid IS NOT NULL" " AND e.objid IN" " (SELECT rid FROM tagxref" " WHERE tagid=(SELECT tagid FROM tag" " WHERE tagname GLOB '%q'))" " OR e.objid IN" " (SELECT rid FROM tagxref" " WHERE tagid=(SELECT tagid FROM tag" " WHERE tagname GLOB 'sym-%q'))" " ORDER BY e.mtime DESC LIMIT 1", zETime, zETime); } return rid; } /* ** COMMAND: wiki* ** |
︙ | ︙ | |||
2488 2489 2490 2491 2492 2493 2494 | } while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); const int wrid = db_column_int(&q, 2); if(!showAll && !wrid){ continue; } | | | 2464 2465 2466 2467 2468 2469 2470 2471 2472 2473 2474 2475 2476 2477 2478 | } while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); const int wrid = db_column_int(&q, 2); if(!showAll && !wrid){ continue; } if( !showCkBr && (sqlite3_strglob("checkin/*", zName)==0 || sqlite3_strglob("branch/*", zName)==0) ){ continue; } if( showIds ){ const char *zUuid = db_column_text(&q, 1); fossil_print("%s ",zUuid); |
︙ | ︙ |
Changes to src/wikiformat.c.
︙ | ︙ | |||
458 459 460 461 462 463 464 | int state; /* Flags that govern rendering */ unsigned renderFlags; /* Flags from the client */ int wikiList; /* Current wiki list type */ int inVerbatim; /* True in <verbatim> mode */ int preVerbState; /* Value of state prior to verbatim */ int wantAutoParagraph; /* True if a <p> is desired */ int inAutoParagraph; /* True if within an automatic paragraph */ < | 458 459 460 461 462 463 464 465 466 467 468 469 470 471 | int state; /* Flags that govern rendering */ unsigned renderFlags; /* Flags from the client */ int wikiList; /* Current wiki list type */ int inVerbatim; /* True in <verbatim> mode */ int preVerbState; /* Value of state prior to verbatim */ int wantAutoParagraph; /* True if a <p> is desired */ int inAutoParagraph; /* True if within an automatic paragraph */ const char *zVerbatimId; /* The id= attribute of <verbatim> */ int nStack; /* Number of elements on the stack */ int nAlloc; /* Space allocated for aStack */ struct sStack { short iCode; /* Markup code */ short allowWiki; /* ALLOW_WIKI if wiki allowed before tag */ const char *zId; /* ID attribute or NULL */
︙ | ︙ | |||
1780 1781 1782 1783 1784 1785 1786 | }else if( markup.iType==MUTYPE_TD ){ if( backupToType(p, MUTYPE_TABLE|MUTYPE_TR) ){ if( stackTopType(p)==MUTYPE_TABLE ){ pushStack(p, MARKUP_TR); blob_append_string(p->pOut, "<tr>"); } | < | 1779 1780 1781 1782 1783 1784 1785 1786 1787 1788 1789 1790 1791 1792 | }else if( markup.iType==MUTYPE_TD ){ if( backupToType(p, MUTYPE_TABLE|MUTYPE_TR) ){ if( stackTopType(p)==MUTYPE_TABLE ){ pushStack(p, MARKUP_TR); blob_append_string(p->pOut, "<tr>"); } pushStack(p, markup.iCode); renderMarkup(p->pOut, &markup); } }else if( markup.iType==MUTYPE_HYPERLINK ){ if( !isButtonHyperlink(p, &markup, z, &n) ){ popStackToTag(p, markup.iCode); |
︙ | ︙ | |||
1873 1874 1875 1876 1877 1878 1879 | ** Options: ** --buttons Set the WIKI_BUTTONS flag ** --htmlonly Set the WIKI_HTMLONLY flag ** --linksonly Set the WIKI_LINKSONLY flag ** --nobadlinks Set the WIKI_NOBADLINKS flag ** --inline Set the WIKI_INLINE flag ** --noblock Set the WIKI_NOBLOCK flag < < < < < < < < | 1871 1872 1873 1874 1875 1876 1877 1878 1879 1880 1881 1882 1883 1884 1885 1886 1887 1888 1889 1890 1891 1892 1893 1894 1895 1896 1897 1898 1899 1900 1901 1902 1903 1904 1905 1906 1907 1908 1909 1910 1911 1912 1913 1914 1915 1916 1917 1918 1919 1920 1921 | ** Options: ** --buttons Set the WIKI_BUTTONS flag ** --htmlonly Set the WIKI_HTMLONLY flag ** --linksonly Set the WIKI_LINKSONLY flag ** --nobadlinks Set the WIKI_NOBADLINKS flag ** --inline Set the WIKI_INLINE flag ** --noblock Set the WIKI_NOBLOCK flag */ void test_wiki_render(void){ Blob in, out; int flags = 0; if( find_option("buttons",0,0)!=0 ) flags |= WIKI_BUTTONS; if( find_option("htmlonly",0,0)!=0 ) flags |= WIKI_HTMLONLY; if( find_option("linksonly",0,0)!=0 ) flags |= WIKI_LINKSONLY; if( find_option("nobadlinks",0,0)!=0 ) flags |= WIKI_NOBADLINKS; if( find_option("inline",0,0)!=0 ) flags |= WIKI_INLINE; if( find_option("noblock",0,0)!=0 ) flags |= WIKI_NOBLOCK; db_find_and_open_repository(OPEN_OK_NOT_FOUND|OPEN_SUBSTITUTE,0); verify_all_options(); if( g.argc!=3 ) usage("FILE"); blob_zero(&out); blob_read_from_file(&in, g.argv[2], ExtFILE); wiki_convert(&in, &out, flags); blob_write_to_file(&out, "-"); } /* ** COMMAND: test-markdown-render ** ** Usage: %fossil test-markdown-render FILE ... ** ** Render markdown in FILE as HTML on stdout. **
** Options: ** ** --safe Restrict the output to use only "safe" HTML ** --lint-footnotes Print stats for footnotes-related issues */ void test_markdown_render(void){ Blob in, out; int i; int bSafe = 0, bFnLint = 0; db_find_and_open_repository(OPEN_OK_NOT_FOUND|OPEN_SUBSTITUTE,0); bSafe = find_option("safe",0,0)!=0; bFnLint = find_option("lint-footnotes",0,0)!=0; verify_all_options(); for(i=2; i<g.argc; i++){ blob_zero(&out); blob_read_from_file(&in, g.argv[i], ExtFILE); if( g.argc>3 ){ fossil_print("<!------ %h ------->\n", g.argv[i]); } |
︙ | ︙ | |||
2227 2228 2229 2230 2231 2232 2233 | iMatchCnt = 1; }else if( n==1 && zStart[0]=='=' && iMatchCnt==1 ){ iMatchCnt = 2; }else if( iMatchCnt==2 ){ if( (zStart[0]=='"' || zStart[0]=='\'') && zStart[n-1]==zStart[0] ){ zStart++; n -= 2; | | | 2217 2218 2219 2220 2221 2222 2223 2224 2225 2226 2227 2228 2229 2230 2231 | iMatchCnt = 1; }else if( n==1 && zStart[0]=='=' && iMatchCnt==1 ){ iMatchCnt = 2; }else if( iMatchCnt==2 ){ if( (zStart[0]=='"' || zStart[0]=='\'') && zStart[n-1]==zStart[0] ){ zStart++; n -= 2; } *pLen = n; return zStart; }else{ iMatchCnt = 0; } } return 0; |
︙ | ︙ |
Changes to src/winhttp.c.
︙ | ︙ | |||
666 667 668 669 670 671 672 | fossil_panic("unable to get path to the temporary directory."); } /* Use a subdirectory for temp files (can then be excluded from virus scan) */ zTempSubDirPath = mprintf("%s%s\\",fossil_path_to_utf8(zTmpPath),zTempSubDir); if ( !file_mkdir(zTempSubDirPath, ExtFILE, 0) || file_isdir(zTempSubDirPath, ExtFILE)==1 ){ wcscpy(zTmpPath, fossil_utf8_to_path(zTempSubDirPath, 1)); | | | 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 | fossil_panic("unable to get path to the temporary directory."); } /* Use a subdirectory for temp files (can then be excluded from virus scan) */ zTempSubDirPath = mprintf("%s%s\\",fossil_path_to_utf8(zTmpPath),zTempSubDir); if ( !file_mkdir(zTempSubDirPath, ExtFILE, 0) || file_isdir(zTempSubDirPath, ExtFILE)==1 ){ wcscpy(zTmpPath, fossil_utf8_to_path(zTempSubDirPath, 1)); } if( g.fHttpTrace ){ zTempPrefix = mprintf("httptrace"); }else{ zTempPrefix = mprintf("%sfossil_server_P%d", fossil_unicode_to_utf8(zTmpPath), iPort); } fossil_print("Temporary files: %s*\n", zTempPrefix); |
︙ | ︙ | |||
1370 1371 1372 1373 1374 1375 1376 | if( !hScm ) winhttp_fatal("start", zSvcName, win32_get_last_errmsg()); hSvc = OpenServiceW(hScm, fossil_utf8_to_unicode(zSvcName), SERVICE_ALL_ACCESS); if( !hSvc ) winhttp_fatal("start", zSvcName, win32_get_last_errmsg()); QueryServiceStatus(hSvc, &sstat); if( sstat.dwCurrentState!=SERVICE_RUNNING ){ fossil_print("Starting service '%s'", zSvcName); | | | | | | | | 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380 1381 1382 1383 1384 1385 1386 1387 1388 1389 | if( !hScm ) winhttp_fatal("start", zSvcName, win32_get_last_errmsg()); hSvc = OpenServiceW(hScm, fossil_utf8_to_unicode(zSvcName), SERVICE_ALL_ACCESS); if( !hSvc ) winhttp_fatal("start", zSvcName, win32_get_last_errmsg()); QueryServiceStatus(hSvc, &sstat); if( sstat.dwCurrentState!=SERVICE_RUNNING ){ fossil_print("Starting service '%s'", zSvcName); if( sstat.dwCurrentState!=SERVICE_START_PENDING ){ if( !StartServiceW(hSvc, 0, NULL) ){ winhttp_fatal("start", zSvcName, win32_get_last_errmsg()); } QueryServiceStatus(hSvc, &sstat); } while( sstat.dwCurrentState==SERVICE_START_PENDING || sstat.dwCurrentState==SERVICE_STOPPED ){ Sleep(100); fossil_print("."); QueryServiceStatus(hSvc, &sstat); } if( sstat.dwCurrentState==SERVICE_RUNNING ){ |
︙ | ︙ |
Changes to src/xfer.c.
︙ | ︙ | |||
353 354 355 356 357 358 359 | } }else{ nullContent = 1; } /* The isWriter flag must be true in order to land the new file */ if( !isWriter ){ | | | 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 | } }else{ nullContent = 1; } /* The isWriter flag must be true in order to land the new file */ if( !isWriter ){ blob_appendf(&pXfer->err, "Write permissions for unversioned files missing"); goto end_accept_unversioned_file; } /* Make sure we have a valid g.rcvid marker */ content_rcvid_init(0); /* Check to see if current content really should be overwritten. Ideally, |
︙ | ︙ | |||
1187 1188 1189 1190 1191 1192 1193 | /* ** The CGI/HTTP preprocessor always redirects requests with a content-type ** of application/x-fossil or application/x-fossil-debug to this page, ** regardless of what path was specified in the HTTP header. This allows ** clone clients to specify a URL that omits default pathnames, such ** as "http://fossil-scm.org/" instead of "http://fossil-scm.org/index.cgi". ** | | | 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 | /* ** The CGI/HTTP preprocessor always redirects requests with a content-type ** of application/x-fossil or application/x-fossil-debug to this page, ** regardless of what path was specified in the HTTP header. This allows ** clone clients to specify a URL that omits default pathnames, such ** as "http://fossil-scm.org/" instead of "http://fossil-scm.org/index.cgi". ** ** WEBPAGE: xfer raw-content ** ** This is the transfer handler on the server side. The transfer ** message has been uncompressed and placed in the g.cgiIn blob. ** Process this message and form an appropriate reply. */ void page_xfer(void){ int isPull = 0; |
︙ | ︙ | |||
1582 1583 1584 1585 1586 1587 1588 | xfer.nextIsPrivate = 1; } }else /* pragma NAME VALUE... ** | | | 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 | xfer.nextIsPrivate = 1; } }else /* pragma NAME VALUE... ** ** The client issues pragmas to try to influence the behavior of the ** server. These are requests only. Unknown pragmas are silently ** ignored. */ if( blob_eq(&xfer.aToken[0], "pragma") && xfer.nToken>=2 ){ /* pragma send-private **
︙ | ︙ | |||
1832 1833 1834 1835 1836 1837 1838 | const char *zArg = db_column_text(&q, 1); i64 iMtime = db_column_int64(&q, 2); memset(&x, 0, sizeof(x)); url_parse_local(zUrl, URL_OMIT_USER, &x); if( x.name!=0 && sqlite3_strlike("%localhost%", x.name, 0)!=0 ){ @ pragma link %F(x.canonical) %F(zArg) %lld(iMtime) } | | | 1832 1833 1834 1835 1836 1837 1838 1839 1840 1841 1842 1843 1844 1845 1846 | const char *zArg = db_column_text(&q, 1); i64 iMtime = db_column_int64(&q, 2); memset(&x, 0, sizeof(x)); url_parse_local(zUrl, URL_OMIT_USER, &x); if( x.name!=0 && sqlite3_strlike("%localhost%", x.name, 0)!=0 ){ @ pragma link %F(x.canonical) %F(zArg) %lld(iMtime) } url_unparse(&x); } db_finalize(&q); } /* Send the server timestamp last, in case prior processing happened ** to use up a significant fraction of our time window. */ |
︙ | ︙ | |||
1857 1858 1859 1860 1861 1862 1863 | ** ** Usage: %fossil test-xfer ?OPTIONS? XFERFILE ** ** Pass the sync-protocol input file XFERFILE into the server-side sync ** protocol handler. Generate a reply on standard output. ** ** This command was original created to help debug the server side of | | | 1857 1858 1859 1860 1861 1862 1863 1864 1865 1866 1867 1868 1869 1870 1871 | ** ** Usage: %fossil test-xfer ?OPTIONS? XFERFILE ** ** Pass the sync-protocol input file XFERFILE into the server-side sync ** protocol handler. Generate a reply on standard output. ** ** This command was original created to help debug the server side of ** sync messages. The XFERFILE is the uncompressed content of an ** "xfer" HTTP request from client to server. This command interprets ** that message and generates the content of an HTTP reply (without any ** encoding and without the HTTP reply headers) and writes that reply ** on standard output. ** ** One possible usages scenario is to capture some XFERFILE examples ** using a command like: |
︙ | ︙ | |||
1925 1926 1927 1928 1929 1930 1931 | #define SYNC_UV_TRACE 0x00400 /* Describe UV activities */ #define SYNC_UV_DRYRUN 0x00800 /* Do not actually exchange files */ #define SYNC_IFABLE 0x01000 /* Inability to sync is not fatal */ #define SYNC_CKIN_LOCK 0x02000 /* Lock the current check-in */ #define SYNC_NOHTTPCOMPRESS 0x04000 /* Do not compression HTTP messages */ #define SYNC_ALLURL 0x08000 /* The --all flag - sync to all URLs */ #define SYNC_SHARE_LINKS 0x10000 /* Request alternate repo links */ | < | 1925 1926 1927 1928 1929 1930 1931 1932 1933 1934 1935 1936 1937 1938 | #define SYNC_UV_TRACE 0x00400 /* Describe UV activities */ #define SYNC_UV_DRYRUN 0x00800 /* Do not actually exchange files */ #define SYNC_IFABLE 0x01000 /* Inability to sync is not fatal */ #define SYNC_CKIN_LOCK 0x02000 /* Lock the current check-in */ #define SYNC_NOHTTPCOMPRESS 0x04000 /* Do not compression HTTP messages */ #define SYNC_ALLURL 0x08000 /* The --all flag - sync to all URLs */ #define SYNC_SHARE_LINKS 0x10000 /* Request alternate repo links */ #endif /* ** Floating-point absolute value */ static double fossil_fabs(double x){ return x>0.0 ? x : -x; |
︙ | ︙ | |||
1947 1948 1949 1950 1951 1952 1953 | ** are pulled if pullFlag is true. A full sync occurs if both are ** true. */ int client_sync( unsigned syncFlags, /* Mask of SYNC_* flags */ unsigned configRcvMask, /* Receive these configuration items */ unsigned configSendMask, /* Send these configuration items */ | | < | 1946 1947 1948 1949 1950 1951 1952 1953 1954 1955 1956 1957 1958 1959 1960 | ** are pulled if pullFlag is true. A full sync occurs if both are ** true. */ int client_sync( unsigned syncFlags, /* Mask of SYNC_* flags */ unsigned configRcvMask, /* Receive these configuration items */ unsigned configSendMask, /* Send these configuration items */ const char *zAltPCode /* Alternative project code (usually NULL) */ ){ int go = 1; /* Loop until zero */ int nCardSent = 0; /* Number of cards sent */ int nCardRcvd = 0; /* Number of cards received */ int nCycle = 0; /* Number of round trips to the server */ int size; /* Size of a config value or uvfile */ int origConfigRcvMask; /* Original value of configRcvMask */ |
︙ | ︙ | |||
1988 1989 1990 1991 1992 1993 1994 | int nUvFileRcvd = 0; /* Number of uvfile cards received on this cycle */ sqlite3_int64 mtime; /* Modification time on a UV file */ int autopushFailed = 0; /* Autopush following commit failed if true */ const char *zCkinLock; /* Name of check-in to lock. NULL for none */ const char *zClientId; /* A unique identifier for this check-out */ unsigned int mHttpFlags;/* Flags for the http_exchange() subsystem */ | < | 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 | int nUvFileRcvd = 0; /* Number of uvfile cards received on this cycle */ sqlite3_int64 mtime; /* Modification time on a UV file */ int autopushFailed = 0; /* Autopush following commit failed if true */ const char *zCkinLock; /* Name of check-in to lock. NULL for none */ const char *zClientId; /* A unique identifier for this check-out */ unsigned int mHttpFlags;/* Flags for the http_exchange() subsystem */ if( db_get_boolean("dont-push", 0) ) syncFlags &= ~SYNC_PUSH; if( (syncFlags & (SYNC_PUSH|SYNC_PULL|SYNC_CLONE|SYNC_UNVERSIONED))==0 && configRcvMask==0 && configSendMask==0 ){ return 0; /* Nothing to do */ } |
︙ | ︙ | |||
2261 2262 2263 2264 2265 2266 2267 | ** messages unique so that the login-card nonce will always ** be unique. */ zRandomness = db_text(0, "SELECT hex(randomblob(20))"); blob_appendf(&send, "# %s\n", zRandomness); free(zRandomness); | < < < < < | 2258 2259 2260 2261 2262 2263 2264 2265 2266 2267 2268 2269 2270 2271 2272 2273 2274 2275 2276 2277 2278 2279 2280 2281 2282 2283 2284 2285 | ** messages unique so that the login-card nonce will always ** be unique. */ zRandomness = db_text(0, "SELECT hex(randomblob(20))"); blob_appendf(&send, "# %s\n", zRandomness); free(zRandomness); if( syncFlags & SYNC_VERBOSE ){ fossil_print("waiting for server..."); } fflush(stdout); /* Exchange messages with the server */ if( (syncFlags & SYNC_CLONE)!=0 && nCycle==0 ){ /* Do not send a login card on the first round-trip of a clone */ mHttpFlags = 0; }else{ mHttpFlags = HTTP_USE_LOGIN; } if( syncFlags & SYNC_NOHTTPCOMPRESS ){ mHttpFlags |= HTTP_NOCOMPRESS; } /* Do the round-trip to the server */ if( http_exchange(&send, &recv, mHttpFlags, MAX_REDIRECTS, 0) ){ nErr++; go = 2; break; }
︙ | ︙ | |||
2528 2529 2530 2531 2532 2533 2534 | if( iStatus>=4 && uvPullOnly==1 ){ fossil_warning( "Warning: uv-pull-only \n" " Unable to push unversioned content because you lack\n" " sufficient permission on the server\n" ); uvPullOnly = 2; | | | 2520 2521 2522 2523 2524 2525 2526 2527 2528 2529 2530 2531 2532 2533 2534 | if( iStatus>=4 && uvPullOnly==1 ){ fossil_warning( "Warning: uv-pull-only \n" " Unable to push unversioned content because you lack\n" " sufficient permission on the server\n" ); uvPullOnly = 2; } if( iStatus<=3 || uvPullOnly ){ db_multi_exec("DELETE FROM uv_tosend WHERE name=%Q", zName); }else if( iStatus==4 ){ db_multi_exec("UPDATE uv_tosend SET mtimeOnly=1 WHERE name=%Q",zName); }else if( iStatus==5 ){ db_multi_exec("REPLACE INTO uv_tosend(name,mtimeOnly) VALUES(%Q,0)", zName); |
︙ | ︙ | |||
2645 2646 2647 2648 2649 2650 2651 | ** The server can send pragmas to try to convey meta-information to ** the client. These are informational only. Unknown pragmas are ** silently ignored. */ if( blob_eq(&xfer.aToken[0], "pragma") && xfer.nToken>=2 ){ /* pragma server-version VERSION ?DATE? ?TIME? ** | | | 2637 2638 2639 2640 2641 2642 2643 2644 2645 2646 2647 2648 2649 2650 2651 2652 2653 2654 2655 2656 2657 2658 2659 2660 2661 2662 2663 2664 2665 2666 | ** The server can send pragmas to try to convey meta-information to ** the client. These are informational only. Unknown pragmas are ** silently ignored. */ if( blob_eq(&xfer.aToken[0], "pragma") && xfer.nToken>=2 ){ /* pragma server-version VERSION ?DATE? ?TIME? ** ** The server announces to the client what version of Fossil it ** is running. The DATE and TIME are a pure numeric ISO8601 time ** for the specific check-in of the client. */ if( xfer.nToken>=3 && blob_eq(&xfer.aToken[1], "server-version") ){ xfer.remoteVersion = atoi(blob_str(&xfer.aToken[2])); if( xfer.nToken>=5 ){ xfer.remoteDate = atoi(blob_str(&xfer.aToken[3])); xfer.remoteTime = atoi(blob_str(&xfer.aToken[4])); } } /* pragma uv-pull-only ** pragma uv-push-ok ** ** If the server is unwilling to accept new unversioned content (because ** this client lacks the necessary permissions) then it sends a ** "uv-pull-only" pragma so that the client will know not to waste ** bandwidth trying to upload unversioned content. If the server ** does accept new unversioned content, it sends "uv-push-ok". */ else if( syncFlags & SYNC_UNVERSIONED ){ if( blob_eq(&xfer.aToken[1], "uv-pull-only") ){
︙ | ︙ | |||
2852 2853 2854 2855 2856 2857 2858 | }else{ manifest_crosslink_end(MC_PERMIT_HOOKS); content_enable_dephantomize(1); } db_end_transaction(0); }; transport_stats(&nSent, &nRcvd, 1); | < | 2844 2845 2846 2847 2848 2849 2850 2851 2852 2853 2854 2855 2856 2857 | }else{ manifest_crosslink_end(MC_PERMIT_HOOKS); content_enable_dephantomize(1); } db_end_transaction(0); }; transport_stats(&nSent, &nRcvd, 1); if( (rSkew*24.0*3600.0) > 10.0 ){ fossil_warning("*** time skew *** server is fast by %s", db_timespan_name(rSkew)); g.clockSkewSeen = 1; }else if( rSkew*24.0*3600.0 < -10.0 ){ fossil_warning("*** time skew *** server is slow by %s", db_timespan_name(-rSkew)); |
︙ | ︙ | |||
2883 2884 2885 2886 2887 2888 2889 | zOpType, nSent, nRcvd, g.zIpAddr); } } if( syncFlags & SYNC_VERBOSE ){ fossil_print( "Uncompressed payload sent: %lld received: %lld\n", nUncSent, nUncRcvd); } | < < | 2874 2875 2876 2877 2878 2879 2880 2881 2882 2883 2884 2885 2886 2887 | zOpType, nSent, nRcvd, g.zIpAddr); } } if( syncFlags & SYNC_VERBOSE ){ fossil_print( "Uncompressed payload sent: %lld received: %lld\n", nUncSent, nUncRcvd); } transport_close(&g.url); transport_global_shutdown(&g.url); if( nErr && go==2 ){ db_multi_exec("DROP TABLE onremote; DROP TABLE unk;"); manifest_crosslink_end(MC_PERMIT_HOOKS); content_enable_dephantomize(1); db_end_transaction(0); |
︙ | ︙ |
Changes to src/xfersetup.c.
︙ | ︙ | |||
78 79 80 81 82 83 84 | @ <input type="submit" name="sync" value="%h(zButton)"> @ </div></form> @ if( P("sync") ){ user_select(); url_enable_proxy(0); @ <pre class="xfersetup"> | | | 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 | @ <input type="submit" name="sync" value="%h(zButton)"> @ </div></form> @ if( P("sync") ){ user_select(); url_enable_proxy(0); @ <pre class="xfersetup"> client_sync(syncFlags, 0, 0, 0); @ </pre> } } style_finish_page(); } |
︙ | ︙ |
Changes to src/zip.c.
︙ | ︙ | |||
136 137 138 139 140 141 142 | return 512; } static int archiveDeviceCharacteristics(sqlite3_file *pFile){ return 0; } static int archiveOpen( | | | 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 | return 512; } static int archiveDeviceCharacteristics(sqlite3_file *pFile){ return 0; } static int archiveOpen( sqlite3_vfs *pVfs, const char *zName, sqlite3_file *pFile, int flags, int *pOutFlags ){ static struct sqlite3_io_methods methods = { 1, /* iVersion */ archiveClose, archiveRead, archiveWrite, |
︙ | ︙ | |||
245 246 247 248 249 250 251 | ** Append a single file to a growing ZIP archive. ** ** pFile is the file to be appended. zName is the name ** that the file should be saved as. */ static void zip_add_file_to_zip( Archive *p, | | | | 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 | ** Append a single file to a growing ZIP archive. ** ** pFile is the file to be appended. zName is the name ** that the file should be saved as. */ static void zip_add_file_to_zip( Archive *p, const char *zName, const Blob *pFile, int mPerm ){ z_stream stream; int nameLen; int toOut = 0; int iStart; unsigned long iCRC = 0; |
︙ | ︙ | |||
372 373 374 375 376 377 378 | put16(&zExTime[2], 5); blob_append(&toc, zExTime, 9); nEntry++; } static void zip_add_file_to_sqlar( Archive *p, | | | | | | | | 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 | put16(&zExTime[2], 5); blob_append(&toc, zExTime, 9); nEntry++; } static void zip_add_file_to_sqlar( Archive *p, const char *zName, const Blob *pFile, int mPerm ){ int nName = (int)strlen(zName); if( p->db==0 ){ assert( p->vfs.zName==0 ); p->vfs.zName = (const char*)mprintf("archivevfs%p", (void*)p); p->vfs.iVersion = 1; p->vfs.szOsFile = sizeof(ArchiveFile); p->vfs.mxPathname = 512; p->vfs.pAppData = (void*)p->pBlob; p->vfs.xOpen = archiveOpen; p->vfs.xDelete = archiveDelete; p->vfs.xAccess = archiveAccess; p->vfs.xFullPathname = archiveFullPathname; p->vfs.xRandomness = archiveRandomness; p->vfs.xSleep = archiveSleep; p->vfs.xCurrentTime = archiveCurrentTime; p->vfs.xGetLastError = archiveGetLastError; sqlite3_vfs_register(&p->vfs, 0); sqlite3_open_v2("file:xyz.db", &p->db, SQLITE_OPEN_CREATE|SQLITE_OPEN_READWRITE, p->vfs.zName ); assert( p->db ); blob_zero(&p->tmp); sqlite3_exec(p->db, "PRAGMA page_size=512;" "PRAGMA journal_mode = off;" "PRAGMA cache_spill = off;" "BEGIN;" "CREATE TABLE sqlar(" "name TEXT PRIMARY KEY, -- name of the file\n" "mode INT, -- access permissions\n" "mtime INT, -- last modification time\n" "sz INT, -- original file size\n" "data BLOB -- compressed content\n" ");", 0, 0, 0 ); sqlite3_prepare(p->db, "INSERT INTO sqlar VALUES(?, ?, ?, ?, ?)", -1, &p->pInsert, 0 ); assert( p->pInsert ); sqlite3_bind_int64(p->pInsert, 3, unixTime); blob_zero(p->pBlob); } |
︙ | ︙ | |||
435 436 437 438 439 440 441 | sqlite3_bind_int(p->pInsert, 4, 0); sqlite3_bind_null(p->pInsert, 5); }else{ sqlite3_bind_text(p->pInsert, 1, zName, nName, SQLITE_STATIC); if( mPerm==PERM_LNK ){ sqlite3_bind_int(p->pInsert, 2, 0120755); sqlite3_bind_int(p->pInsert, 4, -1); | | | | | | | 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 | sqlite3_bind_int(p->pInsert, 4, 0); sqlite3_bind_null(p->pInsert, 5); }else{ sqlite3_bind_text(p->pInsert, 1, zName, nName, SQLITE_STATIC); if( mPerm==PERM_LNK ){ sqlite3_bind_int(p->pInsert, 2, 0120755); sqlite3_bind_int(p->pInsert, 4, -1); sqlite3_bind_text(p->pInsert, 5, blob_buffer(pFile), blob_size(pFile), SQLITE_STATIC ); }else{ unsigned int nIn = blob_size(pFile); unsigned long int nOut = nIn; sqlite3_bind_int(p->pInsert, 2, mPerm==PERM_EXE ? 0100755 : 0100644); sqlite3_bind_int(p->pInsert, 4, nIn); zip_blob_minsize(&p->tmp, nIn); compress( (unsigned char*) blob_buffer(&p->tmp), &nOut, (unsigned char*)blob_buffer(pFile), nIn ); if( nOut>=(unsigned long)nIn ){ sqlite3_bind_blob(p->pInsert, 5, blob_buffer(pFile), blob_size(pFile), SQLITE_STATIC ); }else{ sqlite3_bind_blob(p->pInsert, 5, blob_buffer(&p->tmp), nOut, SQLITE_STATIC ); } } } sqlite3_step(p->pInsert); sqlite3_reset(p->pInsert); } static void zip_add_file( Archive *p, const char *zName, const Blob *pFile, int mPerm ){ if( p->eType==ARCHIVE_ZIP ){ zip_add_file_to_zip(p, zName, pFile, mPerm); }else{ zip_add_file_to_sqlar(p, zName, pFile, mPerm); } |
︙ | ︙ | |||
784 785 786 787 788 789 790 | " || substr(blob.uuid, 1, 10)" " FROM event, blob" " WHERE event.objid=%d" " AND blob.rid=%d", db_get("project-name", "unnamed"), rid, rid ); } | | | 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 | " || substr(blob.uuid, 1, 10)" " FROM event, blob" " WHERE event.objid=%d" " AND blob.rid=%d", db_get("project-name", "unnamed"), rid, rid ); } zip_of_checkin(eType, rid, zOut ? &zip : 0, zName, pInclude, pExclude, listFlag); glob_free(pInclude); glob_free(pExclude); if( zOut ){ blob_write_to_file(&zip, zOut); blob_reset(&zip); } |
︙ | ︙ | |||
945 946 947 948 949 950 951 | zInclude = P("in"); if( zInclude ) pInclude = glob_create(zInclude); zExclude = P("ex"); if( zExclude ) pExclude = glob_create(zExclude); if( zInclude==0 && zExclude==0 ){ etag_check_for_invariant_name(z); } | | | | 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 | zInclude = P("in"); if( zInclude ) pInclude = glob_create(zInclude); zExclude = P("ex"); if( zExclude ) pExclude = glob_create(zExclude); if( zInclude==0 && zExclude==0 ){ etag_check_for_invariant_name(z); } if( eType==ARCHIVE_ZIP && nName>4 && fossil_strcmp(&zName[nName-4], ".zip")==0 ){ /* Special case: Remove the ".zip" suffix. */ nName -= 4; zName[nName] = 0; }else if( eType==ARCHIVE_SQLAR && nName>6 && fossil_strcmp(&zName[nName-6], ".sqlar")==0 ){ /* Special case: Remove the ".sqlar" suffix. */ nName -= 6; zName[nName] = 0; }else{ |
︙ | ︙ |
Changes to test/amend.test.
︙ | ︙ | |||
304 305 306 307 308 309 310 311 312 313 | set t5exp "*" foreach tag $tagt { lappend tags -tag $tag lappend cancels -cancel $tag } foreach res $result { append t1exp ", $res" append t3exp "Add*tag*\"$res\".*" append t5exp "Cancel*tag*\"$res\".*" } | > < < < | 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 | set t5exp "*" foreach tag $tagt { lappend tags -tag $tag lappend cancels -cancel $tag } foreach res $result { append t1exp ", $res" append t2exp "sym-$res*" append t3exp "Add*tag*\"$res\".*" append t5exp "Cancel*tag*\"$res\".*" } eval fossil amend $HASH $tags test amend-tag-$tc.1 {[string match "*hash:*$HASH*tags:*$t1exp*" $RESULT]} fossil tag ls --raw $HASH test amend-tag-$tc.2 {[string match $t2exp $RESULT]} fossil timeline -n 1 test amend-tag-$tc.3 {[string match $t3exp $RESULT]} eval fossil amend $HASH $cancels |
︙ | ︙ |
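The new `append t2exp "sym-$res*"` line builds a glob that is matched against `fossil tag ls --raw` output, where symbolic (branch/tag) names carry a `sym-` prefix. The same pattern construction and match, sketched in Python with `fnmatch` standing in for Tcl's `string match` (the sample raw output below is hypothetical):

```python
from fnmatch import fnmatchcase

def expected_raw_tag_glob(tags):
    """Build a glob like the test's $t2exp: a leading '*' plus one
    'sym-<name>*' segment per tag, in order."""
    pattern = "*"
    for t in tags:
        pattern += f"sym-{t}*"
    return pattern

# Hypothetical `fossil tag ls --raw` output after adding two tags.
raw_output = "sym-alpha\nsym-beta\n"
assert fnmatchcase(raw_output, expected_raw_tag_glob(["alpha", "beta"]))
# Missing tag => no match, just as the Tcl glob would fail.
assert not fnmatchcase("sym-alpha\n", expected_raw_tag_glob(["alpha", "beta"]))
```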
Changes to test/commit-warning.test.
︙ | ︙ | |||
170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 | # of source files that MUST NEVER BE TEXT. # test_block_in_checkout pre-commit-warnings-fossil-1 { fossil test-commit-warning --no-settings } { test pre-commit-warnings-fossil-1 {[normalize_result] eq \ [subst -nocommands -novariables [string trim { 1\tcompat/zlib/contrib/blast/test.pk\tbinary data 1\tcompat/zlib/contrib/dotzlib/DotZLib.build\tCR/LF line endings 1\tcompat/zlib/contrib/dotzlib/DotZLib.chm\tbinary data 1\tcompat/zlib/contrib/dotzlib/DotZLib.sln\tCR/LF line endings 1\tcompat/zlib/contrib/dotzlib/DotZLib/AssemblyInfo.cs\tCR/LF line endings 1\tcompat/zlib/contrib/dotzlib/DotZLib/ChecksumImpl.cs\tinvalid UTF-8 1\tcompat/zlib/contrib/dotzlib/DotZLib/CircularBuffer.cs\tinvalid UTF-8 1\tcompat/zlib/contrib/dotzlib/DotZLib/CodecBase.cs\tinvalid UTF-8 1\tcompat/zlib/contrib/dotzlib/DotZLib/Deflater.cs\tinvalid UTF-8 1\tcompat/zlib/contrib/dotzlib/DotZLib/DotZLib.cs\tinvalid UTF-8 1\tcompat/zlib/contrib/dotzlib/DotZLib/DotZLib.csproj\tCR/LF line endings 1\tcompat/zlib/contrib/dotzlib/DotZLib/GZipStream.cs\tinvalid UTF-8 1\tcompat/zlib/contrib/dotzlib/DotZLib/Inflater.cs\tinvalid UTF-8 1\tcompat/zlib/contrib/dotzlib/DotZLib/UnitTests.cs\tCR/LF line endings 1\tcompat/zlib/contrib/dotzlib/LICENSE_1_0.txt\tCR/LF line endings 1\tcompat/zlib/contrib/dotzlib/readme.txt\tCR/LF line endings 1\tcompat/zlib/contrib/gcc_gvmat64/gvmat64.S\tCR/LF line endings 1\tcompat/zlib/contrib/puff/zeros.raw\tbinary data 1\tcompat/zlib/contrib/testzlib/testzlib.c\tCR/LF line endings 1\tcompat/zlib/contrib/testzlib/testzlib.txt\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/readme.txt\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/vc10/miniunz.vcxproj\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/vc10/miniunz.vcxproj.filters\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/vc10/minizip.vcxproj\tCR/LF line endings | > > > > > > > > > > > > | 170 171 
172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 | # of source files that MUST NEVER BE TEXT. # test_block_in_checkout pre-commit-warnings-fossil-1 { fossil test-commit-warning --no-settings } { test pre-commit-warnings-fossil-1 {[normalize_result] eq \ [subst -nocommands -novariables [string trim { 1\tart/branching.odp\tbinary data 1\tart/concept1.dia\tbinary data 1\tart/concept2.dia\tbinary data 1\tcompat/zlib/contrib/blast/test.pk\tbinary data 1\tcompat/zlib/contrib/dotzlib/DotZLib.build\tCR/LF line endings 1\tcompat/zlib/contrib/dotzlib/DotZLib.chm\tbinary data 1\tcompat/zlib/contrib/dotzlib/DotZLib.sln\tCR/LF line endings 1\tcompat/zlib/contrib/dotzlib/DotZLib/AssemblyInfo.cs\tCR/LF line endings 1\tcompat/zlib/contrib/dotzlib/DotZLib/ChecksumImpl.cs\tinvalid UTF-8 1\tcompat/zlib/contrib/dotzlib/DotZLib/CircularBuffer.cs\tinvalid UTF-8 1\tcompat/zlib/contrib/dotzlib/DotZLib/CodecBase.cs\tinvalid UTF-8 1\tcompat/zlib/contrib/dotzlib/DotZLib/Deflater.cs\tinvalid UTF-8 1\tcompat/zlib/contrib/dotzlib/DotZLib/DotZLib.cs\tinvalid UTF-8 1\tcompat/zlib/contrib/dotzlib/DotZLib/DotZLib.csproj\tCR/LF line endings 1\tcompat/zlib/contrib/dotzlib/DotZLib/GZipStream.cs\tinvalid UTF-8 1\tcompat/zlib/contrib/dotzlib/DotZLib/Inflater.cs\tinvalid UTF-8 1\tcompat/zlib/contrib/dotzlib/DotZLib/UnitTests.cs\tCR/LF line endings 1\tcompat/zlib/contrib/dotzlib/LICENSE_1_0.txt\tCR/LF line endings 1\tcompat/zlib/contrib/dotzlib/readme.txt\tCR/LF line endings 1\tcompat/zlib/contrib/gcc_gvmat64/gvmat64.S\tCR/LF line endings 1\tcompat/zlib/contrib/masmx64/bld_ml64.bat\tCR/LF line endings 1\tcompat/zlib/contrib/masmx64/gvmat64.asm\tCR/LF line endings 1\tcompat/zlib/contrib/masmx64/inffas8664.c\tCR/LF line endings 1\tcompat/zlib/contrib/masmx64/inffasx64.asm\tCR/LF line endings 1\tcompat/zlib/contrib/masmx64/readme.txt\tCR/LF line endings 
1\tcompat/zlib/contrib/masmx86/bld_ml32.bat\tCR/LF line endings 1\tcompat/zlib/contrib/masmx86/inffas32.asm\tCR/LF line endings 1\tcompat/zlib/contrib/masmx86/match686.asm\tCR/LF line endings 1\tcompat/zlib/contrib/masmx86/readme.txt\tCR/LF line endings 1\tcompat/zlib/contrib/puff/zeros.raw\tbinary data 1\tcompat/zlib/contrib/testzlib/testzlib.c\tCR/LF line endings 1\tcompat/zlib/contrib/testzlib/testzlib.txt\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/readme.txt\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/vc10/miniunz.vcxproj\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/vc10/miniunz.vcxproj.filters\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/vc10/minizip.vcxproj\tCR/LF line endings |
︙ | ︙ | |||
229 230 231 232 233 234 235 | 1\tcompat/zlib/contrib/vstudio/vc9/zlibstat.vcproj\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/vc9/zlibvc.def\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/vc9/zlibvc.sln\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/vc9/zlibvc.vcproj\tCR/LF line endings 1\tcompat/zlib/win32/zlib.def\tCR/LF line endings 1\tcompat/zlib/zlib.3.pdf\tbinary data 1\tcompat/zlib/zlib.map\tCR/LF line endings | < > | 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 | 1\tcompat/zlib/contrib/vstudio/vc9/zlibstat.vcproj\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/vc9/zlibvc.def\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/vc9/zlibvc.sln\tCR/LF line endings 1\tcompat/zlib/contrib/vstudio/vc9/zlibvc.vcproj\tCR/LF line endings 1\tcompat/zlib/win32/zlib.def\tCR/LF line endings 1\tcompat/zlib/zlib.3.pdf\tbinary data 1\tcompat/zlib/zlib.map\tCR/LF line endings 1\tskins/blitz/arrow_project.png\tbinary data 1\tskins/blitz/dir.png\tbinary data 1\tskins/blitz/file.png\tbinary data 1\tskins/blitz/fossil_100.png\tbinary data 1\tskins/blitz/fossil_80_reversed_darkcyan.png\tbinary data 1\tskins/blitz/fossil_80_reversed_darkcyan_text.png\tbinary data 1\tskins/blitz/rss_20.png\tbinary data 1\tskins/bootstrap/css.txt\tlong lines 1\tsrc/alerts/bflat2.wav\tbinary data 1\tsrc/alerts/bflat3.wav\tbinary data 1\tsrc/alerts/bloop.wav\tbinary data 1\tsrc/alerts/plunk.wav\tbinary data 1\tsrc/sounds/0.wav\tbinary data 1\tsrc/sounds/1.wav\tbinary data 1\tsrc/sounds/2.wav\tbinary data |
︙ | ︙ |
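The expanded expectation list above exercises `fossil test-commit-warning`, which labels each file as "binary data", "CR/LF line endings", "invalid UTF-8", or "long lines". A rough Python sketch of how such a classifier could work; the check order, the NUL-byte binary heuristic, and the line-length threshold are illustrative assumptions, not fossil's actual implementation:

```python
def classify(data: bytes, max_line: int = 8192):
    """Return a warning label for file content, or None if it looks clean."""
    if b"\x00" in data:
        return "binary data"          # embedded NUL => treat as binary
    try:
        data.decode("utf-8")
    except UnicodeDecodeError:
        return "invalid UTF-8"
    if b"\r\n" in data:
        return "CR/LF line endings"
    if any(len(line) > max_line for line in data.split(b"\n")):
        return "long lines"
    return None

assert classify(b"hello\x00world") == "binary data"
assert classify(b"line one\r\nline two\r\n") == "CR/LF line endings"
assert classify(b"\xffabc") == "invalid UTF-8"
assert classify(b"plain text\n") is None
```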
Changes to test/delta1.test.
︙ | ︙ | |||
23 24 25 26 27 28 29 | # Use test script files as the basis for this test. # # For each test, copy the file intact to "./t1". Make # some random changes in "./t2". Then call test-delta on the # two files to make sure that deltas between these two files # work properly. # | | | 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 | # Use test script files as the basis for this test. # # For each test, copy the file intact to "./t1". Make # some random changes in "./t2". Then call test-delta on the # two files to make sure that deltas between these two files # work properly. # set filelist [glob $testdir/*] foreach f $filelist { if {[file isdir $f]} continue set base [file root [file tail $f]] set f1 [read_file $f] write_file t1 $f1 for {set i 0} {$i<100} {incr i} { write_file t2 [random_changes $f1 1 1 0 0.1] |
︙ | ︙ |
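delta1.test is a property test: for every test-suite file it writes the original to t1, generates one hundred randomly mutated copies as t2, and checks that a delta from t1 to t2 reconstructs t2 exactly. The same round-trip idea can be sketched in Python, with `difflib` opcodes standing in for fossil's delta format (an assumption — fossil's real encoding is its own compact binary delta, not this):

```python
import difflib

def make_delta(a: str, b: str):
    """Encode b relative to a as (copy, i1, i2) / (insert, text) instructions."""
    ops = []
    sm = difflib.SequenceMatcher(None, a, b, autojunk=False)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))
        elif tag in ("replace", "insert"):
            ops.append(("insert", b[j1:j2]))
        # 'delete' needs no instruction: that span of a is simply not copied
    return ops

def apply_delta(a: str, ops):
    out = []
    for op in ops:
        if op[0] == "copy":
            _, i1, i2 = op
            out.append(a[i1:i2])
        else:
            out.append(op[1])
    return "".join(out)

original = "111 aaa\n222 bbb\n333 ccc\n"
mutated = "111 aaa\n222 BBB!\n333 ccc\n444 ddd\n"
assert apply_delta(original, make_delta(original, mutated)) == mutated
```

The round-trip property holds because the opcodes partition the target string: every character of `b` is produced by exactly one "equal", "replace", or "insert" segment.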
Changes to test/diff.test.
︙ | ︙ | |||
107 108 109 110 111 112 113 | test diff-file5-1 {[normalize_result] eq {Index: file5.dat ================================================================== --- file5.dat +++ file5.dat cannot compute difference between binary files}} | < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < | 107 108 109 110 111 112 113 114 115 116 | test diff-file5-1 {[normalize_result] eq {Index: file5.dat ================================================================== --- file5.dat +++ file5.dat cannot compute difference between binary files}} ############################################################################### test_cleanup |
Changes to test/fake-editor.tcl.
︙ | ︙ | |||
48 49 50 51 52 53 54 | return "" } ############################################################################### set fileName [lindex $argv 0] | < < < < < | 48 49 50 51 52 53 54 55 56 57 58 59 60 61 | return "" } ############################################################################### set fileName [lindex $argv 0] if {[file exists $fileName]} { set data [readFile $fileName] } else { set data "" } ############################################################################### |
︙ | ︙ |
Changes to test/json.test.
︙ | ︙ | |||
175 176 177 178 179 180 181 | proc test_json_payload {testname okfields badfields} { test_dict_keys $testname [dict get $::JR payload] $okfields $badfields } #### VERSION AKA HAI # The JSON API generally assumes we have a respository, so let it have one. | < < < < < < < < | 175 176 177 178 179 180 181 182 183 184 185 186 187 188 | proc test_json_payload {testname okfields badfields} { test_dict_keys $testname [dict get $::JR payload] $okfields $badfields } #### VERSION AKA HAI # The JSON API generally assumes we have a respository, so let it have one. test_setup # Stop backoffice from running during this test as it can cause hangs. fossil settings backoffice-disable 1 # Check for basic envelope fields in the result with an error fossil_json -expectError |
︙ | ︙ | |||
282 283 284 285 286 287 288 | test_json_payload json-login-a {authToken name capabilities loginCookieName} {} set AuthAnon [dict get $JR payload] proc test_hascaps {testname need caps} { foreach n [split $need {}] { test $testname-$n {[string first $n $caps] >= 0} } } | | | 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 | test_json_payload json-login-a {authToken name capabilities loginCookieName} {} set AuthAnon [dict get $JR payload] proc test_hascaps {testname need caps} { foreach n [split $need {}] { test $testname-$n {[string first $n $caps] >= 0} } } test_hascaps json-login-c "hmnc" [dict get $AuthAnon capabilities] fossil user new U1 User-1 Uone fossil user capabilities U1 s write_file u1 { { "command":"login", "payload":{ |
︙ | ︙ | |||
893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 | # Fossil repository db file could not be found. fossil close fossil_json HAI -expectError test json-RC-4102-CLI-exit {$CODE != 0} test_json_envelope json-RC-4102-CLI-exit {fossil timestamp command procTimeUs \ procTimeMs resultCode resultText} {payload} test json-RC-4102 {[dict get $JR resultCode] eq "FOSSIL-4102"} # FOSSIL-4103 FSL_JSON_E_DB_NOT_VALID # Fossil repository db file is not valid. write_file nope.fossil { This is not a fossil repo. It ought to be a SQLite db with a well-known schema, but it is actually just a block of text. } fossil_json HAI -R nope.fossil -expectError test json-RC-4103-CLI-exit {$CODE != 0} if { $JR ne "" } { test_json_envelope json-RC-4103-CLI {fossil timestamp command procTimeUs \ procTimeMs resultCode resultText} {payload} test json-RC-4103 {[dict get $JR resultCode] eq "FOSSIL-4103"} } else { test json-RC-4103 0 knownBug } ############################################################################### test_cleanup | > < < < < < < | 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 | # Fossil repository db file could not be found. fossil close fossil_json HAI -expectError test json-RC-4102-CLI-exit {$CODE != 0} test_json_envelope json-RC-4102-CLI-exit {fossil timestamp command procTimeUs \ procTimeMs resultCode resultText} {payload} test json-RC-4102 {[dict get $JR resultCode] eq "FOSSIL-4102"} fossil open .rep.fossil # FOSSIL-4103 FSL_JSON_E_DB_NOT_VALID # Fossil repository db file is not valid. write_file nope.fossil { This is not a fossil repo. It ought to be a SQLite db with a well-known schema, but it is actually just a block of text. 
} fossil_json HAI -R nope.fossil -expectError test json-RC-4103-CLI-exit {$CODE != 0} if { $JR ne "" } { test_json_envelope json-RC-4103-CLI {fossil timestamp command procTimeUs \ procTimeMs resultCode resultText} {payload} test json-RC-4103 {[dict get $JR resultCode] eq "FOSSIL-4103"} } else { test json-RC-4103 0 knownBug } ############################################################################### test_cleanup |
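Much of json.test validates the response "envelope": every reply must carry the top-level fields fossil, timestamp, command, procTimeUs, procTimeMs, resultCode, and resultText, while payload must be absent on errors. A minimal Python analogue of the suite's test_json_envelope/test_dict_keys helpers; the field names come from the test above, but the helper and the sample reply values are a sketch:

```python
def check_envelope(reply: dict, required, forbidden=()):
    """Return a list of problems: required keys missing, forbidden keys present."""
    problems = [f"missing: {k}" for k in required if k not in reply]
    problems += [f"unexpected: {k}" for k in forbidden if k in reply]
    return problems

REQUIRED = ["fossil", "timestamp", "command", "procTimeUs",
            "procTimeMs", "resultCode", "resultText"]

# Hypothetical error reply, shaped like the FOSSIL-4102 case above.
error_reply = {
    "fossil": "2.23", "timestamp": 1700000000, "command": "HAI",
    "procTimeUs": 1200, "procTimeMs": 1,
    "resultCode": "FOSSIL-4102", "resultText": "repository not found",
}
assert check_envelope(error_reply, REQUIRED, forbidden=["payload"]) == []
assert check_envelope({}, ["fossil"]) == ["missing: fossil"]
```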
Changes to test/merge1.test.
︙ | ︙ | |||
71 72 73 74 75 76 77 | 111 - This is line one OF the demo program - 1111 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } write_file_indented t23 { | | | | | | | | | | | | 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 | 111 - This is line one OF the demo program - 1111 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } write_file_indented t23 { <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<< 111 - This is line ONE of the demo program - 1111 ||||||| COMMON ANCESTOR content follows |||||||||||||||||||||||||||| 111 - This is line one of the demo program - 1111 ======= MERGED IN content follows ================================== 111 - This is line one OF the demo program - 1111 >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } write_file_indented t32 { <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<< 111 - This is line one OF the demo program - 1111 ||||||| COMMON ANCESTOR content follows |||||||||||||||||||||||||||| 111 - This is line one of the demo program - 1111 ======= MERGED IN content follows ================================== 111 - This is line ONE of the demo program - 1111 >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well 
and other stuff too - 5555 } fossil 3-way-merge t1 t3 t2 a32 test merge1-2.1 {[same_file t32 a32]} fossil 3-way-merge t1 t2 t3 a23 test merge1-2.2 {[same_file t23 a23]} write_file_indented t1 { 111 - This is line one of the demo program - 1111 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 |
︙ | ︙ | |||
156 157 158 159 160 161 162 | write_file_indented t3 { 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } write_file_indented t32 { | | | | | | | | | | | | 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 | write_file_indented t3 { 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } write_file_indented t32 { <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<< ||||||| COMMON ANCESTOR content follows |||||||||||||||||||||||||||| 111 - This is line one of the demo program - 1111 ======= MERGED IN content follows ================================== 000 - Zero lines added to the beginning of - 0000 111 - This is line one of the demo program - 1111 >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } write_file_indented t23 { <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<< 000 - Zero lines added to the beginning of - 0000 111 - This is line one of the demo program - 1111 ||||||| COMMON ANCESTOR content follows |||||||||||||||||||||||||||| 111 - This is line one of the demo program - 1111 ======= MERGED IN content follows ================================== >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff 
too - 5555 } fossil 3-way-merge t1 t3 t2 a32 test merge1-4.1 {[same_file t32 a32]} fossil 3-way-merge t1 t2 t3 a23 test merge1-4.2 {[same_file t23 a23]} write_file_indented t1 { 111 - This is line one of the demo program - 1111 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 |
︙ | ︙ | |||
295 296 297 298 299 300 301 | KLMN OPQR STUV XYZ. } write_file_indented t23 { abcd | | | | | | | 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 | KLMN OPQR STUV XYZ. } write_file_indented t23 { abcd <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<< efgh 2 ijkl 2 mnop 2 qrst uvwx yzAB 2 CDEF 2 GHIJ 2 ||||||| COMMON ANCESTOR content follows |||||||||||||||||||||||||||| efgh ijkl mnop qrst uvwx yzAB CDEF GHIJ ======= MERGED IN content follows ================================== efgh ijkl mnop 3 qrst 3 uvwx 3 yzAB 3 CDEF GHIJ >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> KLMN OPQR STUV XYZ. } fossil 3-way-merge t1 t2 t3 a23 test merge1-7.1 {[same_file t23 a23]} write_file_indented t2 { abcd efgh 2 ijkl 2 mnop |
︙ | ︙ | |||
363 364 365 366 367 368 369 | KLMN OPQR STUV XYZ. } write_file_indented t23 { abcd | | | | | | | 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 | KLMN OPQR STUV XYZ. } write_file_indented t23 { abcd <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<< efgh 2 ijkl 2 mnop qrst uvwx yzAB 2 CDEF 2 GHIJ 2 ||||||| COMMON ANCESTOR content follows |||||||||||||||||||||||||||| efgh ijkl mnop qrst uvwx yzAB CDEF GHIJ ======= MERGED IN content follows ================================== efgh ijkl mnop 3 qrst 3 uvwx 3 yzAB 3 CDEF GHIJ >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> KLMN OPQR STUV XYZ. } fossil 3-way-merge t1 t2 t3 a23 test merge1-7.2 {[same_file t23 a23]} ############################################################################### test_cleanup |
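The merge1.test expectations above pin down fossil's four-part conflict block: local copy first, then the common ancestor, then the merged-in content, closed by an END line. A small Python parser for that layout, keying on the distinctive marker prefixes (a sketch for illustration, not fossil code):

```python
def split_conflict(lines):
    """Split one fossil merge-conflict block into (mine, ancestor, yours)."""
    mine, ancestor, yours = [], [], []
    section = None
    for line in lines:
        if line.startswith("<<<<<<< BEGIN MERGE CONFLICT"):
            section = mine
        elif line.startswith("||||||| COMMON ANCESTOR"):
            section = ancestor
        elif line.startswith("======= MERGED IN"):
            section = yours
        elif line.startswith(">>>>>>> END MERGE CONFLICT"):
            section = None
        elif section is not None:
            section.append(line)
    return mine, ancestor, yours

block = [
    "<<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<<",
    "111 - local",
    "||||||| COMMON ANCESTOR content follows ||||||||||||||||||||||||||||",
    "111 - ancestor",
    "======= MERGED IN content follows ==================================",
    "111 - merged in",
    ">>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>",
]
mine, ancestor, yours = split_conflict(block)
assert (mine, ancestor, yours) == (
    ["111 - local"], ["111 - ancestor"], ["111 - merged in"])
```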
Changes to test/merge2.test.
︙ | ︙ | |||
16 17 18 19 20 21 22 | ############################################################################ # # Tests of the delta mechanism. # test_setup "" | | | 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 | ############################################################################ # # Tests of the delta mechanism. # test_setup "" set filelist [glob $testdir/*] foreach f $filelist { if {[file isdir $f]} continue set base [file root [file tail $f]] if {[string match "utf16*" $base]} continue set f1 [read_file $f] write_file t1 $f1 for {set i 0} {$i<100} {incr i} { |
︙ | ︙ |
Changes to test/merge3.test.
︙ | ︙ | |||
16 17 18 19 20 21 22 | ############################################################################ # # Tests of the 3-way merge # test_setup "" | | | < | | < | < < | < < | < | 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 | ############################################################################ # # Tests of the 3-way merge # test_setup "" proc merge-test {testid basis v1 v2 result} { write_file t1 [join [string trim $basis] \n]\n write_file t2 [join [string trim $v1] \n]\n write_file t3 [join [string trim $v2] \n]\n fossil 3-way-merge t1 t2 t3 t4 set x [read_file t4] regsub -all {<<<<<<< BEGIN MERGE CONFLICT: local copy shown first <+} $x \ {MINE:} x regsub -all {\|\|\|\|\|\|\| COMMON ANCESTOR content follows \|+} $x {COM:} x regsub -all {======= MERGED IN content follows =+} $x {YOURS:} x regsub -all {>>>>>>> END MERGE CONFLICT >+} $x {END} x set x [split [string trim $x] \n] set result [string trim $result] if {$x!=$result} { protOut " Expected \[$result\]" protOut " Got \[$x\]" test merge3-$testid 0 } else { |
︙ | ︙ | |||
72 73 74 75 76 77 78 | 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6 7 8 9 } { 1 2 3 4 5c 6 7 8 9 } { 1 2 MINE: 3b 4b 5b COM: 3 4 5 YOURS: 3 4 5c END 6 7 8 9 | < > < > < > < > < > < > | 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 | 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6 7 8 9 } { 1 2 3 4 5c 6 7 8 9 } { 1 2 MINE: 3b 4b 5b COM: 3 4 5 YOURS: 3 4 5c END 6 7 8 9 } merge-test 4 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6b 7 8 9 } { 1 2 3 4 5c 6 7 8 9 } { 1 2 MINE: 3b 4b 5b 6b COM: 3 4 5 6 YOURS: 3 4 5c 6 END 7 8 9 } merge-test 5 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6b 7 8 9 } { 1 2 3 4 5c 6c 7c 8 9 } { 1 2 MINE: 3b 4b 5b 6b 7 COM: 3 4 5 6 7 YOURS: 3 4 5c 6c 7c END 8 9 } merge-test 6 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6b 7 8b 9 } { 1 2 3 4 5c 6c 7c 8 9 } { 1 2 MINE: 3b 4b 5b 6b 7 COM: 3 4 5 6 7 YOURS: 3 4 5c 6c 7c END 8b 9 } merge-test 7 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6b 7 8b 9 } { 1 2 3 4 5c 6c 7c 8c 9 } { 1 2 MINE: 3b 4b 5b 6b 7 8b COM: 3 4 5 6 7 8 YOURS: 3 4 5c 6c 7c 8c END 9 } merge-test 8 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6b 7 8b 9b } { 1 2 3 4 5c 6c 7c 8c 9 } { 1 2 MINE: 3b 4b 5b 6b 7 8b 9b COM: 3 4 5 6 7 8 9 YOURS: 3 4 5c 6c 7c 8c 9 END } merge-test 9 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5 6 7 8b 9b } { 1 2 3 4 5c 6c 7c 8 9 } { |
︙ | ︙ | |||
145 146 147 148 149 150 151 | 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5 6 7 8b 9b } { 1 2 3b 4c 5 6c 7c 8 9 } { 1 2 MINE: 3b 4b COM: 3 4 YOURS: 3b 4c END 5 6c 7c 8b 9b | < > | 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 | 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5 6 7 8b 9b } { 1 2 3b 4c 5 6c 7c 8 9 } { 1 2 MINE: 3b 4b COM: 3 4 YOURS: 3b 4c END 5 6c 7c 8b 9b } merge-test 12 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b4b 5 6 7 8b 9b } { 1 2 3b4b 5 6c 7c 8 9 } { |
︙ | ︙ | |||
200 201 202 203 204 205 206 | 1 2 3 4 5 6 7 8 9 } { 1 6 7 8 9 } { 1 2 3 4 9 } { 1 MINE: 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END 9 | < > < > | 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 | 1 2 3 4 5 6 7 8 9 } { 1 6 7 8 9 } { 1 2 3 4 9 } { 1 MINE: 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END 9 } merge-test 25 { 1 2 3 4 5 6 7 8 9 } { 1 7 8 9 } { 1 2 3 9 } { 1 MINE: 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 END 9 } merge-test 30 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 5 6 7 9 } { 1 3 4 5 6 7 8 9 |
︙ | ︙ | |||
255 256 257 258 259 260 261 | 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 9 } { 1 6 7 8 9 } { 1 MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 6 7 8 END 9 | < > < > | 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 | 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 9 } { 1 6 7 8 9 } { 1 MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 6 7 8 END 9 } merge-test 35 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 9 } { 1 7 8 9 } { 1 MINE: 2 3 COM: 2 3 4 5 6 7 8 YOURS: 7 8 END 9 } merge-test 40 { 2 3 4 5 6 7 8 } { 3 4 5 6 7 8 } { 2 3 4 5 6 7 |
︙ | ︙ | |||
310 311 312 313 314 315 316 | 2 3 4 5 6 7 8 } { 6 7 8 } { 2 3 4 } { MINE: 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END | < > < > | 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 | 2 3 4 5 6 7 8 } { 6 7 8 } { 2 3 4 } { MINE: 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END } merge-test 45 { 2 3 4 5 6 7 8 } { 7 8 } { 2 3 } { MINE: 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 END } merge-test 50 { 2 3 4 5 6 7 8 } { 2 3 4 5 6 7 } { 3 4 5 6 7 8 |
︙ | ︙ | |||
364 365 366 367 368 369 370 | 2 3 4 5 6 7 8 } { 2 3 4 } { 6 7 8 } { MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 6 7 8 END | < > < > | 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 | 2 3 4 5 6 7 8 } { 2 3 4 } { 6 7 8 } { MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 6 7 8 END } merge-test 55 { 2 3 4 5 6 7 8 } { 2 3 } { 7 8 } { MINE: 2 3 COM: 2 3 4 5 6 7 8 YOURS: 7 8 END } merge-test 60 { 1 2 3 4 5 6 7 8 9 } { 1 2b 3 4 5 6 7 8 9 } { 1 2 3 4 5 6 7 9 |
︙ | ︙ | |||
419 420 421 422 423 424 425 | 1 2 3 4 5 6 7 8 9 } { 1 2b 3b 4b 5b 6 7 8 9 } { 1 2 3 4 9 } { 1 MINE: 2b 3b 4b 5b 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END 9 | < > < > | 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 | 1 2 3 4 5 6 7 8 9 } { 1 2b 3b 4b 5b 6 7 8 9 } { 1 2 3 4 9 } { 1 MINE: 2b 3b 4b 5b 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END 9 } merge-test 65 { 1 2 3 4 5 6 7 8 9 } { 1 2b 3b 4b 5b 6b 7 8 9 } { 1 2 3 9 } { 1 MINE: 2b 3b 4b 5b 6b 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 END 9 } merge-test 70 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 5 6 7 9 } { 1 2b 3 4 5 6 7 8 9 |
︙ | ︙ | |||
474 475 476 477 478 479 480 | 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 9 } { 1 2b 3b 4b 5b 6 7 8 9 } { 1 MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 2b 3b 4b 5b 6 7 8 END 9 | < > < > | 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 | 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 9 } { 1 2b 3b 4b 5b 6 7 8 9 } { 1 MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 2b 3b 4b 5b 6 7 8 END 9 } merge-test 75 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 9 } { 1 2b 3b 4b 5b 6b 7 8 9 } { 1 MINE: 2 3 COM: 2 3 4 5 6 7 8 YOURS: 2b 3b 4b 5b 6b 7 8 END 9 } merge-test 80 { 2 3 4 5 6 7 8 } { 2b 3 4 5 6 7 8 } { 2 3 4 5 6 7 |
︙ | ︙ | |||
529 530 531 532 533 534 535 | 2 3 4 5 6 7 8 } { 2b 3b 4b 5b 6 7 8 } { 2 3 4 } { MINE: 2b 3b 4b 5b 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END | < > < > | 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 | 2 3 4 5 6 7 8 } { 2b 3b 4b 5b 6 7 8 } { 2 3 4 } { MINE: 2b 3b 4b 5b 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END } merge-test 85 { 2 3 4 5 6 7 8 } { 2b 3b 4b 5b 6b 7 8 } { 2 3 } { MINE: 2b 3b 4b 5b 6b 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 END } merge-test 90 { 2 3 4 5 6 7 8 } { 2 3 4 5 6 7 } { 2b 3 4 5 6 7 8 |
︙ | ︙ | |||
584 585 586 587 588 589 590 | 2 3 4 5 6 7 8 } { 2 3 4 } { 2b 3b 4b 5b 6 7 8 } { MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 2b 3b 4b 5b 6 7 8 END | < > < > | 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 | 2 3 4 5 6 7 8 } { 2 3 4 } { 2b 3b 4b 5b 6 7 8 } { MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 2b 3b 4b 5b 6 7 8 END } merge-test 95 { 2 3 4 5 6 7 8 } { 2 3 } { 2b 3b 4b 5b 6b 7 8 } { MINE: 2 3 COM: 2 3 4 5 6 7 8 YOURS: 2b 3b 4b 5b 6b 7 8 END } merge-test 100 { 1 2 3 4 5 6 7 8 9 } { 1 2b 3 4 5 7 8 9 a b c d e } { 1 2b 3 4 5 7 8 9 a b c d e |
︙ | ︙ | |||
630 631 632 633 634 635 636 | 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 5 7 8 9b } { 1 2 3 4 5 7 8 9b a b c d e } { 1 2 3 4 5 7 8 MINE: 9b COM: 9 YOURS: 9b a b c d e END | < > < > | 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 | 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 5 7 8 9b } { 1 2 3 4 5 7 8 9b a b c d e } { 1 2 3 4 5 7 8 MINE: 9b COM: 9 YOURS: 9b a b c d e END } merge-test 104 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 5 7 8 9b a b c d e } { 1 2 3 4 5 7 8 9b } { 1 2 3 4 5 7 8 MINE: 9b a b c d e COM: 9 YOURS: 9b END } ############################################################################### test_cleanup |
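The merge-test proc in merge3.test normalizes the verbose conflict markers down to the MINE:/COM:/YOURS:/END tokens used in every expected result above. The same normalization, translated from Tcl regsub to Python `re.sub` (patterns transcribed from the test; the sample merged text is made up):

```python
import re

def normalize_markers(text: str) -> str:
    """Collapse fossil's conflict markers to short tokens, like merge-test does."""
    text = re.sub(r"<<<<<<< BEGIN MERGE CONFLICT: local copy shown first <+",
                  "MINE:", text)
    text = re.sub(r"\|\|\|\|\|\|\| COMMON ANCESTOR content follows \|+",
                  "COM:", text)
    text = re.sub(r"======= MERGED IN content follows =+", "YOURS:", text)
    text = re.sub(r">>>>>>> END MERGE CONFLICT >+", "END", text)
    return text

merged = (
    "1 2\n"
    "<<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<<\n"
    "3b\n"
    "||||||| COMMON ANCESTOR content follows ||||||||||||||||||||||||||||\n"
    "3\n"
    "======= MERGED IN content follows ==================================\n"
    "3c\n"
    ">>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n"
    "4 5\n"
)
tokens = normalize_markers(merged).split()
assert tokens == ["1", "2", "MINE:", "3b", "COM:", "3", "YOURS:", "3c",
                  "END", "4", "5"]
```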
Changes to test/merge4.test.
︙ | ︙ | |||
16 17 18 19 20 21 22 | ############################################################################ # # Tests of the 3-way merge # test_setup "" | | | | | | | | | 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 | ############################################################################ # # Tests of the 3-way merge # test_setup "" proc merge-test {testid basis v1 v2 result1 result2} { write_file t1 [join [string trim $basis] \n]\n write_file t2 [join [string trim $v1] \n]\n write_file t3 [join [string trim $v2] \n]\n fossil 3-way-merge t1 t2 t3 t4 fossil 3-way-merge t1 t3 t2 t5 set x [read_file t4] regsub -all {<<<<<<< BEGIN MERGE CONFLICT.*<<} $x {>} x regsub -all {\|\|\|\|\|\|\|.*=======} $x {=} x regsub -all {>>>>>>> END MERGE CONFLICT.*>>>>} $x {<} x set x [split [string trim $x] \n] set y [read_file t5] regsub -all {<<<<<<< BEGIN MERGE CONFLICT.*<<} $y {>} y regsub -all {\|\|\|\|\|\|\|.*=======} $y {=} y regsub -all {>>>>>>> END MERGE CONFLICT.*>>>>} $y {<} y set y [split [string trim $y] \n] set result1 [string trim $result1] if {$x!=$result1} { protOut " Expected \[$result1\]" protOut " Got \[$x\]" test merge4-$testid 0 |
︙ | ︙ | |||
59 60 61 62 63 64 65 | 1 2b 3b 4b 5 6b 7b 8b 9 } { 1 2 3 4c 5c 6c 7 8 9 } { 1 > 2b 3b 4b 5 6b 7b 8b = 2 3 4c 5c 6c 7 8 < 9 } { 1 > 2 3 4c 5c 6c 7 8 = 2b 3b 4b 5 6b 7b 8b < 9 | < > | 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 | 1 2b 3b 4b 5 6b 7b 8b 9 } { 1 2 3 4c 5c 6c 7 8 9 } { 1 > 2b 3b 4b 5 6b 7b 8b = 2 3 4c 5c 6c 7 8 < 9 } { 1 > 2 3 4c 5c 6c 7 8 = 2b 3b 4b 5 6b 7b 8b < 9 } merge-test 1001 { 1 2 3 4 5 6 7 8 9 } { 1 2b 3b 4 5 6 7b 8b 9 } { 1 2 3 4c 5c 6c 7 8 9 } { |
︙ | ︙ | |||
81 82 83 84 85 86 87 | 2b 3b 4b 5 6b 7b 8b } { 2 3 4c 5c 6c 7 8 } { > 2b 3b 4b 5 6b 7b 8b = 2 3 4c 5c 6c 7 8 < } { > 2 3 4c 5c 6c 7 8 = 2b 3b 4b 5 6b 7b 8b < | < > | 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 | 2b 3b 4b 5 6b 7b 8b } { 2 3 4c 5c 6c 7 8 } { > 2b 3b 4b 5 6b 7b 8b = 2 3 4c 5c 6c 7 8 < } { > 2 3 4c 5c 6c 7 8 = 2b 3b 4b 5 6b 7b 8b < } merge-test 1003 { 2 3 4 5 6 7 8 } { 2b 3b 4 5 6 7b 8b } { 2 3 4c 5c 6c 7 8 } { |
︙ | ︙ |
Changes to test/merge5.test.
︙ | ︙ | |||
14 15 16 17 18 19 20 | # http://www.hwaci.com/drh/ # ############################################################################ # # Tests of the "merge" command # | < | < | 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 | # http://www.hwaci.com/drh/ # ############################################################################ # # Tests of the "merge" command # puts "Skipping Merge5 tests" protOut { fossil sqlite3 --no-repository reacts badly to SQL dumped from repositories created from fossil older than version 2.0. } test merge5-sqlite3-issue false knownBug test_cleanup_then_return |
︙ | ︙ |
Changes to test/merge_renames.test.
︙ | ︙ | |||
260 261 262 263 264 265 266 | fossil update trunk write_file f1 "f1.2" fossil add f1 fossil commit -b b2 -m "add f1" fossil update trunk fossil merge b1 | | < | | < | | 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 | fossil update trunk write_file f1 "f1.2" fossil add f1 fossil commit -b b2 -m "add f1" fossil update trunk fossil merge b1 fossil merge b2 test_status_list merge_renames-8-1 $RESULT { WARNING: no common ancestor for f1 } fossil revert fossil merge --integrate b1 fossil merge b2 test_status_list merge_renames-8-2 $RESULT { WARNING: no common ancestor for f1 } ############################################# # Test 9 # # Merging a delete/rename/add combination # ############################################# |
︙ | ︙ | |||
308 309 310 311 312 313 314 | ADDED f1 } test_status_list merge_renames-9-1 $RESULT $expectedMerge fossil changes test_status_list merge_renames-9-2 $RESULT " MERGED_WITH [commit_id b] ADDED_BY_MERGE f1 | | | | | 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 | ADDED f1 } test_status_list merge_renames-9-1 $RESULT $expectedMerge fossil changes test_status_list merge_renames-9-2 $RESULT " MERGED_WITH [commit_id b] ADDED_BY_MERGE f1 RENAMED f2 DELETED f2 (overwritten by rename) " test_file_contents merge_renames-9-3 f1 "f1.1" test_file_contents merge_renames-9-4 f2 "f1" # Undo and ensure a dry run merge results in no changes fossil undo test_status_list merge_renames-9-5 $RESULT { UNDO f1 UNDO f2 } fossil merge -n b test_status_list merge_renames-9-6 $RESULT " $expectedMerge REMINDER: this was a dry run - no files were actually changed. " test merge_renames-9-7 {[fossil changes] eq ""} ################################################################### |
︙ | ︙ | |||
368 369 370 371 372 373 374 | test_status_list merge_renames-10-4 $RESULT { RENAME f1 -> f2 RENAME f2 -> f1 } test_file_contents merge_renames-10-5 f1 "f1" test_file_contents merge_renames-10-6 f2 "f2" test_status_list merge_renames-10-7 [fossil changes] " | | | | 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 | test_status_list merge_renames-10-4 $RESULT { RENAME f1 -> f2 RENAME f2 -> f1 } test_file_contents merge_renames-10-5 f1 "f1" test_file_contents merge_renames-10-6 f2 "f2" test_status_list merge_renames-10-7 [fossil changes] " RENAMED f1 RENAMED f2 BACKOUT [commit_id trunk] " fossil commit -m "swap back" ;# V fossil merge b test_status_list merge_renames-10-8 $RESULT { UPDATE f1 |
︙ | ︙ | |||
495 496 497 498 499 500 501 | ADD f2 } fossil merge trunk fossil commit -m "merge trunk" --tag c4 fossil mv --hard f2 f2n test_status_list merge_renames-13-3 $RESULT " RENAME f2 f2n | | | 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 | ADD f2 } fossil merge trunk fossil commit -m "merge trunk" --tag c4 fossil mv --hard f2 f2n test_status_list merge_renames-13-3 $RESULT " RENAME f2 f2n MOVED_FILE $repoDir/f2 " fossil commit -m "renamed f2->f2n" --tag c5 fossil update trunk fossil merge b test_status_list merge_renames-13-4 $RESULT {ADDED f2n} fossil commit -m "merge f2n" --tag m1 --tag c6 |
︙ | ︙ |
Changes to test/merge_warn.test.
︙ | ︙ | |||
38 39 40 41 42 43 44 | write_file f4 "f4" fossil add f4 fossil commit -m "add f4" fossil update trunk write_file f1 "f1.1" write_file f3 "f3.1" | | < | | | | > | 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 | write_file f4 "f4" fossil add f4 fossil commit -m "add f4" fossil update trunk write_file f1 "f1.1" write_file f3 "f3.1" fossil merge --integrate mrg test_status_list merge_warn-1 $RESULT { WARNING: no common ancestor for f2 DELETE f1 WARNING: local edits lost for f1 ADDED f3 (overwrites an unmanaged file) WARNING: 1 merge conflicts WARNING: 1 unmanaged files were overwritten } test merge_warn-2 { [string first "ignoring --integrate: mrg is not a leaf" $RESULT]>=0 } ############################################################################### |
︙ | ︙ |
Changes to test/release-checklist.wiki.
︙ | ︙ | |||
45 46 47 48 49 50 51 | <li><p> Shift-click on each of the links in [./fileage-test-1.wiki] and verify correct operation of the file-age computation. <li><p> Verify correct name-change tracking behavior (no net changes) for: | > | | | 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 | <li><p> Shift-click on each of the links in [./fileage-test-1.wiki] and verify correct operation of the file-age computation. <li><p> Verify correct name-change tracking behavior (no net changes) for: <blockquote><b> fossil test-name-changes --debug b120bc8b262ac 374920b20944b </b></blockquote> <li><p> Compile for all of the following platforms: <ol type="a"> <li> Linux x86 <li> Linux x86_64 <li> Mac x86 |
︙ | ︙ |
Changes to test/revert.test.
︙ | ︙ | |||
96 97 98 99 100 101 102 | # Test with a single filename argument # revert-test 1-2 f0 { UNMANAGE f0 } -changes { DELETED f1 EDITED f2 | | | | | 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 | # Test with a single filename argument # revert-test 1-2 f0 { UNMANAGE f0 } -changes { DELETED f1 EDITED f2 RENAMED f3n } -addremove { ADDED f0 } -exists {f0 f2 f3n} -notexists f3 revert-test 1-3 f1 { REVERT f1 } -changes { ADDED f0 EDITED f2 RENAMED f3n } -exists {f0 f1 f2 f3n} -notexists f3 revert-test 1-4 f2 { REVERT f2 } -changes { ADDED f0 DELETED f1 RENAMED f3n } -exists {f0 f2 f3n} -notexists {f1 f3} # Both files involved in a rename are reverted regardless of which filename # is used as an argument to 'fossil revert' # revert-test 1-5 f3 { REVERT f3 |
︙ | ︙ |
Deleted test/rewrite-test-output.tcl.
Changes to test/set-manifest.test.
︙ | ︙ | |||
44 45 46 47 48 49 50 | test_setup #### Verify classic behavior of the manifest setting # Setting is off by default, and there are no extra files. fossil settings manifest test "set-manifest-1" {[regexp {^manifest *$} $RESULT]} | | | | 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 | test_setup #### Verify classic behavior of the manifest setting # Setting is off by default, and there are no extra files. fossil settings manifest test "set-manifest-1" {[regexp {^manifest *$} $RESULT]} set filelist [glob -nocomplain manifest*] test "set-manifest-1-n" {[llength $filelist] == 0} # Classic behavior: TRUE value creates manifest and manifest.uuid set truths [list true on 1] foreach v $truths { fossil settings manifest $v test "set-manifest-2-$v" {$RESULT eq ""} fossil settings manifest test "set-manifest-2-$v-a" {[regexp "^manifest\\s+\\(local\\)\\s+$v\\s*$" $RESULT]} set filelist [glob manifest*] test "set-manifest-2-$v-n" {[llength $filelist] == 2} foreach f $filelist { test "set-manifest-2-$v-f-$f" {[file isfile $f]} } } # ... and manifest.uuid is the checkout's hash |
︙ | ︙ | |||
86 87 88 89 90 91 92 | # Classic behavior: FALSE value removes manifest and manifest.uuid set falses [list false off 0] foreach v $falses { fossil settings manifest $v test "set-manifest-3-$v" {$RESULT eq ""} fossil settings manifest test "set-manifest-3-$v-a" {[regexp "^manifest\\s+\\(local\\)\\s+$v\\s*$" $RESULT]} | | | | | 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 | # Classic behavior: FALSE value removes manifest and manifest.uuid set falses [list false off 0] foreach v $falses { fossil settings manifest $v test "set-manifest-3-$v" {$RESULT eq ""} fossil settings manifest test "set-manifest-3-$v-a" {[regexp "^manifest\\s+\\(local\\)\\s+$v\\s*$" $RESULT]} set filelist [glob -nocomplain manifest*] test "set-manifest-3-$v-n" {[llength $filelist] == 0} } # Classic behavior: unset removes manifest and manifest.uuid fossil unset manifest test "set-manifest-4" {$RESULT eq ""} fossil settings manifest test "set-manifest-4-a" {[regexp {^manifest *$} $RESULT]} set filelist [glob -nocomplain manifest*] test "set-manifest-4-n" {[llength $filelist] == 0} ##### Tags Manifest feature extends the manifest setting # Manifest Tags: use letters r, u, and t to select each of manifest, # manifest.uuid, and manifest.tags files. set truths [list r u t ru ut rt rut] foreach v $truths { fossil settings manifest $v test "set-manifest-5-$v" {$RESULT eq ""} fossil settings manifest test "set-manifest-5-$v-a" {[regexp "^manifest\\s+\\(local\\)\\s+$v\\s*$" $RESULT]} set filelist [glob manifest*] test "set-manifest-5-$v-n" {[llength $filelist] == [string length $v]} foreach f $filelist { test "set-manifest-5-$v-f-$f" {[file isfile $f]} } } # Quick check for tags applied in trunk |
︙ | ︙ |
Changes to test/settings-repo.test.
︙ | ︙ | |||
38 39 40 41 42 43 44 | set all_settings [get_all_settings] foreach name $all_settings { # # HACK: Make 100% sure that there are no non-default setting values # present anywhere. # | < < < | < | | | | | | | | | 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 | set all_settings [get_all_settings] foreach name $all_settings { # # HACK: Make 100% sure that there are no non-default setting values # present anywhere. # fossil unset $name --exact --global fossil unset $name --exact # # NOTE: Query for the hard-coded default value of this setting and # save it. # fossil test-th-eval "setting $name" set defaults($name) [normalize_result] } ############################################################################### fossil settings bad-setting some_value test settings-set-bad-local { [normalize_result] eq "no such setting: bad-setting" } fossil settings bad-setting some_value --global test settings-set-bad-global { [normalize_result] eq "no such setting: bad-setting" } ############################################################################### fossil unset bad-setting test settings-unset-bad-local { [normalize_result] eq "no such setting: bad-setting" } fossil unset bad-setting --global test settings-unset-bad-global { [normalize_result] eq "no such setting: bad-setting" } ############################################################################### fossil settings ssl some_value test settings-set-ambiguous-local { [normalize_result] eq "ambiguous setting \"ssl\" - might be: ssl-ca-location ssl-identity" } fossil settings ssl some_value --global test settings-set-ambiguous-global { [normalize_result] eq "ambiguous setting \"ssl\" - might be: ssl-ca-location ssl-identity" } ############################################################################### fossil unset ssl test settings-unset-ambiguous-local { [normalize_result] eq "ambiguous setting \"ssl\" - might be: ssl-ca-location ssl-identity" } fossil unset ssl --global test settings-unset-ambiguous-global { [normalize_result] eq "ambiguous setting \"ssl\" - might be: ssl-ca-location ssl-identity" } ###############################################################################
︙ | ︙ | |||
244 245 246 247 248 249 250 | [regexp -- [string map [list %name% $name] $pattern(5)] $data] } fossil test-th-eval --open-config "setting $name" set data [normalize_result] test settings-set-check2-versionable-$name { | | | 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 | [regexp -- [string map [list %name% $name] $pattern(5)] $data] } fossil test-th-eval --open-config "setting $name" set data [normalize_result] test settings-set-check2-versionable-$name { $data eq $value } file delete $fileName fossil settings $name --exact set data [normalize_result] |
︙ | ︙ |
Changes to test/settings.test.
︙ | ︙ | |||
90 91 92 93 94 95 96 | set data [normalize_result] test settings-query-local-$name { [regexp -- [string map [list %name% $name] $pattern(1)] $data] || [regexp -- [string map [list %name% $name] $pattern(2)] $data] } | < < < | < | | | 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 | set data [normalize_result] test settings-query-local-$name { [regexp -- [string map [list %name% $name] $pattern(1)] $data] || [regexp -- [string map [list %name% $name] $pattern(2)] $data] } fossil settings $name --exact --global set data [normalize_result] if {$name eq "manifest"} { test settings-query-global-$name { $data eq "cannot set 'manifest' globally" } } else { test settings-query-global-$name { [regexp -- [string map [list %name% $name] $pattern(1)] $data] || [regexp -- [string map [list %name% $name] $pattern(2)] $data] } } } ############################################################################### fossil settings bad-setting test settings-query-bad-local { [normalize_result] eq "no such setting: bad-setting" } fossil settings bad-setting --global test settings-query-bad-global { [normalize_result] eq "no such setting: bad-setting" } ############################################################################### test_cleanup |
Changes to test/stash.test.
︙ | ︙ | |||
139 140 141 142 143 144 145 | test stash-1-list-1 {[regexp {^1: \[[0-9a-z]+\] on } [first_data_line]]} test stash-1-list-2 {[regexp {^\s+stash 1\s*$} [second_data_line]]} set diff_stash_1 {DELETE f1 Index: f1 ================================================================== --- f1 | | | | 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 | test stash-1-list-1 {[regexp {^1: \[[0-9a-z]+\] on } [first_data_line]]} test stash-1-list-2 {[regexp {^\s+stash 1\s*$} [second_data_line]]} set diff_stash_1 {DELETE f1 Index: f1 ================================================================== --- f1 +++ f1 @@ -1,1 +0,0 @@ -f1 CHANGED f2 --- f2 +++ f2 @@ -1,1 +1,1 @@ -f2 +f2.1 CHANGED f3n --- f3n +++ f3n ADDED f0 Index: f0 ================================================================== --- f0 +++ f0 @@ -0,0 +1,1 @@ +f0} ######## # fossil stash show|cat ?STASHID? ?DIFF-OPTIONS? # fossil stash [g]diff ?STASHID? ?DIFF-OPTIONS? |
︙ | ︙ | |||
183 184 185 186 187 188 189 | UPDATE f2 UPDATE f3n ADDED f0 } -changes { ADDED f0 MISSING f1 EDITED f2 | | | | 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 | UPDATE f2 UPDATE f3n ADDED f0 } -changes { ADDED f0 MISSING f1 EDITED f2 RENAMED f3n } -addremove { DELETED f1 } -exists {f0 f2 f3n} -notexists {f1 f3} # Confirm there is no longer a stash saved fossil stash list test stash-2-list {[first_data_line] eq "empty stash"} # Test stashed mv without touching the file system # Issue reported by email to fossil-users # from Warren Young, dated Tue, 9 Feb 2016 01:22:54 -0700 # with checkin [b8c7af5bd9] plus a local patch on CentOS 5 # 64 bit intel, 8-byte pointer, 4-byte integer # Stashed renamed file said: # fossil: ./src/delta.c:231: checksum: Assertion '...' failed. # Should be triggered by this stash-WY-1 test. fossil checkout --force c1 fossil clean fossil mv --soft f1 f1new stash-test WY-1 {save -m "Reported 2016-02-09"} { REVERT f1 DELETE f1new } -changes { } -addremove { } -exists {f1 f2 f3} -notexists {f1new} -knownbugs {-code -result} # TODO: add tests that verify the saved stash is sensible. Possibly # by applying it and checking results. But until the SQLITE_CONSTRAINT |
︙ | ︙ | |||
263 264 265 266 267 268 269 | ADDED f3 } -exists {f1 f2 f3} -notexists {} #fossil status fossil stash show test stash-3-1-show {[normalize_result] eq {ADDED f3 Index: f3 ================================================================== | | | 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 | ADDED f3 } -exists {f1 f2 f3} -notexists {} #fossil status fossil stash show test stash-3-1-show {[normalize_result] eq {ADDED f3 Index: f3 ================================================================== --- f3 +++ f3 @@ -0,0 +1,1 @@ +f3}} stash-test 3-1-pop {pop} { ADDED f3 } -changes { ADDED f3 |
︙ | ︙ | |||
290 291 292 293 294 295 296 | fossil commit -m "baseline" fossil mv --hard f2 f2n test_result_state stash-3-2-mv "mv --hard f2 f2n" [concat { RENAME f2 f2n MOVED_FILE} [file normalize f2] { }] -changes { | | | | 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 | fossil commit -m "baseline" fossil mv --hard f2 f2n test_result_state stash-3-2-mv "mv --hard f2 f2n" [concat { RENAME f2 f2n MOVED_FILE} [file normalize f2] { }] -changes { RENAMED f2n } -addremove { } -exists {f1 f2n} -notexists {f2} stash-test 3-2 {save -m f2n} { REVERT f2 DELETE f2n } -exists {f1 f2} -notexists {f2n} -knownbugs {-result} fossil stash show test stash-3-2-show-1 {![regexp {\sf1} $RESULT]} knownBug test stash-3-2-show-2 {[regexp {\sf2n} $RESULT]} stash-test 3-2-pop {pop} { UPDATE f1 UPDATE f2n } -changes { RENAMED f2n } -addremove { } -exists {f1 f2n} -notexists {f2} ######## # fossil stash snapshot ?-m|--comment COMMENT? ?FILES...? |
︙ | ︙ | |||
366 367 368 369 370 371 372 | file rename -force f3 f3n fossil mv f3 f3n stash-test 4-3 {snapshot -m "snap 3"} { } -changes { ADDED f0 DELETED f1 EDITED f2 | | | 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 | file rename -force f3 f3n fossil mv f3 f3n stash-test 4-3 {snapshot -m "snap 3"} { } -changes { ADDED f0 DELETED f1 EDITED f2 RENAMED f3n } -addremove { } -exists {f0 f2 f3n} -notexists {f1 f3} fossil stash diff test stash-4-3-diff-CODE {!$::CODE} knownBug fossil stash show test stash-4-3-show-1 {[regexp {DELETE f1} $RESULT]} test stash-4-3-show-2 {[regexp {CHANGED f2} $RESULT]} |
︙ | ︙ |
Changes to test/symlinks.test.
︙ | ︙ | |||
20 21 22 23 24 25 26 27 28 29 30 31 32 33 | set path [file dirname [info script]] if {$is_windows} { puts "Symlinks are not supported on Windows." test_cleanup_then_return } require_no_open_checkout ############################################################################### test_setup; set rootDir [file normalize [pwd]] | > > > > > > > < < < | 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 | set path [file dirname [info script]] if {$is_windows} { puts "Symlinks are not supported on Windows." test_cleanup_then_return } fossil test-th-eval --open-config "setting allow-symlinks" if {![string is true -strict [normalize_result]]} { puts "Symlinks are not enabled." test_cleanup_then_return } require_no_open_checkout ############################################################################### test_setup; set rootDir [file normalize [pwd]] fossil test-th-eval --open-config {repository} set repository [normalize_result] if {[string length $repository] == 0} { puts "Detection of the open repository file failed." test_cleanup_then_return } |
︙ | ︙ | |||
56 57 58 59 60 61 62 | test symlinks-dir-1 {[file exists [file join $rootDir subdirA f1.txt]] eq 1} test symlinks-dir-2 {[file exists [file join $rootDir symdirA f1.txt]] eq 1} test symlinks-dir-3 {[file exists [file join $rootDir subdirA f2.txt]] eq 1} test symlinks-dir-4 {[file exists [file join $rootDir symdirA f2.txt]] eq 1} fossil add [file join $rootDir symdirA f1.txt] | < < < | < < < | | | | | 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 | test symlinks-dir-1 {[file exists [file join $rootDir subdirA f1.txt]] eq 1} test symlinks-dir-2 {[file exists [file join $rootDir symdirA f1.txt]] eq 1} test symlinks-dir-3 {[file exists [file join $rootDir subdirA f2.txt]] eq 1} test symlinks-dir-4 {[file exists [file join $rootDir symdirA f2.txt]] eq 1} fossil add [file join $rootDir symdirA f1.txt] fossil commit -m "c1" ############################################################################### fossil ls test symlinks-dir-5 {[normalize_result] eq "symdirA/f1.txt"} ############################################################################### fossil extras test symlinks-dir-6 {[normalize_result] eq \ "subdirA/f1.txt\nsubdirA/f2.txt\nsymdirA/f2.txt"} ############################################################################### fossil close file delete [file join $rootDir subdirA f1.txt] test symlinks-dir-7 {[file exists [file join $rootDir subdirA f1.txt]] eq 0} test symlinks-dir-8 {[file exists [file join $rootDir symdirA f1.txt]] eq 0} test symlinks-dir-9 {[file exists [file join $rootDir subdirA f2.txt]] eq 1} test symlinks-dir-10 {[file exists [file join $rootDir symdirA f2.txt]] eq 1} ############################################################################### fossil open $repository set code [catch {file readlink [file join $rootDir symdirA]} result] test symlinks-dir-11 {$code == 0} test symlinks-dir-12 {$result eq [file join $rootDir subdirA]} test symlinks-dir-13 {[file exists [file join $rootDir subdirA f1.txt]] eq 1} test symlinks-dir-14 {[file exists [file join $rootDir symdirA f1.txt]] eq 1} test symlinks-dir-15 {[file exists [file join $rootDir subdirA f2.txt]] eq 1} test symlinks-dir-16 {[file exists [file join $rootDir symdirA f2.txt]] eq 1} ############################################################################### # # TODO: Add tests for symbolic links as files here, including tests with the # "allow-symlinks" setting on and off. # ############################################################################### test_cleanup
Changes to test/tester.tcl.
︙ | ︙ | |||
18 19 20 21 22 23 24 | # This is the main test script. To run a regression test, do this: # # tclsh ../test/tester.tcl ../bld/fossil # # Where ../test/tester.tcl is the name of this file and ../bld/fossil # is the name of the executable to be tested. # | < < < < < < | 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 | # This is the main test script. To run a regression test, do this: # # tclsh ../test/tester.tcl ../bld/fossil # # Where ../test/tester.tcl is the name of this file and ../bld/fossil # is the name of the executable to be tested. # # We use some things introduced in 8.6 such as lmap. auto.def should # have found us a suitable Tcl installation. package require Tcl 8.6 set testfiledir [file normalize [file dirname [info script]]] set testrundir [pwd] set testdir [file normalize [file dirname $argv0]] set fossilexe [file normalize [lindex $argv 0]] set is_windows [expr {$::tcl_platform(platform) eq "windows"}] if {$::is_windows} { if {[string length [file extension $fossilexe]] == 0} { append fossilexe .exe } set outside_fossil_repo [expr ![file exists "$::testfiledir\\..\\_FOSSIL_"]] } else { |
︙ | ︙ | |||
282 283 284 285 286 287 288 | # set result [list \ access-log \ admin-log \ allow-symlinks \ auto-captcha \ auto-hyperlink \ | < < < < < < < < < | 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 | # set result [list \ access-log \ admin-log \ allow-symlinks \ auto-captcha \ auto-hyperlink \ auto-shun \ autosync \ autosync-tries \ backoffice-disable \ backoffice-logfile \ backoffice-nodelay \ binary-glob \ case-sensitive \ chat-alert-sound \ chat-initial-history \ chat-inline-images \ chat-keep-count \ chat-keep-days \ chat-poll-timeout \ clean-glob \ clearsign \ comment-format \ crlf-glob \ crnl-glob \ default-csp \ default-perms \ diff-binary \ diff-command \ dont-push \ dotfiles \ editor \ email-admin \ email-renew-interval \ email-self \ email-send-command \ email-send-db \ email-send-dir \ email-send-method \ email-send-relayhost \ email-subname \ email-url \ empty-dirs \ encoding-glob \ exec-rel-paths \ fileedit-glob \ forbid-delta-manifests \ gdiff-command \ gmerge-command \ hash-digits \ hooks \ http-port \ https-login \ ignore-glob \ keep-glob \ localauth \ lock-timeout \ main-branch \ mainmenu \ manifest \ max-cache-entry \ max-loadavg \ max-upload \ mimetypes \ mtime-changes \ pgp-command \ preferred-diff-type \ proxy \ redirect-to-https \ relative-paths \ repo-cksum \ repolist-skin \ safe-html \ self-register \ sitemap-extra \ ssh-command \ ssl-ca-location \ ssl-identity \ tclsh \ th1-setup \ |
︙ | ︙ | |||
446 447 448 449 450 451 452 | proc require_no_open_checkout {} { if {[info exists ::env(FOSSIL_TEST_DANGEROUS_IGNORE_OPEN_CHECKOUT)] && \ $::env(FOSSIL_TEST_DANGEROUS_IGNORE_OPEN_CHECKOUT) eq "YES_DO_IT"} { return } catch {exec $::fossilexe info} res if {[regexp {local-root:} $res]} { | < < | 431 432 433 434 435 436 437 438 439 440 441 442 443 444 | proc require_no_open_checkout {} { if {[info exists ::env(FOSSIL_TEST_DANGEROUS_IGNORE_OPEN_CHECKOUT)] && \ $::env(FOSSIL_TEST_DANGEROUS_IGNORE_OPEN_CHECKOUT) eq "YES_DO_IT"} { return } catch {exec $::fossilexe info} res if {[regexp {local-root:} $res]} { set projectName <unknown> set localRoot <unknown> regexp -line -- {^project-name: (.*)$} $res dummy projectName set projectName [string trim $projectName] regexp -line -- {^local-root: (.*)$} $res dummy localRoot set localRoot [string trim $localRoot] error "Detected an open checkout of project \"$projectName\",\ |
︙ | ︙ | |||
485 486 487 488 489 490 491 | } after [expr {$try * 100}] } error "Could not delete \"$path\", error: $error" } proc test_cleanup_then_return {} { | < < | < < < < | 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 | } after [expr {$try * 100}] } error "Could not delete \"$path\", error: $error" } proc test_cleanup_then_return {} { uplevel 1 [list test_cleanup] return -code return } proc test_cleanup {} { if {$::KEEP} {return}; # All cleanup disabled? if {![info exists ::tempRepoPath]} {return} if {![file exists $::tempRepoPath]} {return} if {![file isdirectory $::tempRepoPath]} {return} set tempPathEnd [expr {[string length $::tempPath] - 1}] if {[string length $::tempPath] == 0 || \ [string range $::tempRepoPath 0 $tempPathEnd] ne $::tempPath} { error "Temporary repository path has wrong parent during cleanup." |
︙ | ︙ | |||
525 526 527 528 529 530 531 | # Finally, attempt to gracefully delete the temporary home directory, # unless forbidden by external forces. if {![info exists ::tempKeepHome]} {delete_temporary_home} } proc delete_temporary_home {} { if {$::KEEP} {return}; # All cleanup disabled? | | | 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 | # Finally, attempt to gracefully delete the temporary home directory, # unless forbidden by external forces. if {![info exists ::tempKeepHome]} {delete_temporary_home} } proc delete_temporary_home {} { if {$::KEEP} {return}; # All cleanup disabled? if {$::is_windows} { robust_delete [file join $::tempHomePath _fossil] } else { robust_delete [file join $::tempHomePath .fossil] } robust_delete $::tempHomePath } |
︙ | ︙ | |||
852 853 854 855 856 857 858 | lappend bad_test $name if {$::HALT} {exit 1} } } } set bad_test {} set ignored_test {} | < | 829 830 831 832 833 834 835 836 837 838 839 840 841 842 | lappend bad_test $name if {$::HALT} {exit 1} } } } set bad_test {} set ignored_test {} # Return a random string N characters long. # set vocabulary 01234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ" append vocabulary " ()*^!.eeeeeeeeaaaaattiioo " set nvocabulary [string length $vocabulary] proc rand_str {N} { |
︙ | ︙ | |||
1017 1018 1019 1020 1021 1022 1023 | set inFileName [file join $::tempPath [appendArgs test-http-in- $suffix]] set outFileName [file join $::tempPath [appendArgs test-http-out- $suffix]] set data [subst [read_file $dataFileName]] write_file $inFileName $data fossil http --in $inFileName --out $outFileName --ipaddr 127.0.0.1 \ | | | 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 | set inFileName [file join $::tempPath [appendArgs test-http-in- $suffix]] set outFileName [file join $::tempPath [appendArgs test-http-out- $suffix]] set data [subst [read_file $dataFileName]] write_file $inFileName $data fossil http --in $inFileName --out $outFileName --ipaddr 127.0.0.1 \ $repository --localauth --th-trace set result [expr {[file exists $outFileName] ? [read_file $outFileName] : ""}] if {1} { catch {file delete $inFileName} catch {file delete $outFileName} } |
︙ | ︙ | |||
1094 1095 1096 1097 1098 1099 1100 | } error] != 0} { error "Could not write file \"$tempFile\" in directory \"$tempPath\",\ please set TEMP variable in environment, error: $error" } set tempHomePath [file join $tempPath home_[pid]] | < < < < < < < < < < < < < < < < < < < < < < < < < < | 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 | } error] != 0} { error "Could not write file \"$tempFile\" in directory \"$tempPath\",\ please set TEMP variable in environment, error: $error" } set tempHomePath [file join $tempPath home_[pid]] if {[catch { file mkdir $tempHomePath } error] != 0} { error "Could not make directory \"$tempHomePath\",\ please set TEMP variable in environment, error: $error" } protInit $fossilexe set ::tempKeepHome 1 foreach testfile $argv { protOut "***** $testfile ******" if { [catch {source $testdir/$testfile.test} testerror testopts] } { test test-framework-$testfile 0 protOut "!!!!! $testfile: $testerror" protOutDict $testopts" } else { test test-framework-$testfile 1 } protOut "***** End of $testfile: [llength $bad_test] errors so far ******" } unset ::tempKeepHome; delete_temporary_home set nErr [llength $bad_test] if {$nErr>0 || !$::QUIET} { protOut "***** Final results: $nErr errors out of $test_count tests" 1 } if {$nErr>0} { protOut "***** Considered failures: $bad_test" 1 } set nErr [llength $ignored_test] if {$nErr>0 || !$::QUIET} { protOut "***** Ignored results: $nErr ignored errors out of $test_count tests" 1 } if {$nErr>0} { protOut "***** Ignored failures: $ignored_test" 1 } |
Changes to test/th1-docs.test.
︙ | ︙ | |||
27 28 29 30 31 32 33 34 35 36 | fossil test-th-eval "hasfeature tcl" if {[normalize_result] ne "1"} { puts "Fossil was not compiled with Tcl support." test_cleanup_then_return } ############################################################################### | > > > > > > > > | > | > < < < < < < < < < > | | | > | | 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 | fossil test-th-eval "hasfeature tcl" if {[normalize_result] ne "1"} { puts "Fossil was not compiled with Tcl support." test_cleanup_then_return } if {$::outside_fossil_repo} { puts "Skipping th1-docs-* tests: not in Fossil repo checkout." test_cleanup_then_return } elseif ($::dirty_ckout) { puts "Skipping th1-docs-* tests: uncommitted changes in Fossil checkout." test_cleanup_then_return } ############################################################################### test_setup "" ############################################################################### set env(TH1_ENABLE_DOCS) 1; # TH1 docs must be enabled for this test. set env(TH1_ENABLE_TCL) 1; # Tcl integration must be enabled for this test.
############################################################################### run_in_checkout { set data [fossil info] } regexp -line -- {^repository: (.*)$} $data dummy repository if {[string length $repository] == 0 || ![file exists $repository]} { error "unable to locate repository" } set dataFileName [file join $::testdir th1-docs-input.txt] ############################################################################### run_in_checkout { set RESULT [test_fossil_http \ $repository $dataFileName /doc/trunk/test/fileStat.th1] } test th1-docs-1a {[regexp {<title>Fossil: test/fileStat.th1</title>} $RESULT]} test th1-docs-1b {[regexp {>\[[0-9a-f]{40,64}\]<} $RESULT]} test th1-docs-1c {[regexp { contains \d+ files\.} $RESULT]} ############################################################################### test_cleanup |
Changes to test/th1-hooks.test.
︙ | ︙ | |||
144 145 146 147 148 149 150 | test th1-cmd-hooks-1b {[normalize_result] eq \ {<h1><b>command_hook timeline</b></h1> +++ some stuff here +++ <h1><b>command_hook timeline command_notify timeline</b></h1>}} ############################################################################### | | | 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 | test th1-cmd-hooks-1b {[normalize_result] eq \ {<h1><b>command_hook timeline</b></h1> +++ some stuff here +++ <h1><b>command_hook timeline command_notify timeline</b></h1>}} ############################################################################### fossil timeline custom3; # NOTE: Bad "WHEN" argument. test th1-cmd-hooks-1c {[normalize_result] eq \ {<h1><b>command_hook timeline</b></h1> unknown check-in or invalid date: custom3}} ############################################################################### |
︙ | ︙ | |||
194 195 196 197 198 199 200 | fossil test3 test th1-custom-cmd-3a {[string trim $RESULT] eq \ {<h1><b>command_hook test3</b></h1>}} ############################################################################### | | | 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 | fossil test3 test th1-custom-cmd-3a {[string trim $RESULT] eq \ {<h1><b>command_hook test3</b></h1>}} ############################################################################### fossil test4 test th1-custom-cmd-4a {[first_data_line] eq \ {<h1><b>command_hook test4</b></h1>}} test th1-custom-cmd-4b {[regexp -- \ {: unknown command: test4$} [second_data_line]]} |
︙ | ︙ |
Changes to test/th1-tcl.test.
︙ | ︙ | |||
75 76 77 78 79 80 81 | } ############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl3.txt]] | | | | | | | | | | < < < < < < < | | 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 | } ############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl3.txt]] test th1-tcl-3 {$RESULT eq {<hr /><p class="thmainError">ERROR:\ invalid command name "bad_command"</p>}} ############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl4.txt]] test th1-tcl-4 {$RESULT eq {<hr /><p class="thmainError">ERROR:\ divide by zero</p>}} ############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl5.txt]] test th1-tcl-5 {$RESULT eq {<hr /><p class="thmainError">ERROR:\ Tcl command not found: bad_command</p>} || $RESULT eq {<hr /><p\ class="thmainError">ERROR: invalid command name "bad_command"</p>}} ############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl6.txt]] test th1-tcl-6 {$RESULT eq {<hr /><p class="thmainError">ERROR:\ no such command: bad_command</p>}} ############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl7.txt]] test th1-tcl-7 {$RESULT eq {<hr /><p class="thmainError">ERROR:\ syntax error in expression: "2**0"</p>}} ############################################################################### fossil 
test-th-render --open-config \ [file nativename [file join $path th1-tcl8.txt]] test th1-tcl-8 {$RESULT eq {<hr /><p class="thmainError">ERROR:\ cannot invoke Tcl command: tailcall</p>} || $RESULT eq {<hr /><p\ class="thmainError">ERROR: tailcall can only be called from a proc or\ lambda</p>} || $RESULT eq {<hr /><p class="thmainError">ERROR: This test\ requires Tcl 8.6 or higher.</p>}} ############################################################################### fossil test-th-render --open-config \ [file nativename [file join $path th1-tcl9.txt]] test th1-tcl-9 {[string trim $RESULT] eq [list [file tail $fossilexe] 3 \ [list test-th-render --open-config [file nativename [file join $path \ th1-tcl9.txt]]]]} ############################################################################### fossil test-th-eval "tclMakeSafe a" test th1-tcl-10 {[normalize_result] eq \ |
︙ | ︙ |
Changes to test/th1.test.
︙ | ︙ | |||
728 729 730 731 732 733 734 | ############################################################################### fossil test-th-eval "trace {}" test th1-trace-1 {$RESULT eq {}} ############################################################################### | | | | | | | | | | < < | 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 | ############################################################################### fossil test-th-eval "trace {}" test th1-trace-1 {$RESULT eq {}} ############################################################################### fossil test-th-eval --th-trace "trace {}" set normalized_result [normalize_result] regsub -- {\n/\*\*\*\*\* Subprocess \d+ exit\(\d+\) \*\*\*\*\*/} \ $normalized_result {} normalized_result if {$th1Hooks} { test th1-trace-2 {$normalized_result eq \ {------------------ BEGIN TRACE LOG ------------------ th1-init 0x0 => 0x0<br /> ------------------- END TRACE LOG -------------------}} } else { test th1-trace-2 {$normalized_result eq \ {------------------ BEGIN TRACE LOG ------------------ th1-init 0x0 => 0x0<br /> th1-setup {} => TH_OK<br /> ------------------- END TRACE LOG -------------------}} } ############################################################################### fossil test-th-eval "trace {this is a trace message.}" test th1-trace-3 {$RESULT eq {}} ############################################################################### fossil test-th-eval --th-trace "trace {this is a trace message.}" set normalized_result [normalize_result] regsub -- {\n/\*\*\*\*\* Subprocess \d+ exit\(\d+\) \*\*\*\*\*/} \ $normalized_result {} normalized_result if {$th1Hooks} { test th1-trace-4 {$normalized_result eq \ {------------------ BEGIN TRACE LOG ------------------ th1-init 0x0 => 
0x0<br /> this is a trace message. ------------------- END TRACE LOG -------------------}} } else { test th1-trace-4 {$normalized_result eq \ {------------------ BEGIN TRACE LOG ------------------ th1-init 0x0 => 0x0<br /> th1-setup {} => TH_OK<br /> this is a trace message. ------------------- END TRACE LOG -------------------}} } ############################################################################### fossil test-th-eval "defHeader {Page Title Here}" test th1-defHeader-1 {$RESULT eq \ {TH_ERROR: wrong # args: should be "defHeader"}} ############################################################################### fossil test-th-eval "defHeader" test th1-defHeader-2 {[string match *<body> [normalize_result]] || \ [string match "*<body class=\"\$current_feature\">" [normalize_result]]} ############################################################################### fossil test-th-eval "styleHeader {Page Title Here}" test th1-header-1 {$RESULT eq {TH_ERROR: repository unavailable}} ############################################################################### |
︙ | ︙ | |||
1021 1022 1023 1024 1025 1026 1027 | ############################################################################### fossil test-th-eval "globalState vfs" test th1-globalState-14 {[string length $RESULT] == 0} ############################################################################### | | | 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 | ############################################################################### fossil test-th-eval "globalState vfs" test th1-globalState-14 {[string length $RESULT] == 0} ############################################################################### if {$is_windows} { set altVfs win32-longpath } else { set altVfs unix-dotfile } ############################################################################### |
︙ | ︙ | |||
1062 1063 1064 1065 1066 1067 1068 | set sorted_result [lsort $RESULT] protOut "Sorted: $sorted_result" set base_commands {anoncap anycap array artifact break breakpoint \ builtin_request_js capexpr captureTh1 catch cgiHeaderLine checkout \ combobox continue copybtn date decorate defHeader dir enable_htmlify \ enable_output encode64 error expr for foreach getParameter glob_match \ globalState hascap hasfeature html htmlize http httpize if info \ | | | | | | 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 | set sorted_result [lsort $RESULT] protOut "Sorted: $sorted_result" set base_commands {anoncap anycap array artifact break breakpoint \ builtin_request_js capexpr captureTh1 catch cgiHeaderLine checkout \ combobox continue copybtn date decorate defHeader dir enable_htmlify \ enable_output encode64 error expr for foreach getParameter glob_match \ globalState hascap hasfeature html htmlize http httpize if info \ insertCsrf lappend lindex linecount list llength lsearch markdown \ nonce proc puts query randhex redirect regexp reinitialize rename \ render repository return searchable set setParameter setting stime \ string styleFooter styleHeader styleScript tclReady trace unset \ unversioned uplevel upvar utime verifyCsrf verifyLogin wiki} set tcl_commands {tclEval tclExpr tclInvoke tclIsSafe tclMakeSafe} if {$th1Tcl} { test th1-info-commands-1 {$sorted_result eq [lsort "$base_commands $tcl_commands"]} } else { test th1-info-commands-1 {$sorted_result eq [lsort "$base_commands"]} } |
︙ | ︙ |
Changes to test/unversioned.test.
︙ | ︙ | |||
25 26 27 28 29 30 31 | test_cleanup_then_return } require_no_open_checkout test_setup; set rootDir [file normalize [pwd]] | < < < < < < < < < < | | | 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 | test_cleanup_then_return } require_no_open_checkout test_setup; set rootDir [file normalize [pwd]] fossil test-th-eval --open-config {repository} set repository [normalize_result] if {[string length $repository] == 0} { puts "Detection of the open repository file failed." test_cleanup_then_return } write_file unversioned1.txt "This is unversioned file #1." write_file unversioned2.txt " This is unversioned file #2. " write_file "unversioned space.txt" "\nThis is unversioned file #3.\n" write_file unversioned4.txt "This is unversioned file #4." write_file unversioned5.txt "This is unversioned file #5." set env(VISUAL) [appendArgs \ [info nameofexecutable] " " [file join $path fake-editor.tcl]] ############################################################################### fossil unversioned test unversioned-1 {[normalize_result] eq \ [string map [list %fossil% [file nativename $fossilexe]] {Usage: %fossil%\ unversioned add|cat|edit|export|list|revert|remove|sync|touch}]} ############################################################################### fossil unversioned list test unversioned-2 {[normalize_result] eq {}} |
︙ | ︙ | |||
320 321 322 323 324 325 326 | fossil user new uvtester "Unversioned Test User" $password fossil user capabilities uvtester oy ############################################################################### foreach {pid port outTmpFile} [test_start_server $repository stopArg] {} | < | < < | < | | | 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 | fossil user new uvtester "Unversioned Test User" $password fossil user capabilities uvtester oy ############################################################################### foreach {pid port outTmpFile} [test_start_server $repository stopArg] {} puts [appendArgs "Started Fossil server, pid \"" $pid \" ", port \"" $port \".] set remote [appendArgs http://uvtester: $password @localhost: $port /] ############################################################################### set clientDir [file join $tempPath [appendArgs \ uvtest_ [string trim [clock seconds] -] _ [getSeqNo]]] set savedPwd [pwd] file mkdir $clientDir; cd $clientDir puts [appendArgs "Now in client directory \"" [pwd] \".] write_file unversioned-client1.txt "This is unversioned client file #1." ############################################################################### fossil_maybe_answer y clone $remote uvrepo.fossil fossil open -f uvrepo.fossil ############################################################################### fossil unversioned list test unversioned-45 {[normalize_result] eq {}} ############################################################################### fossil_maybe_answer y unversioned sync $remote test unversioned-46 {[regexp \ {Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 2 Artifacts sent: 0 received: 0 Round-trips: 2 Artifacts sent: 0 received: 2 \n? 
done, sent: \d+ received: \d+ ip: (?:127\.0\.0\.1|::1)} \ [normalize_result]]} ############################################################################### fossil unversioned ls test unversioned-47 {[normalize_result] eq {unversioned2.txt unversioned5.txt}} |
︙ | ︙ | |||
401 402 403 404 405 406 407 | fossil_maybe_answer y unversioned revert $remote test unversioned-52 {[regexp \ {Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 2 Artifacts sent: 0 received: 0 Round-trips: 2 Artifacts sent: 0 received: 2 | | | 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 | fossil_maybe_answer y unversioned revert $remote test unversioned-52 {[regexp \ {Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 2 Artifacts sent: 0 received: 0 Round-trips: 2 Artifacts sent: 0 received: 2 \n? done, sent: \d+ received: \d+ ip: (?:127\.0\.0\.1|::1)} \ [normalize_result]]} ############################################################################### fossil unversioned list test unversioned-53 {[regexp \ {^[0-9a-f]{12} 2016-10-01 00:00:00 30 30\ |
︙ | ︙ | |||
426 427 428 429 430 431 432 | fossil_maybe_answer y unversioned sync $remote test unversioned-55 {[regexp \ {Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 2 Artifacts sent: 1 received: 0 Round-trips: 2 Artifacts sent: 1 received: 0 | | < | < < | < | 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 | fossil_maybe_answer y unversioned sync $remote test unversioned-55 {[regexp \ {Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 1 Artifacts sent: 0 received: 0 Round-trips: 2 Artifacts sent: 1 received: 0 Round-trips: 2 Artifacts sent: 1 received: 0 \n? done, sent: \d+ received: \d+ ip: (?:127\.0\.0\.1|::1)} \ [normalize_result]]} ############################################################################### fossil close test unversioned-56 {[normalize_result] eq {}} ############################################################################### cd $savedPwd; unset savedPwd file delete -force $clientDir puts [appendArgs "Now in server directory \"" [pwd] \".] ############################################################################### set stopped [test_stop_server $stopArg $pid $outTmpFile] puts [appendArgs \ [expr {$stopped ? "Stopped" : "Could not stop"}] \ " Fossil server, pid \"" $pid "\", using argument \"" \ $stopArg \".] ############################################################################### fossil unversioned list test unversioned-57 {[regexp \ {^[0-9a-f]{12} \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} 35 35\ unversioned-client1\.txt |
︙ | ︙ |
Deleted test/update.test.
Changes to tools/codecheck1.c.
︙ | ︙ | |||
602 603 604 605 606 607 608 | if( (acType[i]=='s' || acType[i]=='z' || acType[i]=='b') ){ const char *zExpr = azArg[fmtArg+i]; if( never_safe(zExpr) ){ printf("%s:%d: Argument %d to %.*s() is not safe for" " a query parameter\n", zFilename, lnFCall, i+fmtArg, szFName, zFCall); nErr++; | | | 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 | if( (acType[i]=='s' || acType[i]=='z' || acType[i]=='b') ){ const char *zExpr = azArg[fmtArg+i]; if( never_safe(zExpr) ){ printf("%s:%d: Argument %d to %.*s() is not safe for" " a query parameter\n", zFilename, lnFCall, i+fmtArg, szFName, zFCall); nErr++; }else if( (fmtFlags & FMT_SQL)!=0 && !is_sql_safe(zExpr) ){ printf("%s:%d: Argument %d to %.*s() not safe for SQL\n", zFilename, lnFCall, i+fmtArg, szFName, zFCall); nErr++; } } } |
︙ | ︙ |
Changes to tools/makeheaders.c.
︙ | ︙ | |||
36 37 38 39 40 41 42 | #include <stdlib.h> #include <ctype.h> #include <memory.h> #include <sys/stat.h> #include <assert.h> #include <string.h> | | < | 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 | #include <stdlib.h> #include <ctype.h> #include <memory.h> #include <sys/stat.h> #include <assert.h> #include <string.h> #if defined( __MINGW32__) || defined(__DMC__) || defined(_MSC_VER) || defined(__POCC__) # ifndef WIN32 # define WIN32 # endif #else # include <unistd.h> #endif |
︙ | ︙ | |||
2225 2226 2227 2228 2229 2230 2231 | if (pToken->zText[pToken->nText-1] == '\r') { nArg--; } if( nArg==9 && strncmp(zArg,"INTERFACE",9)==0 ){ PushIfMacro(0,0,0,pToken->nLine,PS_Interface); }else if( nArg==16 && strncmp(zArg,"EXPORT_INTERFACE",16)==0 ){ PushIfMacro(0,0,0,pToken->nLine,PS_Export); }else if( nArg==15 && strncmp(zArg,"LOCAL_INTERFACE",15)==0 ){ PushIfMacro(0,0,0,pToken->nLine,PS_Local); < | | 2224 2225 2226 2227 2228 2229 2230 2231 2232 2233 2234 2235 2236 2237 2238 | if (pToken->zText[pToken->nText-1] == '\r') { nArg--; } if( nArg==9 && strncmp(zArg,"INTERFACE",9)==0 ){ PushIfMacro(0,0,0,pToken->nLine,PS_Interface); }else if( nArg==16 && strncmp(zArg,"EXPORT_INTERFACE",16)==0 ){ PushIfMacro(0,0,0,pToken->nLine,PS_Export); }else if( nArg==15 && strncmp(zArg,"LOCAL_INTERFACE",15)==0 ){ PushIfMacro(0,0,0,pToken->nLine,PS_Local); }else if( nArg==31 && strncmp(zArg,"MAKEHEADERS_STOPLOCAL_INTERFACE",31)==0 ){ PushIfMacro(0,0,0,pToken->nLine,PS_Local); }else{ PushIfMacro(0,zArg,nArg,pToken->nLine,0); } }else if( nCmd==5 && strncmp(zCmd,"ifdef",5)==0 ){ /* ** Push an #ifdef.
︙ | ︙ |
Changes to tools/mkindex.c.
︙ | ︙ | |||
36 37 38 39 40 41 42 | ** legacy commands. Test commands are unsupported commands used for testing ** and analysis only. ** ** Commands are 1st-tier by default. If the command name begins with ** "test-" or if the command name has a "test" argument, then it becomes ** a test command. If the command name has a "2nd-tier" argument or ends ** with a "*" character, it is second tier. If the command name has an "alias" | | | 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 | ** legacy commands. Test commands are unsupported commands used for testing ** and analysis only. ** ** Commands are 1st-tier by default. If the command name begins with ** "test-" or if the command name has a "test" argument, then it becomes ** a test command. If the command name has a "2nd-tier" argument or ends ** with a "*" character, it is second tier. If the command name has an "alias" ** argument or ends with a "#" character, it is an alias: another name ** (a one-to-one replacement) for a command. Examples: ** ** COMMAND: abcde* ** COMMAND: fghij 2nd-tier ** COMMAND: mnopq# ** COMMAND: rstuv alias ** COMMAND: test-xyzzy |
︙ | ︙ |
Changes to tools/skintxt2config.c.
|
| | | 1 2 3 4 5 6 7 8 | /* -*- Mode: C; tab-width: 4; indent-tabs-mode: nil; c-basic-offset: 2 -*- */ /* vim: set ts=2 et sw=2 tw=80: */ /* ** Copyright (c) 2021 Stephan Beal (https://wanderinghorse.net/home/stephan/) ** ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) |
︙ | ︙ | |||
100 101 102 103 104 105 106 | end: fclose(f); if(rc){ free(zMem); }else{ *zContent = zMem; *nContent = fpos; | | | 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 | end: fclose(f); if(rc){ free(zMem); }else{ *zContent = zMem; *nContent = fpos; } return rc; } /* ** Expects zFilename to be one of the conventional skin filename ** parts. This routine converts it to config format and emits it to ** App.ostr. |
︙ | ︙ |
Changes to tools/sqlcompattest.c.
︙ | ︙ | |||
51 52 53 54 55 56 57 | #error "Must set -DMINIMUM_SQLITE_VERSION=nn.nn.nn in auto.def" #endif #define QUOTE(VAL) #VAL #define STR(MACRO_VAL) QUOTE(MACRO_VAL) char zMinimumVersionNumber[8]="nn.nn.nn"; | | < | < | 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 | #error "Must set -DMINIMUM_SQLITE_VERSION=nn.nn.nn in auto.def" #endif #define QUOTE(VAL) #VAL #define STR(MACRO_VAL) QUOTE(MACRO_VAL) char zMinimumVersionNumber[8]="nn.nn.nn"; strncpy((char *)&zMinimumVersionNumber,STR(MINIMUM_SQLITE_VERSION),sizeof(zMinimumVersionNumber)); long major, minor, release, version; sscanf(zMinimumVersionNumber, "%li.%li.%li", &major, &minor, &release); version=(major*1000000)+(minor*1000)+release; int i; static const char *zRequiredOpts[] = { "ENABLE_FTS4", /* Required for repository search */ "ENABLE_DBSTAT_VTAB", /* Required by /repo-tabsize page */ }; /* Check minimum SQLite version number */ if( sqlite3_libversion_number()<version ){ printf("found system SQLite version %s but need %s or later, consider removing --disable-internal-sqlite\n", sqlite3_libversion(),STR(MINIMUM_SQLITE_VERSION)); return 1; } for(i=0; i<sizeof(zRequiredOpts)/sizeof(zRequiredOpts[0]); i++){ if( !sqlite3_compileoption_used(zRequiredOpts[i]) ){ printf("system SQLite library omits required build option -DSQLITE_%s\n", |
︙ | ︙ |
Changes to win/Makefile.mingw.
︙ | ︙ | |||
575 576 577 578 579 580 581 | $(SRCDIR)/../skins/default/details.txt \ $(SRCDIR)/../skins/default/footer.txt \ $(SRCDIR)/../skins/default/header.txt \ $(SRCDIR)/../skins/eagle/css.txt \ $(SRCDIR)/../skins/eagle/details.txt \ $(SRCDIR)/../skins/eagle/footer.txt \ $(SRCDIR)/../skins/eagle/header.txt \ | < < < < | 575 576 577 578 579 580 581 582 583 584 585 586 587 588 | $(SRCDIR)/../skins/default/details.txt \ $(SRCDIR)/../skins/default/footer.txt \ $(SRCDIR)/../skins/default/header.txt \ $(SRCDIR)/../skins/eagle/css.txt \ $(SRCDIR)/../skins/eagle/details.txt \ $(SRCDIR)/../skins/eagle/footer.txt \ $(SRCDIR)/../skins/eagle/header.txt \ $(SRCDIR)/../skins/khaki/css.txt \ $(SRCDIR)/../skins/khaki/details.txt \ $(SRCDIR)/../skins/khaki/footer.txt \ $(SRCDIR)/../skins/khaki/header.txt \ $(SRCDIR)/../skins/original/css.txt \ $(SRCDIR)/../skins/original/details.txt \ $(SRCDIR)/../skins/original/footer.txt \ |
︙ | ︙ |
Changes to win/Makefile.msc.
︙ | ︙ | |||
533 534 535 536 537 538 539 | "$(SRCDIR)\..\skins\default\details.txt" \ "$(SRCDIR)\..\skins\default\footer.txt" \ "$(SRCDIR)\..\skins\default\header.txt" \ "$(SRCDIR)\..\skins\eagle\css.txt" \ "$(SRCDIR)\..\skins\eagle\details.txt" \ "$(SRCDIR)\..\skins\eagle\footer.txt" \ "$(SRCDIR)\..\skins\eagle\header.txt" \ | < < < < | 533 534 535 536 537 538 539 540 541 542 543 544 545 546 | "$(SRCDIR)\..\skins\default\details.txt" \ "$(SRCDIR)\..\skins\default\footer.txt" \ "$(SRCDIR)\..\skins\default\header.txt" \ "$(SRCDIR)\..\skins\eagle\css.txt" \ "$(SRCDIR)\..\skins\eagle\details.txt" \ "$(SRCDIR)\..\skins\eagle\footer.txt" \ "$(SRCDIR)\..\skins\eagle\header.txt" \ "$(SRCDIR)\..\skins\khaki\css.txt" \ "$(SRCDIR)\..\skins\khaki\details.txt" \ "$(SRCDIR)\..\skins\khaki\footer.txt" \ "$(SRCDIR)\..\skins\khaki\header.txt" \ "$(SRCDIR)\..\skins\original\css.txt" \ "$(SRCDIR)\..\skins\original\details.txt" \ "$(SRCDIR)\..\skins\original\footer.txt" \ |
︙ | ︙ | |||
1162 1163 1164 1165 1166 1167 1168 | echo "$(SRCDIR)\../skins/default/details.txt" >> $@ echo "$(SRCDIR)\../skins/default/footer.txt" >> $@ echo "$(SRCDIR)\../skins/default/header.txt" >> $@ echo "$(SRCDIR)\../skins/eagle/css.txt" >> $@ echo "$(SRCDIR)\../skins/eagle/details.txt" >> $@ echo "$(SRCDIR)\../skins/eagle/footer.txt" >> $@ echo "$(SRCDIR)\../skins/eagle/header.txt" >> $@ | < < < < | 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 | echo "$(SRCDIR)\../skins/default/details.txt" >> $@ echo "$(SRCDIR)\../skins/default/footer.txt" >> $@ echo "$(SRCDIR)\../skins/default/header.txt" >> $@ echo "$(SRCDIR)\../skins/eagle/css.txt" >> $@ echo "$(SRCDIR)\../skins/eagle/details.txt" >> $@ echo "$(SRCDIR)\../skins/eagle/footer.txt" >> $@ echo "$(SRCDIR)\../skins/eagle/header.txt" >> $@ echo "$(SRCDIR)\../skins/khaki/css.txt" >> $@ echo "$(SRCDIR)\../skins/khaki/details.txt" >> $@ echo "$(SRCDIR)\../skins/khaki/footer.txt" >> $@ echo "$(SRCDIR)\../skins/khaki/header.txt" >> $@ echo "$(SRCDIR)\../skins/original/css.txt" >> $@ echo "$(SRCDIR)\../skins/original/details.txt" >> $@ echo "$(SRCDIR)\../skins/original/footer.txt" >> $@ |
︙ | ︙ |
Changes to win/buildmsvc.bat.
1 2 3 4 5 | @ECHO OFF :: :: buildmsvc.bat -- :: | | < < | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 | @ECHO OFF :: :: buildmsvc.bat -- :: :: This batch file attempts to build Fossil using the latest version of :: Microsoft Visual Studio installed on this machine. :: :: SETLOCAL REM SET __ECHO=ECHO REM SET __ECHO2=ECHO IF NOT DEFINED _AECHO (SET _AECHO=REM)
︙ | ︙ | |||
50 51 52 53 54 55 56 | ) REM REM Visual Studio 2017 / 2019 / 2022 REM CALL :fn_TryUseVsWhereExe IF NOT DEFINED VSWHEREINSTALLDIR GOTO skip_detectVisualStudio2017 | | < < < | 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 | ) REM REM Visual Studio 2017 / 2019 / 2022 REM CALL :fn_TryUseVsWhereExe IF NOT DEFINED VSWHEREINSTALLDIR GOTO skip_detectVisualStudio2017 SET VSVARS32=%VSWHEREINSTALLDIR%\Common7\Tools\VsDevCmd.bat IF EXIST "%VSVARS32%" ( %_AECHO% Using Visual Studio 2017 / 2019 / 2022... GOTO skip_detectVisualStudio ) :skip_detectVisualStudio2017 REM |
︙ | ︙ |
Changes to www/aboutcgi.wiki.
1 <title>How CGI Works In Fossil</title> | < | < | | < | < | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 | <title>How CGI Works In Fossil</title> <h2>Introduction</h2><blockquote> CGI or "Common Gateway Interface" is a venerable yet reliable technique for generating dynamic web content. This article gives a quick background on how CGI works and describes how Fossil can act as a CGI service. This is a "how it works" guide. This document provides background information on the CGI protocol so that you can better understand what is going on behind the scenes. If you just want to set up Fossil as a CGI server, see the [./server/ | Fossil Server Setup] page. Or if you want to develop CGI-based extensions to Fossil, see the [./serverext.wiki|CGI Server Extensions] page. </blockquote> <h2>A Quick Review Of CGI</h2><blockquote> An HTTP request is a block of text that is sent by a client application (usually a web browser) and arrives at the web server over a network connection. The HTTP request contains a URL that describes the information being requested. The URL in the HTTP request is typically the same URL that appears in the URL bar at the top of the web browser that is making the request. The URL might contain a "?" character followed by query parameters. The HTTP request will usually also contain other information such as the name of the application that made the request, whether or not the requesting application can accept a compressed reply, POST parameters from forms, and so forth. The job of the web server is to interpret the HTTP request and formulate an appropriate reply. The web server is free to interpret the HTTP request in any way it wants. But most web servers follow a similar pattern, described below. (Note: details may vary from one web server to another.)
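For concreteness, a request of the kind described above might look like this on the wire. The host, path, and header values here are hypothetical examples, not taken from any real server:

```
GET /one/two/timeline/four?c=55d7e1 HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0
Accept-Encoding: gzip
```

The User-Agent header names the requesting application, and Accept-Encoding signals that a compressed reply is acceptable.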
Suppose the filename component of the URL in the HTTP request looks like this: <blockquote><b>/one/two/timeline/four</b></blockquote> Most web servers will search their content area for files that match some prefix of the URL. The search starts with <b>/one</b>, then goes to <b>/one/two</b>, then <b>/one/two/timeline</b>, and finally <b>/one/two/timeline/four</b> is checked. The search stops at the first match. Suppose the first match is <b>/one/two</b>. If <b>/one/two</b> is an ordinary file in the content area, then that file is returned as static content. The "<b>/timeline/four</b>" suffix is silently ignored. If <b>/one/two</b> is a CGI script (or program), then the web server executes the <b>/one/two</b> script. The output generated by the script is collected and repackaged as the HTTP reply. Before executing the CGI script, the web server will set up various environment variables with information useful to the CGI script: <table border=1 cellpadding=5> <tr><th>Environment<br>Variable<th>Meaning <tr><td>GATEWAY_INTERFACE<td>Always set to "CGI/1.0" <tr><td>REQUEST_URI <td>The input URL from the HTTP request. <tr><td>SCRIPT_NAME <td>The prefix of the input URL that matches the CGI script name. In this example: "/one/two". <tr><td>PATH_INFO |
︙ | ︙ | |||
87 88 89 90 91 92 93 | The CGI script exits as soon as it generates a single reply. The web server will (usually) persist and handle multiple HTTP requests, but a CGI script handles just one HTTP request and then exits. The above is a rough outline of how CGI works. There are many details omitted from this brief discussion. See other on-line CGI tutorials for further information. | | | < | | < | 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 | The CGI script exits as soon as it generates a single reply. The web server will (usually) persist and handle multiple HTTP requests, but a CGI script handles just one HTTP request and then exits. The above is a rough outline of how CGI works. There are many details omitted from this brief discussion. See other on-line CGI tutorials for further information. </blockquote> <h2>How Fossil Acts As A CGI Program</h2> <blockquote> An appropriate CGI script for running Fossil will look something like the following: <blockquote><pre> #!/usr/bin/fossil repository: /home/www/repos/project.fossil </pre></blockquote> The first line of the script is a "[https://en.wikipedia.org/wiki/Shebang_%28Unix%29|shebang]" that tells the operating system what program to use as the interpreter for this script. On unix, when you execute a script that starts with a shebang, the operating system runs the program identified by the shebang with a single argument that is the full pathname of the script itself. |
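The two-line control file above can be read mechanically: the operating system consumes the shebang line, and the remaining "key: value" directives tell Fossil what to serve. The following Python sketch illustrates that parsing; it is an illustration only, not Fossil's actual implementation:

```python
def parse_cgi_directives(text):
    """Collect 'key: value' directives from a Fossil CGI control file.

    The shebang line is skipped because the operating system, not
    Fossil, consumes it.  (Illustrative sketch, not Fossil's code.)
    """
    directives = {}
    for line in text.splitlines():
        if line.startswith("#!"):
            continue  # shebang: handled by the OS when the script runs
        key, sep, value = line.partition(":")
        if sep:
            directives[key.strip()] = value.strip()
    return directives

script = "#!/usr/bin/fossil\nrepository: /home/www/repos/project.fossil\n"
print(parse_cgi_directives(script))
# → {'repository': '/home/www/repos/project.fossil'}
```

The same loop would pick up a "directory:" line instead of "repository:" in the multi-repository configuration described later.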
︙ | ︙ | |||
138 139 140 141 142 143 144 | With Fossil, terms of PATH_INFO beyond the webpage name are converted into the "name" query parameter. Hence, the following two URLs mean exactly the same thing to Fossil: <ol type='A'> <li> [https://fossil-scm.org/home/info/c14ecc43] <li> [https://fossil-scm.org/home/info?name=c14ecc43] </ol> | < | | < | | < | < | < | < | < < < | < < < < < | < | < < < < < | < | < < < < < < < < < < < < < < < < > | < < < < < < < < < < < < < < < < < < | | > > | | 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 | With Fossil, terms of PATH_INFO beyond the webpage name are converted into the "name" query parameter. Hence, the following two URLs mean exactly the same thing to Fossil: <ol type='A'> <li> [https://fossil-scm.org/home/info/c14ecc43] <li> [https://fossil-scm.org/home/info?name=c14ecc43] </ol> In both cases, the CGI script is called "/fossil". For case (A), the PATH_INFO variable will be "info/c14ecc43" and so the "[/help?cmd=/info|/info]" webpage will be generated and the suffix of PATH_INFO will be converted into the "name" query parameter, which identifies the artifact about which information is requested. In case (B), the PATH_INFO is just "info", but the same "name" query parameter is set explicitly by the URL itself. </blockquote> <h2>Serving Multiple Fossil Repositories From One CGI Script</h2> <blockquote> The previous example showed how to serve a single Fossil repository using a single CGI script. On a website that wants to serve multiple repositories, one could simply create multiple CGI scripts, one script for each repository. But it is also possible to serve multiple Fossil repositories from a single CGI script. 
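The equivalence between the two URL forms can be sketched as follows. This is an illustration of the rule just described (first PATH_INFO term selects the page, remaining terms become the "name" parameter), not Fossil's actual code:

```python
def page_and_name(path_info):
    """Split PATH_INFO into the webpage name and the implied
    "name" query parameter formed from any remaining terms.
    (Illustrative sketch, not Fossil's implementation.)"""
    page, _, rest = path_info.strip("/").partition("/")
    return (page, {"name": rest}) if rest else (page, {})

# Form (A): extra PATH_INFO terms become the "name" parameter ...
print(page_and_name("info/c14ecc43"))  # → ('info', {'name': 'c14ecc43'})
# ... matching form (B), where "name" is given explicitly instead.
print(page_and_name("info"))           # → ('info', {})
```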
If the CGI script for Fossil contains a "directory:" line instead of a "repository:" line, then the argument to "directory:" is the name of a directory that contains multiple repository files, each ending with ".fossil". For example:

<blockquote><pre>
#!/usr/bin/fossil
directory: /home/www/repos
</pre></blockquote>

Suppose the /home/www/repos directory contains files named <b>one.fossil</b>, <b>two.fossil</b>, and <b>subdir/three.fossil</b>. Further suppose that the name of the CGI script (relative to the root of the webserver document area) is "cgis/example2". Then to see the timeline for the "three.fossil" repository, the URL would be:

<blockquote>
<b>http://example.com/cgis/example2/subdir/three/timeline</b>
</blockquote>

Here is what happens:

<ol>
<li> The input URI on the HTTP request is
<b>/cgis/example2/subdir/three/timeline</b>
<li> The web server searches prefixes of the input URI until it finds the "cgis/example2" script. The web server then sets PATH_INFO to the "subdir/three/timeline" suffix and invokes the "cgis/example2" script.
<li> Fossil runs and sees the "directory:" line pointing to "/home/www/repos". Fossil then starts pulling terms off the front of the PATH_INFO looking for a repository. It first looks at "/home/www/repos/subdir.fossil" but there is no such repository. So then it looks at "/home/www/repos/subdir/three.fossil" and finds a repository. The PATH_INFO is shortened by removing "subdir/three/", leaving just "timeline".
<li> Fossil looks at the rest of PATH_INFO to see that the webpage requested is "timeline".
</ol>

<a id="cgivar"></a>
The web server sets many environment variables in step 2 in addition to just PATH_INFO. 
The following diagram shows a few of these variables and their relationship to the request URL: <pre> REQUEST_URI ___________________|_______________________ / \ http://example.com/cgis/example2/subdir/three/timeline?c=55d7e1 \_________/\____________/\____________________/ \______/ | | | | HTTP_HOST SCRIPT_NAME PATH_INFO QUERY_STRING </pre> </blockquote> <h2>Additional CGI Script Options</h2> <blockquote> The CGI script can have additional options used to fine-tune Fossil's behavior. See the [./cgi.wiki|CGI script documentation] for details. </blockquote> <h2>Additional Observations</h2> <blockquote><ol type="I"> <li><p> Fossil does not distinguish between the various HTTP methods (GET, PUT, DELETE, etc). Fossil figures out what it needs to do purely from the webpage term of the URI.</p></li> <li><p> Fossil does not distinguish between query parameters that are part of the URI, application/x-www-form-urlencoded or multipart/form-data encoded |
︙ | ︙ | |||
295 296 297 298 299 300 301 | converted into CGI, then Fossil creates a separate child Fossil process to handle each CGI request.</p></li> <li><p> Fossil is itself often launched using CGI. But Fossil can also then turn around and launch [./serverext.wiki|sub-CGI scripts to implement extensions].</p></li> </ol> | > | 237 238 239 240 241 242 243 244 | converted into CGI, then Fossil creates a separate child Fossil process to handle each CGI request.</p></li> <li><p> Fossil is itself often launched using CGI. But Fossil can also then turn around and launch [./serverext.wiki|sub-CGI scripts to implement extensions].</p></li> </ol> </blockquote> |
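The repository search used with the "directory:" option (peeling terms off the front of PATH_INFO until a matching ".fossil" file is found) can be sketched in shell. This is an illustration of the algorithm only, not Fossil's actual code; the directory layout mirrors the one.fossil / subdir/three.fossil example from the text:

```shell
#!/bin/sh
# Illustration (not Fossil's actual code) of the "directory:" lookup:
# peel terms off the front of PATH_INFO until "$REPO_DIR/<prefix>.fossil"
# names an existing file; whatever remains of PATH_INFO is the webpage.
REPO_DIR=$(mktemp -d)                       # stands in for /home/www/repos
mkdir -p "$REPO_DIR/subdir"
touch "$REPO_DIR/one.fossil" "$REPO_DIR/subdir/three.fossil"

PATH_INFO="subdir/three/timeline"
rest=$PATH_INFO; prefix=""; repo=""; page=""
while [ -n "$rest" ]; do
  term=${rest%%/*}                          # next term of PATH_INFO
  case $rest in */*) rest=${rest#*/} ;; *) rest="" ;; esac
  prefix=${prefix:+$prefix/}$term
  if [ -f "$REPO_DIR/$prefix.fossil" ]; then
    repo="$prefix.fossil"; page=$rest; break
  fi
done
echo "repository: $repo"                    # -> repository: subdir/three.fossil
echo "webpage: $page"                       # -> webpage: timeline
rm -rf "$REPO_DIR"
```

The first probe, subdir.fossil, fails; the second, subdir/three.fossil, succeeds, leaving "timeline" as the page name, exactly as in step 3 above.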
Changes to www/aboutdownload.wiki.
1 2 3 4 5 6 7 8 | <title>How The Fossil Download Page Works</title> <h2>1.0 Overview</h2> The [/uv/download.html|Download] page for the Fossil self-hosting repository is implemented using [./unvers.wiki|unversioned files]. The "download.html" screen itself, and the various build products are all stored as unversioned content. The download.html page | > | 1 2 3 4 5 6 7 8 9 | <title>How The Fossil Download Page Works</title> <h1 align="center">How The Download Page Works</h1> <h2>1.0 Overview</h2> The [/uv/download.html|Download] page for the Fossil self-hosting repository is implemented using [./unvers.wiki|unversioned files]. The "download.html" screen itself, and the various build products are all stored as unversioned content. The download.html page |
︙ | ︙ | |||
40 41 42 43 44 45 46 | Notice how the hyperlinks above use the "mimetype=text/plain" query parameter in order to display the file as plain text instead of the usual HTML or Javascript. The default mimetype for "download.html" is text/html. But because the entire page is enclosed within | | | | 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 | Notice how the hyperlinks above use the "mimetype=text/plain" query parameter in order to display the file as plain text instead of the usual HTML or Javascript. The default mimetype for "download.html" is text/html. But because the entire page is enclosed within

<b><div class='fossil-doc' data-title='Download Page'>...</div></b>

Fossil knows to add its standard header and footer information to the document, making it look just like any other page. See "[./embeddeddoc.wiki|embedded documentation]" for further details on how <div class='fossil-doc'> works.

With each new release, the "releases" variable in the javascript on the [/uv/download.js?mimetype=text/plain|download.js] page is edited (using "[/help?cmd=uv|fossil uv edit download.js]") to add details of the release. When the JavaScript in the "download.js" file runs, it requests a listing of all unversioned content using the /juvlist URL. ([/juvlist|sample /juvlist output]). The content of the download page is constructed by matching unversioned files against regular expressions in the "releases" variable.

Build products need to be constructed on different machines. The precompiled binary for Linux is compiled on Linux, the precompiled binary for Windows is compiled on Windows 10, and so forth. After a new release is tagged, the release manager goes around to each of the target platforms, checks out the release and compiles it, then runs [/help?cmd=uv|fossil uv add] for the build product followed by [/help?cmd=uv|fossil uv sync] to push the new build product to the [./selfhost.wiki|various servers]. 
This process is repeated for each build product. |
︙ | ︙ |
Changes to www/adding_code.wiki.
︙ | ︙ | |||
48 49 50 51 52 53 54 | source tree. Suppose one wants to add a new source code file named "xyzzy.c". The first step is to add this file to the various makefiles. Do so by editing the file tools/makemake.tcl and adding "xyzzy" (without the final ".c") to the list of source modules at the top of that script. Save the result and then run the makemake.tcl script using a TCL interpreter. The command to run the makemake.tcl script is: | < | < | | | 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 | source tree. Suppose one wants to add a new source code file named "xyzzy.c". The first step is to add this file to the various makefiles. Do so by editing the file tools/makemake.tcl and adding "xyzzy" (without the final ".c") to the list of source modules at the top of that script. Save the result and then run the makemake.tcl script using a TCL interpreter. The command to run the makemake.tcl script is:

<b>tclsh makemake.tcl</b>

The working directory must be src/ when the command above is run. Note that TCL is not normally required to build Fossil, but it is required for this step. If you do not have a TCL interpreter on your system already, one is easy to install. A popular choice is the [http://www.activestate.com/activetcl|Active Tcl] installation from ActiveState.

After the makefiles have been updated, create the xyzzy.c source file from the following template:

<blockquote><verbatim>
/*
** Copyright boilerplate goes here.
*****************************************************
** High-level description of what this module does
** goes here.
*/
#include "config.h"
#include "xyzzy.h"
#if INTERFACE
/* Exported object (structure) definitions or #defines
** go here */
#endif /* INTERFACE */

/* New code goes here */
</verbatim></blockquote>

Note in particular the <b>#include "xyzzy.h"</b> line near the top. The "xyzzy.h" file is automatically generated by makeheaders. 
Every normal Fossil source file must have a #include at the top that imports its private header file. (Some source files, such as "sqlite3.c" are exceptions to this rule. Don't worry about those exceptions. The files you write will require this #include line.) |
︙ | ︙ | |||
107 108 109 110 111 112 113 | Fossil repository and then [/help/commit|commit] your changes! <h2 id="newcmd">4.0 Creating A New Command</h2> By "commands" we mean the keywords that follow "fossil" when invoking Fossil from the command-line. So, for example, in | < | < | | | | < | | | < | 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 | Fossil repository and then [/help/commit|commit] your changes!

<h2 id="newcmd">4.0 Creating A New Command</h2>

By "commands" we mean the keywords that follow "fossil" when invoking Fossil from the command-line. So, for example, in

<b>fossil diff xyzzy.c</b>

The "command" is "diff". Commands may optionally be followed by arguments and/or options.

To create new commands in Fossil, add code (either to an existing source file, or to a new source file created as described above) according to the following template:

<blockquote><verbatim>
/*
** COMMAND: xyzzy
**
** Help text goes here. Backslashes must be escaped.
*/
void xyzzy_cmd(void){
  /* Implement the command here */
  fossil_print("Hello, World!\n");
}
</verbatim></blockquote>

The example above creates a new command named "xyzzy" that prints the message "Hello, World!" on the console. This command is a normal command that will show up in the list of commands from [/help/help|fossil help]. If you add an asterisk to the end of the command name, like this:

<blockquote><verbatim>
** COMMAND: xyzzy*
</verbatim></blockquote>

Then the command will only show up if you add the "--all" option to [/help/help|fossil help]. Or, if the command name starts with "test", then the command will be considered experimental and will only show up when the --test option is used with [/help/help|fossil help].

The example above is a fully functioning Fossil command. 
You can add the text shown to an existing Fossil source file, recompile, and then test it out by typing:

<b>./fossil xyzzy<br>
./fossil help xyzzy<br>
./fossil xyzzy --help</b>

The name of the C function that implements the command can be anything you like (as long as it does not collide with some other symbol in the Fossil code), but it is traditional to name the function "<i>commandname</i><b>_cmd</b>", as is done in the example. You could also use "printf()" instead of "fossil_print()" to generate
︙ | ︙ | |||
177 178 179 180 181 182 183 | <h2 id="newpage">5.0 Creating A New Web Page</h2> As with commands, new webpages can be added simply by inserting a function that generates the webpage together with a special header comment. A template follows: | | | | 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 | <h2 id="newpage">5.0 Creating A New Web Page</h2>

As with commands, new webpages can be added simply by inserting a function that generates the webpage together with a special header comment. A template follows:

<blockquote><verbatim>
/*
** WEBPAGE: helloworld
*/
void helloworld_page(void){
  style_header("Hello World!");
  @ <p>Hello, World!</p>
  style_footer();
}
</verbatim></blockquote>

Add the code above to a new or existing Fossil source code file, then recompile fossil, run [/help/ui|fossil ui], and enter "http://localhost:8080/helloworld" in your web browser; the routine above will generate a web page that says "Hello World." It really is that simple.
︙ | ︙ |
Changes to www/alerts.md.
︙ | ︙ | |||
89 90 91 92 93 94 95 | the "From" address above, or it could be a different value like `admin@example.com`. Save your changes. At the command line, say | | | | | | | 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 | the "From" address above, or it could be a different value like `admin@example.com`. Save your changes. At the command line, say $ fossil set email-send-command If that gives a blank value instead of `sendmail -ti`, say $ fossil set email-send-command "sendmail -ti" to force the setting. That works around a [known bug](https://fossil-scm.org/forum/forumpost/840b676410) which may be squished by the time you read this. If you're running Postfix or Exim, you might think that command is wrong, since you aren't running Sendmail. These mail servers provide a `sendmail` command for compatibility with software like Fossil that has no good reason to care exactly which SMTP server implementation is running at a given site. There may be other SMTP servers that also provide a compatible `sendmail` command, in which case they may work with Fossil using the same steps as above. <a id="status"></a> If you reload the Admin → Notification page, the Status section at the top should show: Outgoing Email: Piped to command "sendmail -ti" Pending Alerts: 0 normal, 0 digest Subscribers: 0 active, 0 total Before you move on to the next section, you might like to read up on [some subtleties](#pipe) with the "pipe to a command" method that we did not cover above. <a id="usage"></a> |
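The "pipe to a command" hand-off can be visualized without a real mail server. Fossil writes one complete message, headers included, to the command's standard input; the -t flag tells sendmail to take the recipients from those headers. In the sketch below, cat stands in for "sendmail -ti" so the hand-off is visible, and the addresses are illustrative:

```shell
#!/bin/sh
# Sketch of the "pipe to a command" delivery path. cat stands in for
# "sendmail -ti"; Fossil writes the whole message, headers first, to the
# command's standard input. Addresses are illustrative, not real.
send_cmd=cat                       # in production: sendmail -ti
printf '%s\n' \
  'From: forum@example.com' \
  'To: subscriber@example.com' \
  'Subject: New forum post' \
  '' \
  'Someone replied to your thread.' | $send_cmd
```

Whatever program you configure as the "email-send-command" must accept exactly this kind of complete message on stdin.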
︙ | ︙ | |||
153 154 155 156 157 158 159 | by the way: a user can be signed up for email alerts without having a full-fledged Fossil user account. Only when both user names are the same are the two records tied together under the hood. For more on this, see [Users vs Subscribers below](#uvs). If you are seeing the following complaint from Fossil: | > | > > | 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 | by the way: a user can be signed up for email alerts without having a full-fledged Fossil user account. Only when both user names are the same are the two records tied together under the hood. For more on this, see [Users vs Subscribers below](#uvs). If you are seeing the following complaint from Fossil: <blockquote> Use a different login with greater privilege than FOO to access /subscribe </blockquote> ...then the repository's administrator forgot to give the [**EmailAlert** capability][cap7] to that user or to a user category that the user is a member of. After a subscriber signs up for alerts for the first time, a single verification email is sent to that subscriber's given email address. |
︙ | ︙ | |||
209 210 211 212 213 214 215 | Announcement](/announce)" link at the top of the "Email Notification Setup" page. Put your email address in the "To:" line and a test message below, then press "Send Message" to verify that outgoing email is working. Another method is from the command line: | | | 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 | Announcement](/announce)" link at the top of the "Email Notification Setup" page. Put your email address in the "To:" line and a test message below, then press "Send Message" to verify that outgoing email is working. Another method is from the command line: $ fossil alerts test-message you@example.com --body README.md --subject Test That should send you an email with "Test" in the subject line and the contents of your project's `README.md` file in the body. That command assumes that your project contains a "readme" file, but of course it does, because you have followed the [Programming Style Guide Checklist][cl], right? Right. |
︙ | ︙ | |||
259 260 261 262 263 264 265 | ### Troubleshooting If email alerts aren't working, there are several useful commands you can give to figure out why. (Be sure to [`cd` into a repo checkout directory](#cd) first!) | | | | | | | | | 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 | ### Troubleshooting If email alerts aren't working, there are several useful commands you can give to figure out why. (Be sure to [`cd` into a repo checkout directory](#cd) first!) $ fossil alerts status This should give much the same information as you saw [above](#status). One difference is that, since you've created a forum post, the `pending-alerts` value should only be zero if you did in fact get the requested email alert. If it's zero, check your mailer's spam folder. If it's nonzero, continue with these troubleshooting steps. $ fossil backoffice That forces Fossil to run its ["back office" process](./backoffice.md). Its only purpose at the time of this writing is to push out alert emails, but it might do other things later. Sometimes it can get stuck and needs to be kicked. For that reason, you might want to set up a crontab entry to make sure it runs occasionally. $ fossil alerts send This should also kick off the backoffice processing, if there are any pending alerts to send out. $ fossil alert pending Show any pending alerts. The number of lines output here should equal the [status output above](#status). $ fossil test-add-alerts f5900 $ fossil alert send Manually create an email alert and push it out immediately. The `f` in the first command's final parameter means you're scheduling a "forum" alert. The integer is the ID of a forum post, which you can find by visiting `/timeline?showid` on your Fossil instance. The second command above is necessary because the `test-add-alerts` command doesn't kick off a backoffice run. 
$ fossil ale send This only does the same thing as the final command above, rather than send you an ale, as you might be hoping. Sorry. <a id="advanced"></a> ## Advanced Email Setups |
︙ | ︙ | |||
419 420 421 422 423 424 425 | corruption][rdbc] if used with a file sharing technology that doesn't use proper file locking. You can start this Tcl script as a daemon automatically on most Unix and Unix-like systems by adding the following line to the `/etc/rc.local` file of the server that hosts the repository sending email alerts: | | | 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 | corruption][rdbc] if used with a file sharing technology that doesn't use proper file locking. You can start this Tcl script as a daemon automatically on most Unix and Unix-like systems by adding the following line to the `/etc/rc.local` file of the server that hosts the repository sending email alerts: /usr/bin/tclsh /home/www/fossil/email-sender.tcl & [cj]: https://en.wikipedia.org/wiki/Chroot [rdbc]: https://www.sqlite.org/howtocorrupt.html#_filesystems_with_broken_or_missing_lock_implementations <a id="dir"></a> ### Method 3: Store in a Directory |
︙ | ︙ | |||
575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 | <a id="datades"></a> ### Data Design There are two new tables in the repository database, starting with Fossil 2.7. These tables are not created in new repositories by default. The tables only come into existence as needed when email alerts are configured and used. * <b>SUBSCRIBER</b> → The subscriber table records the email address for people who want to receive email notifications. Each subscriber has a `subscriberCode` which is a random 32-byte blob that uniquely identifies the subscriber. There are also fields to indicate what kinds of notifications the subscriber wishes to receive, whether or not the email address of the subscriber has been verified, etc. * <b>PENDING\_ALERT</b> → The PENDING\_ALERT table contains records that define events about which alert emails might need to be sent. A pending\_alert always refers to an entry in the EVENT table. The EVENT table is part of the standard schema and records timeline entries. In other words, there is one row in the EVENT table for each possible timeline entry. The PENDING\_ALERT table refers to EVENT table entries for which we might need to send alert emails. As pointed out above, ["subscribers" are distinct from "users"](#uvs). The SUBSCRIBER.SUNAME field is the optional linkage between users and subscribers. <a id="stdout"></a> | > > > > > > | 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 | <a id="datades"></a> ### Data Design There are two new tables in the repository database, starting with Fossil 2.7. These tables are not created in new repositories by default. The tables only come into existence as needed when email alerts are configured and used. * <b>SUBSCRIBER</b> → The subscriber table records the email address for people who want to receive email notifications. 
Each subscriber has a `subscriberCode` which is a random 32-byte blob that uniquely identifies the subscriber. There are also fields to indicate what kinds of notifications the subscriber wishes to receive, whether or not the email address of the subscriber has been verified, etc. * <b>PENDING\_ALERT</b> → The PENDING\_ALERT table contains records that define events about which alert emails might need to be sent. A pending\_alert always refers to an entry in the EVENT table. The EVENT table is part of the standard schema and records timeline entries. In other words, there is one row in the EVENT table for each possible timeline entry. The PENDING\_ALERT table refers to EVENT table entries for which we might need to send alert emails. There was a third table "EMAIL_BOUNCE" in Fossil versions 2.7 through 2.14. That table was intended to record email bounce history so that subscribers with excessive bounces can be turned off. But that feature was never implemented and the table was removed in Fossil 2.15. As pointed out above, ["subscribers" are distinct from "users"](#uvs). The SUBSCRIBER.SUNAME field is the optional linkage between users and subscribers. <a id="stdout"></a> |
︙ | ︙ | |||
672 673 674 675 676 677 678 | attacker with the `subscriberCode`. Nor can knowledge of the `subscriberCode` lead to an email flood or other annoyance attack, as far as I can see. If the `subscriberCodes` for a Fossil repository are ever compromised, new ones can be generated as follows: | | | 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 | attacker with the `subscriberCode`. Nor can knowledge of the `subscriberCode` lead to an email flood or other annoyance attack, as far as I can see. If the `subscriberCodes` for a Fossil repository are ever compromised, new ones can be generated as follows: UPDATE subscriber SET subscriberCode=randomblob(32); Since this then affects all new email alerts going out from Fossil, your end users may never even realize that they're getting new codes, as long as they don't click on the URLs in the footer of old alert messages. With that in mind, a Fossil server administrator could choose to randomize the `subscriberCodes` periodically, such as just before the |
︙ | ︙ |
Changes to www/backoffice.md.
︙ | ︙ | |||
77 78 79 80 81 82 83 | However, the daily digest of email notifications is handled by the backoffice. If a Fossil server can sometimes go more than a day without being accessed, then the automatic backoffice will never run, and the daily digest might not go out until somebody does visit a webpage. If this is a problem, an administrator can set up a cron job to periodically run: | | | 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 | However, the daily digest of email notifications is handled by the backoffice. If a Fossil server can sometimes go more than a day without being accessed, then the automatic backoffice will never run, and the daily digest might not go out until somebody does visit a webpage. If this is a problem, an administrator can set up a cron job to periodically run: > fossil backoffice _REPOSITORY_ That command will cause backoffice processing to occur immediately. Note that this is almost never necessary for an internet-facing Fossil repository, since most repositories will get multiple accesses per day from random robots, which will be sufficient to kick off the daily digest emails. And even for a private server, if there is very little traffic, then the daily digests are probably a no-op anyhow |
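A cron job of the kind suggested above might look like the following; the binary and repository paths are hypothetical, and the hourly schedule is just one reasonable choice:

```
# Hypothetical crontab entry: run backoffice once an hour so daily
# digests go out even if the server receives no web traffic.
0 * * * *  /usr/local/bin/fossil backoffice /home/www/repos/project.fossil
```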
︙ | ︙ | |||
100 101 102 103 104 105 106 | [Fossil Forum](https://fossil-scm.org/forum) so that we can perhaps fix the problem.) For now, the backoffice must be run manually on OpenBSD systems. To set up fully-manual backoffice, first disable the automatic backoffice using the "[backoffice-disable](/help?cmd=backoffice-disable)" setting. | | | | 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 | [Fossil Forum](https://fossil-scm.org/forum) so that we can perhaps fix the problem.) For now, the backoffice must be run manually on OpenBSD systems. To set up fully-manual backoffice, first disable the automatic backoffice using the "[backoffice-disable](/help?cmd=backoffice-disable)" setting. > fossil setting backoffice-disable on Then arrange to invoke the backoffice separately using a command like this: > fossil backoffice --poll 30 _REPOSITORY-LIST_ Multiple repositories can be named. This one command will handle launching the backoffice for all of them. There are additional useful command-line options. See the "[fossil backoffice](/help?cmd=backoffice)" documentation for details. The backoffice processes run manually using the "fossil backoffice" |
︙ | ︙ | |||
145 146 147 148 149 150 151 | "no process". Sometimes the process id will be non-zero even if there is no corresponding process. Fossil knows how to figure out whether or not a process still exists. You can print out a decoded copy of the current backoffice lease using this command: | | | 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 | "no process". Sometimes the process id will be non-zero even if there is no corresponding process. Fossil knows how to figure out whether or not a process still exists. You can print out a decoded copy of the current backoffice lease using this command: > fossil test-backoffice-lease -R _REPOSITORY_ If a system has been idle for a long time, then there will be no backoffice processes. (Either the process id entries in the lease will be zero, or there will exist no process associated with the process id.) When a new web request comes in, the system sees that no backoffice process is active and so it kicks off a separate process to run backoffice. |
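The liveness test mentioned above, deciding whether a recorded process id still names a running process, is conventionally done on unix by probing with signal 0, which delivers nothing but fails if the process is gone. A sketch (illustrative only, not Fossil's actual implementation):

```shell
#!/bin/sh
# Sketch of a process-liveness probe: sending signal 0 delivers nothing,
# but the kill fails if the process id no longer exists.
pid_alive() { kill -0 "$1" 2>/dev/null; }

sleep 2 & live=$!
pid_alive "$live" && echo "pid $live: alive"

true & gone=$!
wait "$gone"                 # reap it; the id no longer names a process
pid_alive "$gone" || echo "pid $gone: no such process"
kill "$live" 2>/dev/null
```

A stale lease is detected the same way: a nonzero process id whose probe fails is treated as "no process".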
︙ | ︙ | |||
195 196 197 198 199 200 201 | The backoffice should "just work". It should not require administrator attention. However, if you suspect that something is not working right, there are some debugging aids. We have already mentioned the command that shows the backoffice lease for a repository: | | | 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 | The backoffice should "just work". It should not require administrator attention. However, if you suspect that something is not working right, there are some debugging aids. We have already mentioned the command that shows the backoffice lease for a repository: > fossil test-backoffice-lease -R _REPOSITORY_ Running that command every few seconds should show what is going on with backoffice processing in a particular repository. There are also settings that control backoffice behavior. The "backoffice-nodelay" setting prevents the "next" process from taking a lease and sleeping. If "backoffice-nodelay" is set, that causes all |
︙ | ︙ |
Changes to www/backup.md.
︙ | ︙ | |||
133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 | # <a id="sync-solution"></a> Solution 1: Explicit Pulls The following script solves most of the above problems for the use case where you want a *nearly-complete* clone of the remote repository using nothing but the normal Fossil sync protocol. It only does so if you are logged into the remote as a user with Setup capability, however. ``` shell #!/bin/sh fossil sync --unversioned fossil configuration pull all fossil rebuild ``` The last step is needed to ensure that shunned artifacts on the remote are removed from the local clone. The second step includes `fossil conf pull shun`, but until those artifacts are actually rebuilt out of existence, your backup will be “more than complete” in the sense that it will continue to have information that the remote says should not exist any more. That would be not so much a “backup” as an “archive,” which might not be what you want. # <a id="sql-solution"></a> Solution 2: SQL-Level Backup The first method doesn’t get you a copy of the remote’s [private branches][pbr], on purpose. It may also miss other info on the remote, such as SQL-level customizations that the sync protocol can’t see. (Some [ticket system customization][tkt] schemes rely on this ability, for example.) 
You can solve such problems if you have access to the remote server, which | > > > > | | | > > > > > > > > | 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 | # <a id="sync-solution"></a> Solution 1: Explicit Pulls The following script solves most of the above problems for the use case where you want a *nearly-complete* clone of the remote repository using nothing but the normal Fossil sync protocol. It only does so if you are logged into the remote as a user with Setup capability, however. ---- ``` shell #!/bin/sh fossil sync --unversioned fossil configuration pull all fossil rebuild ``` ---- The last step is needed to ensure that shunned artifacts on the remote are removed from the local clone. The second step includes `fossil conf pull shun`, but until those artifacts are actually rebuilt out of existence, your backup will be “more than complete” in the sense that it will continue to have information that the remote says should not exist any more. That would be not so much a “backup” as an “archive,” which might not be what you want. # <a id="sql-solution"></a> Solution 2: SQL-Level Backup The first method doesn’t get you a copy of the remote’s [private branches][pbr], on purpose. It may also miss other info on the remote, such as SQL-level customizations that the sync protocol can’t see. (Some [ticket system customization][tkt] schemes rely on this ability, for example.) You can solve such problems if you have access to the remote server, which allows you to get a SQL-level backup. This requires Fossil 2.12 or newer, which added [the `backup` command][bu] to take care of locking and transaction isolation, allowing the user to safely back up an in-use repository. 
If you have SSH access to the remote server, something like this will work: ---- ``` shell #!/bin/bash bf=repo-$(date +%Y-%m-%d).fossil ssh example.com "cd museum ; fossil backup -R repo.fossil backups/$bf" && scp example.com:museum/backups/$bf ~/museum/backups ``` ---- Beware that this method does not solve [the intransitive sync problem](#ait), in and of itself: if you do a SQL-level backup of a stale repo DB, you have a *stale backup!* You should therefore run this on every node that may need to serve as a backup so that at least *one* of the backups is also up-to-date. # <a id="enc"></a> Encrypted Off-Site Backups A useful refinement that you can apply to both methods above is encrypted off-site backups. You may wish to store backups of your repositories off-site on a service such as Dropbox, Google Drive, iCloud, or Microsoft OneDrive, where you don’t fully trust the service not to leak your information. This addition to the prior scripts will encrypt the resulting backup in such a way that the cloud copy is a useless blob of noise to anyone without the key: ---- ```shell iter=152830 pass="h8TixP6Mt6edJ3d6COaexiiFlvAM54auF2AjT7ZYYn" gd="$HOME/Google Drive/Fossil Backups/$bf.xz.enc" fossil sql -R ~/museum/backups/"$bf" .dump | xz -9 | openssl enc -e -aes-256-cbc -pbkdf2 -iter $iter -pass pass:"$pass" -out "$gd" ``` ---- If you’re adding this to the first script above, remove the “`-R repo-name`” bit so you get a dump of the repository backing the current working directory. Change the `pass` value to some other long random string, and change the `iter` value to something in the hundreds of thousands range. A good source for |
︙
lacked this capability until Ventura (13.0). If you’re on Monterey (12)
or older, we recommend use of the [Homebrew][hb] OpenSSL package rather
than give up on the security afforded by use of configurable-iteration
PBKDF2. To avoid a conflict with the platform’s `openssl` binary,
Homebrew’s installation is [unlinked][hbul] by default, so you have to
give an explicit path to it, one of:

    /usr/local/opt/openssl/bin/openssl ...    # Intel x86 Macs
    /opt/homebrew/opt/openssl/bin/openssl ... # ARM Macs (“Apple silicon”)

[lssl]: https://www.libressl.org/

## <a id="rest"></a> Restoring From An Encrypted Backup

The “restore” script for the above fragment is basically an inverse of
︙
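Before trusting the encryption fragment above with real backups, it is worth verifying that the pipeline round-trips. The following self-contained sketch (the passphrase, filenames, and scratch paths are all illustrative, and it assumes `openssl` with PBKDF2 support and `xz` are installed) encrypts a dummy dump and then inverts the pipeline, which is the same decrypt-then-decompress sequence any restore script performs:

```shell
#!/bin/sh
# Round-trip check of the backup encryption and its inverse.
# Everything here is a throwaway: dummy dump, scratch dir, demo passphrase.
set -e
iter=152830
pass="example-passphrase-change-me"
dir=$(mktemp -d)
trap 'rm -rf "$dir"' EXIT

printf 'CREATE TABLE t(x);\nINSERT INTO t VALUES(42);\n' > "$dir/dump.sql"

# Forward direction: compress, then encrypt, as in the backup fragment.
xz -9 < "$dir/dump.sql" |
  openssl enc -e -aes-256-cbc -pbkdf2 -iter "$iter" -pass pass:"$pass" \
    -out "$dir/dump.sql.xz.enc"

# Inverse direction, i.e. the restore path: decrypt, then decompress.
openssl enc -d -aes-256-cbc -pbkdf2 -iter "$iter" -pass pass:"$pass" \
    -in "$dir/dump.sql.xz.enc" |
  xz -d > "$dir/restored.sql"

cmp "$dir/dump.sql" "$dir/restored.sql" && echo "round trip OK"
```

On a real restore, the decrypted dump would of course be fed into a fresh repository (for example, by piping it into `fossil sql` against a new repository file) rather than compared against the original.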
Changes to www/branching.wiki.
︙
branches identified only by the commit ID currently at its tip, being a
long string of hex digits. Therefore, Fossil conflates two concepts:
branching as intentional forking and the naming of forks as branches.
They are in fact separate concepts, but since Fossil is intended to be
used primarily by humans, we combine them in Fossil's human user
interfaces.

<blockquote>
<b>Key Distinction:</b> A branch is a <i>named, intentional</i> fork.
</blockquote>

Unnamed forks <i>may</i> be intentional, but most of the time, they're
accidental and left unnamed.

Fossil offers two primary ways to create named, intentional forks,
a.k.a. branches. First:

<pre>
$ fossil commit --branch my-new-branch-name
</pre>

This is the method we recommend for most cases: it creates a branch as
part of a check-in using the version in the current checkout directory
as its basis. (This is normally the tip of the current branch, though it
doesn't have to be. You can create a branch from an ancestor check-in on
a branch as well.) After making this branch-creating check-in, your
local working directory is switched to that branch, so that further
check-ins occur on that branch as well, as children of the tip check-in
on that branch.
The second, more complicated option is: <pre> $ fossil branch new my-new-branch-name trunk $ fossil update my-new-branch-name $ fossil commit </pre> Not only is this three commands instead of one, the first of which is longer than the entire simpler command above, you must give the second command before creating any check-ins, because until you do, your local working directory remains on the same branch it was on at the time you issued the command, so that the commit would otherwise put the new material on |
︙
<h2 id="fix">Fixing Forks</h2>

If your local checkout is on a forked branch, you can usually fix a fork
automatically with:

<pre>
$ fossil merge
</pre>

Normally you need to pass arguments to <b>fossil merge</b> to tell it
what you want to merge into the current basis view of the repository,
but without arguments, the command seeks out and fixes forks.
︙
<h2 id="bad-fork">How Can Forks Divide Development Effort?</h2>

[#dist-clone|Above], we stated that forks carry a risk that development
effort on a branch can be divided among the forks. It might not be
immediately obvious why this is so. To see it, consider this swim lane
diagram:

<verbatim type="pikchr center toggle toggle">
$laneh = 0.75

ALL: [
# Draw the lanes down
box width 3.5in height $laneh fill 0xacc9e3
box same fill 0xc5d8ef
︙
bad, which is why [./concepts.wiki#workflow|Fossil tries so hard to
avoid them], why it warns you about it when they do occur, and why it
makes it relatively [#fix|quick and painless to fix them] when they do
occur.

<h2>Review Of Terminology</h2>

<blockquote><dl>
<dt><b>Branch</b></dt>
<dd><p>A branch is a set of check-ins with the same value for their
"branch" property.</p></dd>
<dt><b>Leaf</b></dt>
<dd><p>A leaf is a check-in with no children in the same
branch.</p></dd>
<dt><b>Closed Leaf</b></dt>
<dd><p>A closed leaf is any leaf with the <b>closed</b> tag. These
leaves are intended to never be extended with descendants and hence are
omitted from lists of leaves in the command-line and web
interface.</p></dd>
<dt><b>Open Leaf</b></dt>
<dd><p>An open leaf is a leaf that is not closed.</p></dd>
<dt><b>Fork</b></dt>
<dd><p>A fork is when a check-in has two or more direct (non-merge)
children in the same branch.</p></dd>
<dt><b>Branch Point</b></dt>
<dd><p>A branch point occurs when a check-in has two or more direct
(non-merge) children in different branches. A branch point is similar
to a fork, except that the children are in different
branches.</p></dd>
</dl></blockquote>

Check-in 4 of Figure 3 is not a leaf because it has a child (check-in 5)
in the same branch. Check-in 9 of Figure 5 also has a child (check-in
10) but that child is in a different branch, so check-in 9 is a leaf.
Because of the <b>closed</b> tag on check-in 9, it is a closed leaf.
Check-in 2 of Figure 3 is considered a "fork"
︙
Changes to www/build.wiki.
︙
<ol>
<li>Point your web browser to [https://fossil-scm.org/]</li>

<li>Click on the [/timeline|Timeline] link at the top of the
page.</li>

<li>Select a version of Fossil you want to download. The latest
version on the trunk branch is usually a good choice. Click on its
link.</li>

<li>Finally, click on one of the "Zip Archive" or "Tarball" links,
according to your preference. These links build a ZIP archive or a
gzip-compressed tarball of the complete source code and download it to
your computer.</li>
</ol>

<h2>Aside: Is it really safe to use an unreleased development version of
the Fossil source code?</h2>

Yes! Any check-in on the
︙
Alternatively, running <b>./configure</b> under MSYS should give a
suitable top-level Makefile. However, options passed to configure that
are not applicable on Windows may cause the configuration or compilation
to fail (e.g. fusefs, internal-sqlite, etc).

<li><i>MSVC</i> → Use the MSVC makefile.</li>

Run all of the following from a "x64 Native Tools Command Prompt".
First change to the "win/" subdirectory ("<b>cd win</b>") then run
"<b>nmake /f Makefile.msc</b>".<br><br>Alternatively, the batch
file "<b>win\buildmsvc.bat</b>" may be used and it will attempt to
detect and use the latest installed version of MSVC.<br><br>To enable
the optional <a href="https://www.openssl.org/">OpenSSL</a> support,
first <a href="https://www.openssl.org/source/">download the official
source code for OpenSSL</a> and extract it to an appropriately named
"<b>openssl</b>" subdirectory within the local
[/tree?ci=trunk&name=compat | compat] directory then make sure that
some recent <a href="http://www.perl.org/">Perl</a> binaries are
installed locally, and finally run one of the following commands:
<blockquote><pre>
nmake /f Makefile.msc FOSSIL_ENABLE_SSL=1 FOSSIL_BUILD_SSL=1 PERLDIR=C:\full\path\to\Perl\bin
</pre></blockquote>
<blockquote><pre>
buildmsvc.bat FOSSIL_ENABLE_SSL=1 FOSSIL_BUILD_SSL=1 PERLDIR=C:\full\path\to\Perl\bin
</pre></blockquote>
To enable the optional native [./th1.md#tclEval | Tcl integration
feature], run one of the following commands or
add the "FOSSIL_ENABLE_TCL=1" argument to one of the other NMAKE command lines: <blockquote><pre> nmake /f Makefile.msc FOSSIL_ENABLE_TCL=1 </pre></blockquote> <blockquote><pre> buildmsvc.bat FOSSIL_ENABLE_TCL=1 </pre></blockquote> <li><i>Cygwin</i> → The same as other Unix-like systems. It is recommended to configure using: "<b>configure --disable-internal-sqlite</b>", making sure you have the "libsqlite3-devel" , "zlib-devel" and "openssl-devel" packages installed first.</li> </ol> </ol> |
︙ | ︙ | |||
251 252 253 254 255 256 257 | be installed on the local machine. You can get Tcl/Tk from [http://www.activestate.com/activetcl|ActiveState]. </li> <li> To build on older Macs (circa 2002, MacOS 10.2) edit the Makefile generated by configure to add the following lines: | | | | 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 | be installed on the local machine. You can get Tcl/Tk from [http://www.activestate.com/activetcl|ActiveState]. </li> <li> To build on older Macs (circa 2002, MacOS 10.2) edit the Makefile generated by configure to add the following lines: <blockquote><pre> TCC += -DSQLITE_WITHOUT_ZONEMALLOC TCC += -D_BSD_SOURCE TCC += -DWITHOUT_ICONV TCC += -Dsocketlen_t=int TCC += -DSQLITE_MAX_MMAP_SIZE=0 </pre></blockquote> </li> </ul> <h2 id="docker" name="oci">5.0 Building a Docker Container</h2> The information on building Fossil inside an |
︙ | ︙ | |||
409 410 411 412 413 414 415 | along with <tt>--fuzztype</tt>, be sure to check your system's process list to ensure that your <tt>--fuzztype</tt> flag is there. <a id='wasm'></a> <h2>8.0 Building WebAssembly Components</h2> | | | 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 | along with <tt>--fuzztype</tt>, be sure to check your system's process list to ensure that your <tt>--fuzztype</tt> flag is there. <a id='wasm'></a> <h2>8.0 Building WebAssembly Components</h2> As of version 2.19, fossil uses one component built as [https://developer.mozilla.org/en-US/docs/WebAssembly | WebAssembly] a.k.a. WASM. Because compiling WASM code requires non-trivial client-side tooling, the repository includes compiled copies of these pieces. Most Fossil hackers should never need to concern themselves with the WASM parts, but this section describes how to for those who want or need to do so. |
︙ | ︙ | |||
438 439 440 441 442 443 444 | [https://emscripten.org/docs/getting_started/downloads.html] For instructions on keeping the SDK up to date, see: [https://emscripten.org/docs/tools_reference/emsdk.html] | | | | | | | | | | < | 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 | [https://emscripten.org/docs/getting_started/downloads.html] For instructions on keeping the SDK up to date, see: [https://emscripten.org/docs/tools_reference/emsdk.html] Sidebar: getting Emscripten up and running is trivial and painless, at least on Linux systems, but the installer downloads many hundreds of megabytes of tools and dependencies, all of which will be installed under the single SDK directory (as opposed to being installed at the system level). It does, however, require that python3 be installed at the system level and it can optionally make use of a system-level cmake for certain tasks unrelated to how fossil uses the SDK. After installing the SDK, configure the fossil tree with emsdk support: <pre><code>$ ./configure --with-emsdk=/path/to/emsdk ...other options... </code></pre> If the <tt>--with-emsdk</tt> flag is not provided, the configure script will check for the environment variable <tt>EMSDK</tt>, which is one of the standard variables the SDK environment uses. If that variable is found, its value will implicitly be used in place of the missing <tt>--with-emsdk</tt> flag. Thus, if the <tt>emsdk_env.sh</tt> |
︙ | ︙ | |||
481 482 483 484 485 486 487 | build cycle. They are instead explicitly built as described below. From the top of the source tree, all WASM-related components can be built with: <pre><code>$ make wasm</code></pre> | < < < < < < < < < < > > > > > > > > > > | > | > | | | | | | 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 | build cycle. They are instead explicitly built as described below. From the top of the source tree, all WASM-related components can be built with: <pre><code>$ make wasm</code></pre> As of this writing, those parts include: * <tt>extsrc/pikchr.wasm</tt> is a WASM-compiled form of <tt>extsrc/pikchr.c</tt>. * <tt>extsrc/pikchr.js</tt> is JS/WASM glue code generated by Emscripten to give JS code access to the API exported by the WASM file. Sidebar: The file <tt>[/file/extsrc/pikcher-worker.js|extsrc/pikcher-worker.js]</tt> is hand-coded and intended to be loaded as a "Worker" in JavaScript. That file loads the main module and provides an interface via which a main JavaScript thread can communicate with pikchr running in a Worker thread. The file <tt>[/file/src/fossil.page.pikchrshowasm.js|src/fossil.page.pikchrshowasm.js]</tt> implements the [/pikchrshow] app and demonstrates how <tt>pikchr-worker.js</tt> is used. When a new version of <tt>extsrc/pikchr.c</tt> is installed, the files <tt>pikchr.{js,wasm}</tt> will need to be recompiled to account for that. Running <tt>make wasm</tt> will, if the build is set up for the emsdk, recompile those: <pre><code>$ make wasm ./tools/emcc.sh -o extsrc/pikchr.js ... 
$ ls -la extsrc/pikchr.{js,wasm} -rw-rw-r-- 1 stephan stephan 17263 Jun 8 03:59 extsrc/pikchr.js -rw-rw-r-- 1 stephan stephan 97578 Jun 8 03:59 extsrc/pikchr.wasm </code></pre> <blockquote>Sidebar: if that fails with a message along the lines of: <pre><code>setting `EXPORTED_RUNTIME_METHODS` expects `<class 'list'>` but got `<class 'str'>`</code></pre> then the emcc being invoked is too old: emcc changed the format of list-type arguments at some point. The required minimum version is unknown, but any SDK version from May 2022 or later "should" (as of this writing) suffice. Any older version may or may not work. </blockquote> After that succeeds, we need to run the normal build so that those generated files can be compiled in to the fossil binary, accessible via the [/help?cmd=/builtin|/builtin page]: <pre><code>$ make</code></pre> |
︙
Changes to www/caps/login-groups.md.
︙
Login groups have names. A repo can be in only one of these named login
groups at a time.

Trust in login groups is transitive within a single server. Consider
this sequence:

```
$ cd /path/to/A/checkout
$ fossil login-group join --name G ~/museum/B.fossil
$ cd /path/to/C/checkout
$ fossil login-group join ~/museum/B.fossil
```

That creates login group G joining repo A to B, then joins C to B.
Although we didn’t explicitly tie C to A, a successful login on C gets
you into both A and B, within the restrictions set out above.

Changes are transitive in the same way, provided you check that “apply
to all” box on the user edit screen.
︙
Changes to www/caps/ref.html.
︙
<tr id="d">
  <th>d</th>
  <th>n/a</th>
  <td>
    Legacy capability letter from Fossil's forebear
    <a href="http://cvstrac.org/">CVSTrac</a>, which has no useful
    meaning in Fossil due to the nature of its durable Merkle tree
    design. This letter was assigned by default to Developer in repos
    created with Fossil 2.10 or earlier, but it has no effect in
    current or past versions of Fossil; we recommend that you remove it
    in case we ever reuse this letter for another purpose. See
    <a href="https://fossil-scm.org/forum/forumpost/43c78f4bef">this
    post</a> for details.
  </td>
</tr>

<tr id="e">
︙
Changes to www/cgi.wiki.
︙
those options.

<h1>CGI Script Options</h1>

The CGI script used to launch a Fossil server will usually look
something like this:

<blockquote><verbatim>
#!/usr/bin/fossil
repository: /home/www/fossils/myproject.fossil
</verbatim></blockquote>

Of course, pathnames will likely be different. The first line (the
"[wikipedia:/wiki/Shebang_(Unix)|shebang]") always gives the name of
the Fossil executable. Subsequent lines are of the form
"<b>property: argument ...</b>". The remainder of this document
describes the available properties and their arguments.
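A script can also carry more than the one mandatory <b>repository:</b> line. As a sketch only (the paths here are hypothetical, and the optional properties shown, <b>directory:</b>, <b>notfound:</b>, and <b>errorlog:</b>, are among those described in the remainder of this document), a script serving a whole directory of repositories might look like:

<blockquote><verbatim>
#!/usr/bin/fossil
directory: /home/www/fossils
notfound:  http://example.com/no-such-repo.html
errorlog:  /home/www/logs/fossil-cgi.log
</verbatim></blockquote>

With <b>directory:</b> in place of <b>repository:</b>, the URL tail selects which repository file under that directory to serve, and <b>notfound:</b> names the redirect used when no repository matches.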
︙
Changes to www/changes.wiki.
<title>Change Log</title>

<h2 id='v2_23'>Changes for version 2.23 (2023-11-01)</h2>

  *  Add ability to "close" forum threads, such that unprivileged users
     may no longer respond to them. Only administrators can close
     threads or respond to them by default, and the
     [/help?cmd=forum-close-policy|forum-close-policy setting] can be
     used to add that capability to moderators.
︙
Changes to www/chat.md.
# Fossil Chat

## Introduction

As of version 2.14, Fossil supports a developer chatroom feature. The
chatroom provides an ephemeral discussion venue for insiders. Design
goals include:

  *  **Simple but functional** → Fossil chat is designed to provide a
     convenient real-time communication mechanism for geographically
     dispersed developers. Fossil chat is *not* intended as a
     replacement or competitor for IRC, Slack, Discord, Telegram,
     Google Hangouts, etc.
︙
For users with appropriate permissions, simply browse to the
[/chat](/help?cmd=/chat) page to start up a chat session. The default
skin includes a "Chat" entry on the menu bar on wide screens for
people with chat privilege. There is also a "Chat" option on the
[Sitemap page](/sitemap), which means that chat will appear as an
option under the hamburger menu for many [skins](./customskin.md).

As of version 2.17, chat messages are subject to [fossil's full range
of markdown processing](/md_rules). Because chat messages are stored
as-is when they arrive from a client, this change applies retroactively
to messages stored by previous fossil versions.

Files may be sent via chat using the file selection element at the
bottom of the page. If the desktop environment supports it, files may
be dragged and dropped onto that element. Files are not automatically
sent - selection of a file can be cancelled using the Cancel button
which appears only when a file is selected. When the Send button is
pressed, any pending text is submitted along with the selected file.
Image files sent this way will, by default, appear inline in messages,
but each user may toggle that via the settings popup menu, such that
images instead appear as downloadable links. Non-image files always
appear in messages as download links.
### Deletion of Messages Any user may *locally* delete a given message by clicking on the "tab" at the top of the message and clicking the button which appears. Such deletions are local-only, and the messages will reappear if the page is reloaded. The user who posted a given message, or any Admin users, may additionally choose to globally delete a message from the chat record, which deletes it not only from their own browser but also propagates the removal to all connected clients the next time they |
︙
will be included in that list. To switch sounds, tap the "settings"
button.

### <a id='connection'></a> Who's Online?

Because the chat app has to be able to work over transient CGI-based
connections, as opposed to a stable socket connection to the server,
real-time tracking of "who's online" is not feasible. As of version
2.17, chat offers an optional feature, toggleable in the settings,
which can list users who have posted messages in the client's current
list of loaded messages. This is not the same thing as tracking who's
online, but it gives an overview of which users have been active most
recently, noting that "lurkers" (people who post no messages) will not
show up in that list, nor does the chat infrastructure have a way to
track and present those. That list can be used to filter messages on a
specific user by tapping on that user's name, tapping a second time to
remove the filter.

Sidebar: message deletion is a type of message and deletions count
towards updates in the recent activity list (counted for the person who
performed the deletion, not the author of the deleted comment). That
can potentially lead to odd corner cases where a user shows up in the
list but has no messages which are currently visible because they were
deleted, or an admin user who has not posted anything but deleted a
message. That is a known minor cosmetic-only bug with a resolution of
"will not fix."
### <a id="cli"></a> The `fossil chat` Command Type [fossil chat](/help?cmd=chat) from within any open check-out to bring up a chatroom for the project that is in that checkout. The new chat window will attempt to connect to the default sync target for that check-out (the server whose URL is shown by the |
︙
The recommended way to allow robots to send chat messages is to create
a new user on the server for each robot. Give each such robot account
the "C" privilege only. That means that the robot user account will be
able to send chat messages, but not do anything else.

Then, in the program or script that runs the robot, when it wants to
send a chat message, have it run a command like this:

> ~~~~
fossil chat send --remote https://robot:PASSWORD@project.org/fossil \
   --message 'MESSAGE TEXT' --file file-to-attach.txt
~~~~

Substitute the appropriate project URL, robot account name and
password, message text and file attachment, of course.
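A robot typically wraps that command in a small script. The sketch below is illustrative only (the remote URL and credentials are placeholders carried over from the example above); the `FOSSIL=echo` override on the last line makes it a dry run that prints the assembled command instead of contacting a server:

```shell
#!/bin/sh
# Minimal wrapper a robot might use around `fossil chat send`.
# FOSSIL and CHAT_REMOTE are overridable so the same script can be
# dry-run tested; the URL and credentials below are placeholders.
FOSSIL=${FOSSIL:-fossil}
CHAT_REMOTE=${CHAT_REMOTE:-'https://robot:PASSWORD@project.org/fossil'}

notify() {
  "$FOSSIL" chat send --remote "$CHAT_REMOTE" --message "$1"
}

# Dry run: substitute `echo` for the fossil binary.
FOSSIL=echo notify 'build 1234 passed'
```

A CI job would call `notify` with its status text and let `FOSSIL` default to the real binary.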
︙
Fetches the file content associated with a post (one file per post,
maximum). In the UI, this is accessed via links to uploaded files and
via inlined image tags.

Chat messages are stored on the server-side in the CHAT table of the
repository.

> ~~~
CREATE TABLE repository.chat(
  msgid INTEGER PRIMARY KEY AUTOINCREMENT,
  mtime JULIANDAY,  -- Time for this entry - Julianday Zulu
  lmtime TEXT,      -- Client YYYY-MM-DDZHH:MM:SS when message originally sent
  xfrom TEXT,       -- Login of the sender
  xmsg TEXT,        -- Raw, unformatted text of the message
  fname TEXT,       -- Filename of the uploaded file, or NULL
︙
Changes to www/checkin_names.wiki.
<title>Check-in Names</title>

<table align="right" border="1" width="33%" cellpadding="10">
<tr><td>
<h3>Quick Reference</h3>
<ul>
<li> Hash prefix
<li> Branch name
<li> Tag name
<li> Timestamp: <i>YYYY-MM-DD HH:MM:SS</i>
<li> <i>tag-name</i> <big><b>:</b></big> <i>timestamp</i>
<li> <b>root <big>:</big></b> <i>branchname</i>
<li> <b>start <big>:</big></b> <i>branchname</i>
<li> <b>merge-in <big>:</big></b> <i>branchname</i>
<li> Special names:
<ul>
<li> <b>tip</b>
<li> <b>current</b>
<li> <b>next</b>
<li> <b>previous</b> or <b>prev</b>
<li> <b>ckout</b> (<a href='./embeddeddoc.wiki'>embedded docs</a> only)
</ul>
</ul>
</td></tr>
</table>

Many Fossil [/help|commands] and [./webui.wiki | web interface] URLs
accept check-in names as an argument. For example, the
"[/help/info|info]" command accepts an optional check-in name to
identify the specific check-in about which information is desired:

<blockquote>
<tt>fossil info</tt> <i>checkin-name</i>
</blockquote>

You are perhaps reading this page from the following URL:

<blockquote>
https://fossil-scm.org/home/doc/<b>trunk</b>/www/checkin_names.wiki
</blockquote>

The URL above is an example of an [./embeddeddoc.wiki | embedded
documentation] page in Fossil. The bold term of the pathname is a
check-in name that determines which version of the documentation to
display.

Fossil provides a variety of ways to specify a check-in. This document
describes the various methods.
<h2 id="canonical">Canonical Check-in Name</h2>

The canonical name of a check-in is the hash of its
[./fileformat.wiki#manifest | manifest] expressed as a
[./hashes.md | long lowercase hexadecimal number]. For example:

<blockquote><pre>
fossil info e5a734a19a9826973e1d073b49dc2a16aa2308f9
</pre></blockquote>

The full 40 or 64 character hash is unwieldy to remember and type,
though, so Fossil also accepts a unique prefix of the hash, using any
combination of upper and lower case letters, as long as the prefix is
at least 4 characters long. Hence the following commands all accomplish
the same thing as the above:

<blockquote><pre>
fossil info e5a734a19a9
fossil info E5a734A
fossil info e5a7
</pre></blockquote>

Many web interface screens identify check-ins by a 10- or 16-character
prefix of the canonical name.

<h2 id="tags">Tags And Branch Names</h2>

Using a tag or branch name where a check-in name is expected causes
Fossil to choose the most recent check-in with that tag or branch name.
So for example, the most recent check-in that is tagged with "release"
as of this writing is [b98ce23d4fc]. The command:

<blockquote><pre>
fossil info release
</pre></blockquote>

…results in the following output:

<blockquote><pre>
hash:    b98ce23d4fc3b734cdc058ee8a67e6dad675ca13 2020-08-20 13:27:04 UTC
parent:  40feec329163103293d98dfcc2d119d1a16b227a 2020-08-20 13:01:51 UTC
tags:    release, branch-2.12, version-2.12.1
comment: Version 2.12.1 (user: drh)
</pre></blockquote>

There are multiple check-ins that are tagged with "release" but (as of
this writing) the [b98ce23d4fc] check-in is the most recent so it is
the one that is selected.

Note that unlike some other version control systems, a "branch" in
Fossil is not anything special: it is simply a sequence of check-ins
that share a common tag, so the same mechanism that resolves tag names
also resolves branch names.

<a id="tagpfx"></a>
Note also that there can — in theory, if rarely in practice —
be an ambiguity between tag names and canonical names.
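The uniqueness requirement behind prefix names can be illustrated outside of Fossil with plain shell tools. In this sketch the first and last hashes are the examples from this page while the middle one is invented; a prefix resolves only when it matches exactly one known hash, and <tt>grep -i</tt> stands in for Fossil's case-insensitive matching:

```shell
#!/bin/sh
# Count how many known artifact hashes a candidate prefix matches.
# A prefix can identify a check-in only when the count is exactly 1.
hashes='e5a734a19a9826973e1d073b49dc2a16aa2308f9
e5a9921bb1f0b1a1a0ddb9091e7bb4e996231adf
b98ce23d4fc3b734cdc058ee8a67e6dad675ca13'

matches() {
  printf '%s\n' "$hashes" | grep -ic "^$1"
}

matches E5a7   # prints 1: unique, so "e5a7" resolves despite the case change
matches e5a    # prints 2: ambiguous, so Fossil would reject it
```

Fossil itself does this lookup against the repository's BLOB table, of course; the point is only that resolution is a "count the matches" test.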
Suppose, for example, you had a check-in with the canonical name deed28aa99… and you also happened to have tagged a different check-in with "deed2". If you use the "deed2" name, does it choose the canonical name or the tag name? In such cases, you can prefix the tag name with "tag:". For example: <blockquote><tt> fossil info tag:deed2 </tt></blockquote> The "tag:deed2" name will refer to the most recent check-in tagged with "deed2" rather than the check-in whose canonical name begins with "deed2". <h2 id="whole-branches">Whole Branches</h2> |
︙
repo could have release tags like “2020-04-01”, the date the release
was cut, but you could force Fossil to interpret that string as a date
rather than as a tag by passing “date:2020-04-01”.

For an example of how timestamps are useful, consider the homepage for
the Fossil website itself:

<blockquote>
https://fossil-scm.org/home/doc/<b>trunk</b>/www/index.wiki
</blockquote>

The bold component of that URL is a check-in name. To see the stored
content of the Fossil website repository as of January 1, 2009, one has
merely to change the URL to the following:

<blockquote>
https://fossil-scm.org/home/doc/<b>2009-01-01</b>/www/index.wiki
</blockquote>

(Note that this won't roll you back to the <i>skin</i> and other
cosmetic configurations as of that date. It also won't change screens
like the timeline, which has an independent date selector.)

<h2 id="tag-ts">Tag And Timestamp</h2>

A check-in name can also take the form of a tag or branch name followed
by a colon and then a timestamp. The combination means to take the most
recent check-in with the given tag or branch which is not more recent
than the timestamp.
So, for example:

<blockquote><tt>
fossil update trunk:2010-07-01T14:30
</tt></blockquote>

would cause Fossil to update the working check-out to be the most recent check-in on the trunk that is not more recent than 14:30 (UTC) on July 1, 2010.

<h2 id="root">Root Of A Branch</h2>

A branch name that begins with the "<tt>root:</tt>" prefix refers to the last check-in on the parent branch prior to the beginning of the branch. Such a label is useful, for example, in computing all diffs for a single branch. The following example will show all changes in the hypothetical branch "xyzzy":

<blockquote><tt>
fossil diff --from root:xyzzy --to xyzzy
</tt></blockquote>

<a id="merge-in"></a>
That doesn't do what you might expect after you merge the parent branch's changes into the child branch: the above command will include changes made on the parent branch as well. You can solve this by using the prefix "<tt>merge-in:</tt>" instead of "<tt>root:</tt>" to tell Fossil to find the most recent merge-in point for that branch. The resulting diff will then show only the changes in the branch itself, omitting any changes that have already been merged in from the parent branch.

<a id="start"></a>
The prefix "<tt>start:</tt>" gives the first check-in of the named branch.

The prefixes "<tt>root:</tt>", "<tt>start:</tt>", and "<tt>merge-in:</tt>" can be chained: one can say, for example,

<blockquote><tt>
fossil info merge-in:xyzzy:2022-03-01
</tt></blockquote>

to get information about the most recent merge-in point on the branch "xyzzy" that happened on or before March 1, 2022.

<h2 id="special">Special Tags</h2>

The tag "tip" means the most recent check-in. The "tip" tag is practically equivalent to the timestamp "9999-12-31".
This special name works anywhere you can pass a "NAME", such as with <tt>/info</tt> URLs: <blockquote><pre> http://localhost:8080/info/tip </pre></blockquote> There are several other special names, but they only work from within a check-out directory because they are relative to the current checked-out version: * "current": the current checked-out version * "next": the youngest child of the current checked-out version |
︙ | ︙ | |||
<h2 id="examples">Additional Examples</h2>

To view the changes in the most recent check-in prior to the version currently checked out:

<blockquote><pre>
fossil diff --from previous --to current
</pre></blockquote>

Suppose you are in the habit of tagging each release with a "release" tag. Then to see everything that has changed on the trunk since the last release:

<blockquote><pre>
fossil diff --from release --to trunk
</pre></blockquote>

<h2 id="order">Resolution Order</h2>

Fossil currently resolves name strings to artifact hashes in the following order:
︙ | ︙ |
Changes to www/childprojects.wiki.
︙ | ︙ | |||
at the request of the child.

<h2>Creating a Child Project</h2>

To create a new child project, first clone the parent. Then make manual SQL changes to the child repository as follows:

<blockquote><verbatim>
UPDATE config SET name='parent-project-code' WHERE name='project-code';
UPDATE config SET name='parent-project-name' WHERE name='project-name';
INSERT INTO config(name,value)
   VALUES('project-code',lower(hex(randomblob(20))));
INSERT INTO config(name,value)
   VALUES('project-name','CHILD-PROJECT-NAME');
</verbatim></blockquote>

Modify the CHILD-PROJECT-NAME in the last statement to be the name of the child project, of course.

The repository is now a separate project, independent from its parent. Clone the new project to the developers as needed.
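To see concretely what those statements do before touching a real clone, here is an illustrative sketch that runs the same SQL against a scratch in-memory SQLite database holding a mock "config" table. The parent identity values are made up; this is not a real Fossil repository.

```python
# Illustrative sketch only: applies the child-project SQL above to a scratch
# in-memory SQLite database with a mock Fossil "config" table.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE config(name TEXT PRIMARY KEY, value CLOB)")
# Hypothetical parent-project identity:
db.execute("INSERT INTO config(name,value) VALUES('project-code','parentcode123')")
db.execute("INSERT INTO config(name,value) VALUES('project-name','Parent Project')")

db.executescript("""
UPDATE config SET name='parent-project-code' WHERE name='project-code';
UPDATE config SET name='parent-project-name' WHERE name='project-name';
INSERT INTO config(name,value) VALUES('project-code',lower(hex(randomblob(20))));
INSERT INTO config(name,value) VALUES('project-name','CHILD-PROJECT-NAME');
""")

rows = dict(db.execute("SELECT name, value FROM config"))
print(rows["parent-project-code"])  # old identity preserved: parentcode123
print(len(rows["project-code"]))    # 40: a fresh 20-byte code, hex-encoded
```

The old project code is preserved under the "parent-project-code" name, while "project-code" becomes a brand-new random 40-character hex string, which is what makes the child a distinct project.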
︙ | ︙ |
Changes to www/ckout-workflows.md.
︙ | ︙ | |||
## <a id="mcw"></a> Multiple-Checkout Workflow

With Fossil, it is routine to have multiple check-outs from the same repository:

    fossil clone https://example.com/repo /path/to/repo.fossil
    mkdir -p ~/src/my-project/trunk
    cd ~/src/my-project/trunk
    fossil open /path/to/repo.fossil    # implicitly opens “trunk”
    mkdir ../release
    cd ../release
    fossil open /path/to/repo.fossil release
    mkdir ../my-other-branch
    cd ../my-other-branch
    fossil open /path/to/repo.fossil my-other-branch
    mkdir ../scratch
    cd ../scratch
    fossil open /path/to/repo.fossil abcd1234
    mkdir ../test
    cd ../test
    fossil open /path/to/repo.fossil 2019-04-01

Now you have five separate check-out directories, one each for:

*   trunk
*   the latest tagged public release
*   an alternate branch you’re working on
*   a “scratch” directory for experiments you don’t want to do in the other check-out directories; and
︙ | ︙ | |||
71 72 73 74 75 76 77 | Nevertheless, it is possible to work in a more typical Git sort of style, switching between versions in a single check-out directory. #### <a id="idiomatic"></a> The Idiomatic Fossil Way The most idiomatic way is as follows: | | | | | | | | | > | | | | | | | | | 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 | Nevertheless, it is possible to work in a more typical Git sort of style, switching between versions in a single check-out directory. #### <a id="idiomatic"></a> The Idiomatic Fossil Way The most idiomatic way is as follows: fossil clone https://example.com/repo /path/to/repo.fossil mkdir work-dir cd work-dir fossil open /path/to/repo.fossil ...work on trunk... fossil update my-other-branch ...work on your other branch in the same directory... Basically, you replace the `cd` commands in the multiple checkouts workflow above with `fossil up` commands. #### <a id="open"></a> Opening a Repository by URI In Fossil 2.12, we added a feature to simplify the single-worktree use case: mkdir work-dir cd work-dir fossil open https://example.com/repo Now you have “trunk” open in `work-dir`, with the repo file stored as `repo.fossil` in that same directory. Users of Git may be surprised that it doesn’t create a directory for you and that you `cd` into it *before* the clone-and-open step, not after. This is because we’re overloading the “open” command, which already had the behavior of opening into the current working directory. Changing it to behave like `git clone` would therefore make the behavior surprising to Fossil users. (See [our discussions][caod] if you want the full details.) 
#### <a id="clone"></a> Git-Like Clone-and-Open In Fossil 2.14, we added a more Git-like alternative: fossil clone https://fossil-scm.org/fossil cd fossil This results in a `fossil.fossil` repo DB file and a `fossil/` working directory. Note that our `clone URI` behavior does not commingle the repo and check-out, solving our major problem with the Git design. If you want the repo to be named something else, adjust the URL: fossil clone https://fossil-scm.org/fossil/fsl That gets you `fsl.fossil` checked out into `fsl/`. For sites where the repo isn’t served from a subdirectory like this, you might need another form of the URL. For example, you might have your repo served from `dev.example.com` and want it cloned as `my-project`: fossil clone https://dev.example.com/repo/my-project The `/repo` addition is the key: whatever comes after is used as the repository name. [See the docs][clone] for more details. [caod]: https://fossil-scm.org/forum/forumpost/3f143cec74 [clone]: /help?cmd=clone <div style="height:50em" id="this-space-intentionally-left-blank"></div> |
Changes to www/concepts.wiki.
<title>Fossil Concepts</title>
<h1 align="center">Fossil Concepts</h1>

<h2>1.0 Introduction</h2>

[./index.wiki | Fossil] is a
[http://en.wikipedia.org/wiki/Software_configuration_management | software configuration management] system. Fossil is software that is designed to control and track the development of a software project and to record the history
︙ | ︙ | |||
identifier for a blob of data, such as a file. Given any file, it is simple to find the artifact ID for that file. But given an artifact ID, it is computationally intractable to generate a file that will have that same artifact ID. Artifact IDs look something like this:

<blockquote><b>
6089f0b563a9db0a6d90682fe47fd7161ff867c8<br>
59712614a1b3ccfd84078a37fa5b606e28434326<br>
19dbf73078be9779edd6a0156195e610f81c94f9<br>
b4104959a67175f02d6b415480be22a239f1f077<br>
997c9d6ae03ad114b2b57f04e9eeef17dcb82788
</b></blockquote>

When referring to an artifact using Fossil, you can use a unique prefix of the artifact ID that is four characters or longer. This saves a lot of typing. When displaying artifact IDs, Fossil will usually only show the first 10 digits since that is normally enough to uniquely identify a file.
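The prefix rule is easy to picture. Here is an illustrative sketch, in plain Python, of the lookup behavior described above, using the sample artifact IDs shown; the `resolve` helper is hypothetical and merely mimics the "unique prefix of four characters or longer" rule.

```python
# Sketch of unique-prefix resolution over the sample artifact IDs above.
ids = [
    "6089f0b563a9db0a6d90682fe47fd7161ff867c8",
    "59712614a1b3ccfd84078a37fa5b606e28434326",
    "19dbf73078be9779edd6a0156195e610f81c94f9",
    "b4104959a67175f02d6b415480be22a239f1f077",
    "997c9d6ae03ad114b2b57f04e9eeef17dcb82788",
]

def resolve(prefix):
    """Return the one artifact ID starting with prefix, or raise if the
    prefix is unknown or ambiguous."""
    hits = [full for full in ids if full.startswith(prefix)]
    if len(hits) != 1:
        raise ValueError("unknown or ambiguous prefix: %r" % prefix)
    return hits[0]

print(resolve("6089"))       # 6089f0b563a9db0a6d90682fe47fd7161ff867c8
print(resolve("997c")[:10])  # 997c9d6ae0 -- the 10-digit display form
```

An ambiguous or unknown prefix fails rather than guessing, which is also how Fossil behaves when a short name matches more than one artifact.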
︙ | ︙ | |||
an upgrade. Running "all rebuild" never hurts, so when upgrading it is a good policy to run it even if it is not strictly necessary.

To use Fossil, simply type the name of the executable in your shell, followed by one of the various built-in commands and arguments appropriate for that command. For example:

<blockquote><b>
fossil help
</b></blockquote>

In the next section, when we say things like "use the <b>help</b> command" we mean to use the command name "help" as the first token after the name of the Fossil executable, as shown above.

<h2 id="workflow">4.0 Workflow</h2>
︙ | ︙ | |||
An interesting feature of Fossil is that it supports both autosync and manual-merge workflows. The default setting for Fossil is to be in autosync mode. You can change the autosync setting or check the current autosync setting using commands like:

<blockquote>
<b>fossil setting autosync on</b><br>
<b>fossil setting autosync off</b><br>
<b>fossil settings</b>
</blockquote>

By default, Fossil runs with autosync mode turned on. The author finds that projects run more smoothly in autosync mode since autosync helps to prevent pointless forking and merging and helps keep all collaborators working on exactly the same code rather than on their own personal forks of the code. In the author's view, manual-merge mode should be reserved for disconnected operation.
︙ | ︙ |
Changes to www/containers.md.
︙ | ︙ | |||
## 1. Quick Start

Fossil ships a `Dockerfile` at the top of its source tree, [here][DF], which you can build like so:

```
  $ docker build -t fossil .
```

If the image built successfully, you can create a container from it and test that it runs:

```
  $ docker run --name fossil -p 9999:8080/tcp fossil
```

This shows us remapping the internal TCP listening port as 9999 on the host. This feature of OCI runtimes means there’s little point to using the “`fossil server --port`” feature inside the container. We can let Fossil default to 8080 internally, then remap it to wherever we want it on the host instead.
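If the container came up, a quick host-side smoke test (assuming `curl` is available) confirms the server is answering on the remapped port:

```
  $ curl -sS -o /dev/null -w '%{http_code}\n' http://localhost:9999/
```

A status in the 2xx or 3xx range means Fossil is serving; anything else suggests the port mapping or the container itself needs a second look.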
︙ | ︙ | |||
38 39 40 41 42 43 44 | fresh container based on that image. You can pass extra arguments to the first command via the Makefile’s `DBFLAGS` variable and to the second with the `DCFLAGS` variable. (DB is short for “`docker build`”, and DC is short for “`docker create`”, a sub-step of the “run” target.) To get the custom port setting as in second command above, say: | > | > | 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 | fresh container based on that image. You can pass extra arguments to the first command via the Makefile’s `DBFLAGS` variable and to the second with the `DCFLAGS` variable. (DB is short for “`docker build`”, and DC is short for “`docker create`”, a sub-step of the “run” target.) To get the custom port setting as in second command above, say: ``` $ make container-run DCFLAGS='-p 9999:8080/tcp' ``` Contrast the raw “`docker`” commands above, which create an _unversioned_ image called `fossil:latest` and from that a container simply called `fossil`. The unversioned names are more convenient for interactive use, while the versioned ones are good for CI/CD type applications since they avoid a conflict with past versions; it lets you keep old containers around for quick roll-backs while replacing them |
︙ | ︙ | |||
### <a id="repo-inside"></a> 2.1 Storing the Repo Inside the Container

The simplest method is to stop the container if it was running, then say:

```
  $ docker cp /path/to/my-project.fossil fossil:/museum/repo.fossil
  $ docker start fossil
  $ docker exec fossil chown -R 499 /museum
```

That copies the local Fossil repo into the container where the server expects to find it, so that the “start” command causes it to serve from that copied-in file instead. Since it lives atop the immutable base layers, it persists as part of the container proper, surviving restarts.

Notice that the copy command changes the name of the repository
︙ | ︙ | |||
The simple storage method above has a problem: containers are designed to be killed off at the slightest cause, rebuilt, and redeployed. If you do that with the repo inside the container, it gets destroyed, too. The solution is to replace the “run” command above with the following:

```
  $ docker run \
    --publish 9999:8080 \
    --name fossil-bind-mount \
    --volume ~/museum:/museum \
    fossil
```

Because this bind mount maps a host-side directory (`~/museum`) into the container, you don’t need to `docker cp` the repo into the container at all. It still expects to find the repository as `repo.fossil` under that directory, but now both the host and the container can see that repo DB.

Instead of a bind mount, you could set up a separate
︙ | ︙ | |||
#### 2.2.1 <a id="wal-mode"></a>WAL Mode Interactions

You might be aware that OCI containers allow mapping a single file into the container rather than a whole directory. Since Fossil repositories are specially-formatted SQLite databases, you might be wondering why we don’t say things like:

```
  --volume ~/museum/my-project.fossil:/museum/repo.fossil
```

That lets us have a convenient file name for the project outside the container while letting the configuration inside the container refer to the generic “`/museum/repo.fossil`” name. Why should we have to name the repo generically on the outside merely to placate the container?

The reason is, you might be serving that repo with [WAL mode][wal]
︙ | ︙ | |||
278 279 280 281 282 283 284 | granularity beyond the classic Unix ones inside the container, so we drop root’s ability to change them. All together, we recommend adding the following options to your “`docker run`” commands, as well as to any “`docker create`” command that will be followed by “`docker start`”: | > | | | | | | | | | > | 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 | granularity beyond the classic Unix ones inside the container, so we drop root’s ability to change them. All together, we recommend adding the following options to your “`docker run`” commands, as well as to any “`docker create`” command that will be followed by “`docker start`”: ``` --cap-drop AUDIT_WRITE \ --cap-drop CHOWN \ --cap-drop FSETID \ --cap-drop KILL \ --cap-drop MKNOD \ --cap-drop NET_BIND_SERVICE \ --cap-drop NET_RAW \ --cap-drop SETFCAP \ --cap-drop SETPCAP ``` In the next section, we’ll show a case where you create a container without ever running it, making these options pointless. [backoffice]: ./backoffice.md [defcap]: https://docs.docker.com/engine/security/#linux-kernel-capabilities [capchg]: https://stackoverflow.com/a/45752205/142454 |
︙ | ︙ | |||
310 311 312 313 314 315 316 | A secondary benefit falls out of this process for free: it’s arguably the easiest way to build a purely static Fossil binary for Linux. Most modern Linux distros make this [surprisingly difficult][lsl], but Alpine’s back-to-basics nature makes static builds work the way they used to, back in the day. If that’s all you’re after, you can do so as easily as this: | > | | | | > > | > > | > | 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 | A secondary benefit falls out of this process for free: it’s arguably the easiest way to build a purely static Fossil binary for Linux. Most modern Linux distros make this [surprisingly difficult][lsl], but Alpine’s back-to-basics nature makes static builds work the way they used to, back in the day. If that’s all you’re after, you can do so as easily as this: ``` $ docker build -t fossil . $ docker create --name fossil-static-tmp fossil $ docker cp fossil-static-tmp:/bin/fossil . $ docker container rm fossil-static-tmp ``` The result is six or seven megs, depending on the CPU architecture you build for. It’s built stripped. [lsl]: https://stackoverflow.com/questions/3430400/linux-static-linking-is-dead ## 5. <a id="custom" name="args"></a>Customization Points ### <a id="pkg-vers"></a> 5.1 Fossil Version The default version of Fossil fetched in the build is the version in the checkout directory at the time you run it. You could override it to get a release build like so: ``` $ docker build -t fossil --build-arg FSLVER=version-2.20 . ``` Or equivalently, using Fossil’s `Makefile` convenience target: ``` $ make container-image DBFLAGS='--build-arg FSLVER=version-2.20' ``` While you could instead use the generic “`release`” tag here, it’s better to use a specific version number since container builders cache downloaded files, hoping to reuse them across builds. 
If you ask for “`release`” before a new version is tagged and then immediately after, you might expect to get two different tarballs, but because the underlying source tarball URL |
︙ | ︙ | |||
362 363 364 365 366 367 368 | leaving those below it for system users like this Fossil daemon owner. Since it’s typical for these to start at 0 and go upward, we started at 500 and went *down* one instead to reduce the chance of a conflict to as close to zero as we can manage. To change it to something else, say: | > | > > > > | | > | | | | 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 | leaving those below it for system users like this Fossil daemon owner. Since it’s typical for these to start at 0 and go upward, we started at 500 and went *down* one instead to reduce the chance of a conflict to as close to zero as we can manage. To change it to something else, say: ``` $ make container-image DBFLAGS='--build-arg UID=501' ``` This is particularly useful if you’re putting your repository on a separate volume since the IDs “leak” out into the host environment via file permissions. You may therefore wish them to mean something on both sides of the container barrier rather than have “499” appear on the host in “`ls -l`” output. ### 5.3 <a id="cengine"></a>Container Engine Although the Fossil container build system defaults to Docker, we allow for use of any OCI container system that implements the same interfaces. We go into more details about this [below](#light), but for now, it suffices to point out that you can switch to Podman while using our `Makefile` convenience targets unchanged by saying: ``` $ make CENGINE=podman container-run ``` ### 5.4 <a id="config"></a>Fossil Configuration Options You can use this same mechanism to enable non-default Fossil configuration options in your build. 
For instance, to turn on the JSON API and the TH1 docs extension: ``` $ make container-image \ DBFLAGS='--build-arg FSLCFG="--json --with-th1-docs"' ``` If you also wanted [the Tcl evaluation extension](./th1.md#tclEval), that brings us to [the next point](#run). ### 5.5 <a id="run"></a>Elaborating the Run Layer If you want a basic shell environment for temporary debugging of the running container, that’s easily added. Simply change this line in the `Dockerfile`… FROM scratch AS run …to this: FROM busybox AS run Rebuild and redeploy to give your Fossil container a [BusyBox]-based shell environment that you can get into via: $ docker exec -it -u fossil $(make container-version) sh That command assumes you built it via “`make container`” and are therefore using its versioning scheme. You will likely want to remove the `PATH` override in the “RUN” stage when doing this since it’s written for the case where everything is in `/bin`, and that will no longer be the case with a more full-featured |
︙ | ︙ | |||
Let’s say the extension is written in Python. Because this is one of the most popular programming languages in the world, we have many options for achieving this. For instance, there is a whole class of “[distroless]” images that will do this efficiently by changing “`STAGE 2`” in the `Dockerfile` to this:

```
## ---------------------------------------------------------------------
## STAGE 2: Pare that back to the bare essentials, plus Python.
## ---------------------------------------------------------------------

FROM cgr.dev/chainguard/python:latest
USER root
ARG UID=499
ENV PATH "/sbin:/usr/sbin:/bin:/usr/bin"
COPY --from=builder /tmp/fossil /bin/
COPY --from=builder /bin/busybox.static /bin/busybox
RUN [ "/bin/busybox", "--install", "/bin" ]
RUN set -x                                                              \
    && echo "fossil:x:${UID}:${UID}:User:/museum:/false" >> /etc/passwd \
    && echo "fossil:x:${UID}:fossil" >> /etc/group                      \
    && install -d -m 700 -o fossil -g fossil log museum
```

You will also have to add `busybox-static` to the APK package list in STAGE 1 for the `RUN` script at the end of that stage to work, since the [Chainguard Python image][cgimgs] lacks a shell, on purpose. The need to install root-level binaries is why we change `USER` temporarily here.

Build it and test that it works like so:

```
  $ make container-run && docker exec -i $(make container-version) python --version
  3.11.2
```

The compensation for the hassle of using Chainguard over something more general purpose like changing the `run` layer to Alpine and then adding a “`apk add python`” command to the `Dockerfile` is huge: we no longer leave a package manager sitting around inside the container, waiting for some malefactor to figure out how to abuse it.
︙ | ︙ | |||
523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 | default under the theory that you don’t want those services to run until you’ve logged into the GUI as that user. If you find yourself running into this, [enable linger mode](https://www.freedesktop.org/software/systemd/man/loginctl.html).) so I was able to create a unit file called `~/.local/share/systemd/user/alert-sender@.service` with these contents: [Unit] Description=Fossil email alert sender for %I [Service] WorkingDirectory=/home/fossil/museum ExecStart=/home/fossil/bin/alert-sender %I/mail.db Restart=always RestartSec=3 [Install] WantedBy=default.target I was then able to enable email alert forwarding for select repositories after configuring them per [the docs](./alerts.md) by saying: $ systemctl --user daemon-reload $ systemctl --user enable alert-sender@myproject $ systemctl --user start alert-sender@myproject Because this is a parameterized script and we’ve set our repository paths predictably, you can do this for as many repositories as you need to by passing their names after the “`@`” sign in the commands above. ## 6. <a id="light"></a>Lightweight Alternatives to Docker | > > > > | 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 | default under the theory that you don’t want those services to run until you’ve logged into the GUI as that user. If you find yourself running into this, [enable linger mode](https://www.freedesktop.org/software/systemd/man/loginctl.html).) 
so I was able to create a unit file called `~/.local/share/systemd/user/alert-sender@.service` with these contents: ``` [Unit] Description=Fossil email alert sender for %I [Service] WorkingDirectory=/home/fossil/museum ExecStart=/home/fossil/bin/alert-sender %I/mail.db Restart=always RestartSec=3 [Install] WantedBy=default.target ``` I was then able to enable email alert forwarding for select repositories after configuring them per [the docs](./alerts.md) by saying: ``` $ systemctl --user daemon-reload $ systemctl --user enable alert-sender@myproject $ systemctl --user start alert-sender@myproject ``` Because this is a parameterized script and we’ve set our repository paths predictably, you can do this for as many repositories as you need to by passing their names after the “`@`” sign in the commands above. ## 6. <a id="light"></a>Lightweight Alternatives to Docker |
︙ | ︙ | |||
570 571 572 573 574 575 576 | leaving the benefits of containerization to those with bigger budgets. For the sake of simple examples in this section, we’ll assume you’re integrating Fossil into a larger web site, such as with our [Debian + nginx + TLS][DNT] plan. This is why all of the examples below create the container with this option: | > | > | 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 | leaving the benefits of containerization to those with bigger budgets. For the sake of simple examples in this section, we’ll assume you’re integrating Fossil into a larger web site, such as with our [Debian + nginx + TLS][DNT] plan. This is why all of the examples below create the container with this option: ``` --publish 127.0.0.1:9999:8080 ``` The assumption is that there’s a reverse proxy running somewhere that redirects public web hits to localhost port 9999, which in turn goes to port 8080 inside the container. This use of port publishing effectively replaces the use of the “`fossil server --localhost`” option. |
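As a sketch of the host side of that arrangement, the matching nginx fragment might look like the following. The path name is hypothetical and this shows plain HTTP proxying; the [Debian + nginx + TLS][DNT] guide covers the SCGI and TLS variants in detail.

```
  location /myproject/ {
      proxy_pass http://127.0.0.1:9999/;
      proxy_set_header Host $host;
  }
```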
︙ | ︙ | |||
640 641 642 643 644 645 646 | On Ubuntu 22.04, the installation size is about 38 MiB, roughly a tenth the size of Docker Engine. For our purposes here, the only thing that changes relative to the examples at the top of this document are the initial command: | > | | > > | | | | | | | | | | | > > | > > | > > | | | > > | | | | | | | | | | | | | | | | | | | | | | | | | | | | > | 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 | On Ubuntu 22.04, the installation size is about 38 MiB, roughly a tenth the size of Docker Engine. For our purposes here, the only thing that changes relative to the examples at the top of this document are the initial command: ``` $ podman build -t fossil . $ podman run --name fossil -p 9999:8080/tcp fossil ``` Your Linux package repo may have a `podman-docker` package which provides a “`docker`” script that calls “`podman`” for you, eliminating even the command name difference. With that installed, the `make` commands above will work with Podman as-is. 
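From here, assuming the image import and the `.nspawn` file above went in as shown, the standard `machinectl` verbs bring the container up and keep it coming back after reboots:

```
  $ sudo machinectl start myproject
  $ sudo machinectl enable myproject    # autostart at boot
  $ machinectl list                     # confirm it is running
```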
The only difference that matters here is that Podman doesn’t have the
same [default Linux kernel capability set](#caps) as Docker, which
changes the `--cap-drop` flags recommended above to:

```
  $ podman create \
    --name fossil \
    --cap-drop CHOWN \
    --cap-drop FSETID \
    --cap-drop KILL \
    --cap-drop NET_BIND_SERVICE \
    --cap-drop SETFCAP \
    --cap-drop SETPCAP \
    --publish 127.0.0.1:9999:8080 \
    localhost/fossil
  $ podman start fossil
```

[pmmac]:  https://podman.io/getting-started/installation.html#macos
[pmwin]:  https://github.com/containers/podman/blob/main/docs/tutorials/podman-for-windows.md
[Podman]: https://podman.io/
[rl]:     https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tutorial.md
[whatis]: https://podman.io/whatis.html


### 6.3 <a id="nspawn"></a>`systemd-container`

If even the Podman stack is too big for you, the next-best option I’m
aware of is the `systemd-container` infrastructure on modern Linuxes,
available since version 239 or so. Its runtime tooling requires only
about 1.4 MiB of disk space:

```
  $ sudo apt install systemd-container btrfs-tools
```

That command assumes the primary test environment for this guide,
Ubuntu 22.04 LTS with `systemd` 249. For best results,
`/var/lib/machines` should be a btrfs volume, because [`$REASONS`][mcfad].
For CentOS Stream 9 and other Red Hattish systems, you will have to
make several adjustments, which we’ve collected [below](#nspawn-rhel)
to keep these examples clear.

We’ll assume your Fossil repository stores something called
“`myproject`” within `~/museum/myproject/repo.fossil`, named according
to the reasons given [above](#repo-inside). We’ll make consistent use
of this naming scheme in the examples below so that you will be able to
replace the “`myproject`” element of the various file and path names.
If you use [the stock `Dockerfile`][DF] to generate your base image,
`nspawn` won’t recognize it as containing an OS unless you change the
“`FROM scratch AS os`” line at the top of the second stage to something
like this:

```
  FROM gcr.io/distroless/static-debian11 AS os
```

Using that as a base image provides all the files `nspawn` checks for
to determine whether the container is sufficiently close to a Linux VM
for the following step to proceed:

```
  $ make container
  $ docker container export $(make container-version) |
    machinectl import-tar - myproject
```

Next, create `/etc/systemd/nspawn/myproject.nspawn`:

----

```
[Exec]
WorkingDirectory=/
Parameters=bin/fossil server \
    --baseurl https://example.com/myproject \
    --create \
    --jsmode bundled \
    --localhost \
    --port 9000 \
    --scgi \
    --user admin \
    museum/repo.fossil
DropCapability= \
    CAP_AUDIT_WRITE \
    CAP_CHOWN \
    CAP_FSETID \
    CAP_KILL \
    CAP_MKNOD \
    CAP_NET_BIND_SERVICE \
    CAP_NET_RAW \
    CAP_SETFCAP \
    CAP_SETPCAP
ProcessTwo=yes
LinkJournal=no
Timezone=no

[Files]
Bind=/home/fossil/museum/myproject:/museum

[Network]
VirtualEthernet=no
```

----

If you recognize most of that from the `Dockerfile` discussion above,
congratulations, you’ve been paying attention. The rest should also be
clear from context.
︙
on the host for the reasons given [above](#bind-mount).

That being done, we also need a generic `systemd` unit file called
`/etc/systemd/system/fossil@.service`, containing:

----

```
[Unit]
Description=Fossil %i Repo Service
Wants=modprobe@tun.service modprobe@loop.service
After=network.target systemd-resolved.service modprobe@tun.service modprobe@loop.service

[Service]
ExecStart=systemd-nspawn --settings=override --read-only --machine=%i bin/fossil

[Install]
WantedBy=multi-user.target
```

----

You shouldn’t have to change any of this because we’ve given the
`--settings=override` flag, meaning any setting in the nspawn file
overrides the setting passed to `systemd-nspawn`. This arrangement not
only keeps the unit file simple, it allows multiple services to share
the base configuration, varying on a per-repo level through adjustments
to their individual `*.nspawn` files.

You may then start the service in the normal way:

```
  $ sudo systemctl enable fossil@myproject
  $ sudo systemctl start  fossil@myproject
```

You should then find it running on localhost port 9000 per the nspawn
configuration file above, suitable for proxying Fossil out to the
public using nginx via SCGI.
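If nginx is that front-end proxy, the matching SCGI configuration might be sketched as follows. The `/myproject` URL prefix and port 9000 are assumptions carried over from the nspawn file above, not fixed requirements:

```
location /myproject/ {
    include scgi_params;
    # Talk SCGI to the Fossil service listening on localhost:9000
    scgi_pass 127.0.0.1:9000;
    scgi_param SCRIPT_NAME "/myproject";
}
```

The `--baseurl` flag in the nspawn file must agree with whatever public URL this block serves, or Fossil will generate wrong hyperlinks.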
If you aren’t using a front-end proxy and want Fossil exposed to the
world via HTTPS, you might say this instead in the `*.nspawn` file:

```
Parameters=bin/fossil server \
    --cert /path/to/cert.pem \
    --create \
    --jsmode bundled \
    --port 443 \
    --user admin \
    museum/repo.fossil
```

You would also need to un-drop the `CAP_NET_BIND_SERVICE` capability to
allow Fossil to bind to this low-numbered port.

We use the `systemd` template file feature to allow multiple Fossil
servers running on a single machine, each on a different TCP port, as
when proxying them out as subdirectories of a larger site. To add
another project, you must first clone the base “machine” layer:

```
  $ sudo machinectl clone myproject otherthing
```

That will not only create a clone of `/var/lib/machines/myproject` as
`../otherthing`, it will create a matching `otherthing.nspawn` file for
you as a copy of the first one. Adjust its contents to suit, then
enable and start it as above.

[mcfad]: https://www.freedesktop.org/software/systemd/man/machinectl.html#Files%20and%20Directories


### 6.3.1 <a id="nspawn-rhel"></a>Getting It Working on a RHEL Clone

The biggest difference between doing this on OSes like CentOS versus
Ubuntu is that RHEL (thus also its clones) doesn’t ship btrfs in its
kernel, thus ships with no package repositories containing
`mkfs.btrfs`, which [`machinectl`][mctl] depends on for achieving its
various purposes. Fortunately, there are workarounds.
First, the `apt install` command above becomes:

```
  $ sudo dnf install systemd-container
```

Second, you have to hack around the lack of `machinectl import-tar`:

```
  $ rootfs=/var/lib/machines/fossil
  $ sudo mkdir -p $rootfs
  $ docker container export fossil | sudo tar -x -C $rootfs -f -
```

The parent directory path in the `rootfs` variable is important,
because although we aren’t able to use `machinectl` on such systems,
the `systemd-nspawn` developers assume you’re using them together; when
you give `--machine`, it assumes the `machinectl` directory scheme. You
could instead use `--directory`, allowing you to store the rootfs
wherever you like, but why make things difficult? It’s a perfectly sensible
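As a quick sanity check of the extracted tree — my suggestion, not a step from the original procedure — you can run the container’s `fossil` binary directly from the rootfs before wiring up any unit files:

```
  $ sudo systemd-nspawn --quiet --directory=$rootfs bin/fossil version
```

If that prints a Fossil version banner, the export and extraction steps above worked.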
︙
Changes to www/contribute.wiki.
︙
definition of that term is up to the project leader.

<h2>2.0 Submitting Patches</h2>

Suggested changes or bug fixes can be submitted by creating a patch
against the current source tree:

<tt>fossil diff -i > my-change.patch</tt>

Alternatively, you can create a binary patch:

<tt>fossil patch create my-change.db</tt>

Post patches to [https://fossil-scm.org/forum | the forum] or email
them to <a href="mailto:drh@sqlite.org">drh@sqlite.org</a>. Be sure to
describe in detail what the patch does and which version of Fossil it
is written against. It's best to make patches against tip-of-trunk
rather than against past releases.
︙
Changes to www/custom_ticket.wiki.
<title>Customizing The Ticket System</title>
<nowiki>

<h2>Introduction</h2>

This guide will explain how to add the "assigned_to" and "opened_by"
fields to the ticket system in Fossil, as well as making the system
more useful. You must have "admin" access to the repository to
implement these instructions.

<h2>First modify the TICKET table</h2>

<blockquote>
Click on the "Admin" menu, then "Tickets", then "Table". After the
other fields and before the final ")", insert:

<pre>
, assigned_to TEXT, opened_by TEXT
</pre>

And "Apply Changes". You have just added two more fields to the ticket
database!

NOTE: I won't tell you to "Apply Changes" after each step from here on
out.

Now, how do you use these fields?
</blockquote>

<h2>Next add assignees</h2>

<blockquote>
Back to the "Tickets" admin page, and click "Common". Add something
like this:

<pre>
set assigned_choices { unassigned tom dick harriet }
</pre>

Obviously, choose names corresponding to the logins on your system.
The 'unassigned' entry is important, as it prevents you from having a
NULL in that field (which causes problems later when editing).
</blockquote>

<h2>Now modify the 'new ticket' page</h2>

<blockquote>
Back to the "Tickets" admin page, and click "New Ticket Page". This is
a little more tricky.
Edit the top part:

<pre>
  if {[info exists submit]} {
     set status Open
     set opened_by $login
     set assigned_to "unassigned"
     submit_ticket
  }
</pre>

Note the "set opened_by" bit -- that will automatically set the
"opened_by" field to the login name of the bug reporter.

Now, skip to the part with "EMail" and modify it like so:

<pre>
  <th1>enable_output [expr {"$login" eq "anonymous"}]</th1>
  <tr>
  <td align="right">EMail:
  <input type="text" name="private_contact" value="$<private_contact>"
  size="30">
  </td>
  <td><u>Not publicly visible</u>. Used by developers to contact you
  with questions.</td>
  </tr>
  <th1>enable_output 1</th1>
</pre>

This bit of code will get rid of the "email" field entry for logged-in
users. Since we know the user's information, we don't have to ask for
it.

NOTE: it might be good to automatically scoop up the user's email and
put it here.

You might also want to enable people to actually assign the ticket to
a specific person during creation. For this to work, you need to add
the code for "assigned_to" as shown below under the heading "Modify
the 'edit ticket' page". This will give you an additional combobox
where you can choose a person during ticket creation.
</blockquote>

<h2>Modify the 'view ticket' page</h2>

<blockquote>
Look for the text "Contact:" (about halfway through). Then insert
these lines after the closing tr tag and before the "enable_output"
line:

<pre>
  <tr>
  <td align="right">Assigned to:</td><td bgcolor="#d0d0d0">
  $<assigned_to>
  </td>
  <td align="right">Opened by:</td><td bgcolor="#d0d0d0">
  $<opened_by>
  </td>
</pre>

This will add a row which displays these two fields, in the event the
user has <a href="./caps/ref.html#w">ticket "edit" capability</a>.
</blockquote>

<h2>Modify the 'edit ticket' page</h2>

<blockquote>
Before the "Severity:" line, add this:

<pre>
  <tr><td align="right">Assigned to:</td><td>
  <th1>combobox assigned_to $assigned_choices 1</th1>
  </td></tr>
</pre>

That will give you a drop-down list of assignees.
The first argument to the TH1 command 'combobox' is the database field
to which the combobox is associated. The next argument is the list of
choices you want to show in the combobox (and that you specified in
the second step above). The last argument should be 1 for a true
combobox (see the <a href="th1.md#combobox">TH1 documentation</a> for
details).

Now, similar to the previous section, look for "Contact:" and add
this:

<pre>
  <tr><td align="right">Reported by:</td><td>
  <input type="text" name="opened_by" size="40" value="$<opened_by>">
  </td></tr>
</pre>
</blockquote>

<h2>What next?</h2>

<blockquote>
Now you can add custom reports which select based on the person to
whom the ticket is assigned. For example, an "Assigned to me" report
could be:

<pre>
SELECT
  CASE WHEN status IN ('Open','Verified') THEN '#f2dcdc'
       WHEN status='Review'   THEN '#e8e8e8'
       WHEN status='Fixed'    THEN '#cfe8bd'
       WHEN status='Tested'   THEN '#bde5d6'
       WHEN status='Deferred' THEN '#cacae5'
       ELSE '#c8c8c8' END AS 'bgcolor',
  substr(tkt_uuid,1,10) AS '#',
  datetime(tkt_mtime) AS 'mtime',
  type, status, subsystem, title
FROM ticket
WHERE assigned_to=user()
</pre>
</blockquote>
</nowiki>
Changes to www/customgraph.md.
# Customizing the Timeline Graph

Beginning with version 1.33, Fossil gives users and skin authors
significantly more control over the look and feel of the timeline
graph.

## <a id="basic-style"></a>Basic Style Options

Fossil includes several options for changing the graph's style without
having to delve into CSS. These can be found in the details.txt file
of your skin or under Admin/Skins/Details in the web UI.

  * ### `timeline-arrowheads`

    Set this to `0` to hide arrowheads on primary child lines.

  * ### `timeline-circle-nodes`

    Set this to `1` to make check-in nodes circular instead of square.

  * ### `timeline-color-graph-lines`

    Set this to `1` to colorize primary child lines.

  * ### `white-foreground`

    Set this to `1` if your skin uses white (or any light color) text.
    This tells Fossil to generate darker background colors for
    branches.

## <a id="adv-style"></a>Advanced Styling
︙
latter, less obvious type.

## <a id="pos-elems"></a>Positioning Elements

These elements aren't intended to be seen. They're only used to help
position the graph and its visible elements.

  * ### <a id="tl-canvas"></a>`.tl-canvas`

    Set the left and right margins on this class to give the desired
    amount of space between the graph and its adjacent columns in the
    timeline.

    #### Additional Classes

      * `.sel`: See [`.tl-node`](#tl-node) for more information.

  * ### <a id="tl-rail"></a>`.tl-rail`

    Think of rails as invisible vertical lines on which check-in nodes
    are placed. The more simultaneous branches in a graph, the more
    rails required to draw it.

    Setting the `width` property on this class determines the maximum
    spacing between rails. This spacing is automatically reduced as
    the number of rails increases. If you change the `width` of
    `.tl-node` elements, you'll probably need to change this value,
    too.

  * ### <a id="tl-mergeoffset"></a>`.tl-mergeoffset`

    A merge line often runs vertically right beside a primary child
    line. This class's `width` property specifies the maximum spacing
    between the two. Setting this value to `0` will eliminate the
    vertical merge lines. Instead, the merge arrow will extend
    directly off the primary child line.

    As with rail spacing, this is also adjusted automatically as
    needed.
  * ### <a id="tl-nodemark"></a>`.tl-nodemark`

    In the timeline table, the second cell in each check-in row
    contains an invisible div with this class. These divs are used to
    determine the vertical position of the nodes. By setting the
    `margin-top` property, you can adjust this position.

## <a id="vis-elems"></a>Visible Elements

These are the elements you can actually see on the timeline graph: the
nodes, arrows, and lines. Each of these elements may also have
additional classes attached to them, depending on their context.

  * ### <a id="tl-node"></a>`.tl-node`

    A node exists for each check-in in the timeline.

    #### Additional Classes

      * `.leaf`: Specifies that the check-in is a leaf (i.e. that it
        has no children in the same branch).
      * `.merge`: Specifies that the check-in contains a merge.
      * `.sel`: When the user clicks a node to designate it as the
        beginning of a diff, this class is added to both the node
        itself and the [`.tl-canvas`](#tl-canvas) element. The class
        is removed from both elements when the node is clicked again.

  * ### <a id="tl-arrow"></a>`.tl-arrow`

    Arrows point from parent nodes to their children. Technically,
    this class is just for the arrowhead. The rest of the arrow is
    composed of [`.tl-line`](#tl-line) elements.

    There are six additional classes that are used to distinguish the
    different types of arrows. However, only these combinations are
    valid:

      * `.u`: Up arrow that points to a child from its primary parent.
      * `.u.sm`: Smaller up arrow, used when there is limited space
        between parent and child nodes.
      * `.merge.l` or `.merge.r`: Merge arrow pointing either to the
        left or right.
      * `.warp`: A timewarped arrow (always points to the right), used
        when a misconfigured clock makes a check-in appear to have
        occurred before its parent
        ([example](https://www.sqlite.org/src/timeline?c=2010-09-29&nd)).

  * ### <a id="tl-line"></a>`.tl-line`

    Along with arrows, lines connect parent and child nodes.
    Line thickness is determined by the `width` property, regardless
    of whether the line is horizontal or vertical. You can also use
    borders to create special line styles. Here's a CSS snippet for
    making dotted merge lines:

        .tl-line.merge {
          width: 0;
          background: transparent;
          border: 0 dotted #000;
        }
        .tl-line.merge.h {
          border-top-width: 1px;
        }
        .tl-line.merge.v {
          border-left-width: 1px;
        }

    #### Additional Classes

      * `.merge`: A merge line.
      * `.h` or `.v`: Horizontal or vertical.
      * `.warp`: A timewarped line.
︙
Changes to www/customskin.md.
︙
When cloning a repository, the skin of the new repository is
initialized to the skin of the repository from which it was cloned.

# Structure Of A Fossil Web Page

Every HTML page generated by Fossil has the same basic structure:

<blockquote><table border=1 cellpadding=10><tbody>
<tr><td style='background-color:lightgreen;text-align:center;'>
Fossil-Generated HTML Header</td></tr>
<tr><td style='background-color:lightblue;text-align:center;'>Content Header</td></tr>
<tr><td style='background-color:lightgreen;text-align:center;'>
Fossil-Generated Content</td></tr>
<tr><td style='background-color:lightblue;text-align:center;'>Content Footer</td></tr>
<tr><td style='background-color:lightgreen;text-align:center;'>
Fossil-Generated HTML Footer</td></tr>
</tbody></table></blockquote>

The green parts are *usually* generated by Fossil. The blue parts are
things that you, the administrator, get to modify in order to
customize the skin.

Fossil *usually* (but not always - [see below](#override)) generates
the initial HTML Header section of a page.
The generated HTML Header will look something like this:

        <html>
        <head>
        <base href="...">
        <meta http-equiv="Content-Security-Policy" content="....">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
        <title>....</title>
        <link rel="stylesheet" href="..." type="text/css">
        </head>
        <body class="FEATURE">

…where `FEATURE` is either the top-level URL element (e.g. `doc`) or a
feature class that groups multiple URLs under a single name, such as
`forum`, which covers `/forummain`, `/forumpost`, `/forume2`, etc.
This allows per-feature CSS such as

        body.forum div.markdown blockquote {
            margin-left: 10px;
        }

That is, affect HTML `<blockquote>` tags specially only for forum
posts written in Markdown, leaving all other block quotes alone.

In most cases, it is best to leave the Fossil-generated HTML Header
alone. (One exception is when the administrator needs to include links
to additional CSS files.) The configurable part of the skin begins
with the Content Header section, which should follow this template:

        <div class="header">
          ... top banner and menu bar ...
        </div>

Note that the `<div class="header">` and `</div>` tags must be
included in the Content Header text of the skin. In other words, you,
the administrator, need to supply that text as part of your skin
customization.

The Fossil-generated Content section immediately follows the Content
Header. The Content section will look like this:

        <div class="content">
          ... Fossil-generated content here ...
        </div>

After the Content comes the custom Content Footer section, which
should follow this template:

        <div class="footer">
          ... skin-specific stuff here ...
        </div>

As with the Content Header, the template elements of the Content
Footer should appear exactly as they are shown.
Finally, Fossil always adds its own footer (unless overridden) to
close out the generated HTML:

        </body>
        </html>

## <a id="mainmenu"></a>Changing the Main Menu Contents

As of Fossil 2.15, the actual text content of the skin’s main menu is
no longer part of the skin proper if you’re using one of the stock
skins. If you look at the Header section of the skin, you’ll find a
`<div class="mainmenu">` element whose contents are set by a short
[TH1](./th1.md) script from the contents of the **Main Menu** section
of the Setup → Configuration screen.

This feature allows the main menu contents to stay the same across
different skins, so you no longer have to reapply menu customizations
︙
Notice that the `<html>`, `<head>`, and opening `<body>` elements at
the beginning of the document, and the closing `</body>` and `</html>`
elements at the end, are automatically generated by Fossil. This is
recommended. However, for maximum design flexibility, Fossil allows
those elements to be supplied as part of the configurable Content
Header and Content Footer. If the Content Header contains the text
"`<body`", then Fossil assumes that the Content Header and Content
Footer will handle all of the `<html>`, `<head>`, and `<body>` text
themselves, and the Fossil-generated header and footer will be blank.

When overriding the HTML Header in this way, you will probably want to
use some of the [TH1 variables documented below](#vars), such as
`$stylesheet_url`, to avoid hand-writing code that Fossil can generate
for you.

# Designing, Debugging, and Installing A Custom Skin

It is possible to develop a new skin from scratch.
But a better and easier approach is to use one of the existing
built-in skins as a baseline and make incremental modifications,
testing after each step, to obtain the desired result.

The skin is controlled by five files:

<blockquote><dl>
<dt><b>css.txt</b></dt>
<dd>The css.txt file is the text of the CSS for Fossil. Fossil might
add additional CSS elements after the css.txt file, if it sees that
the css.txt omits some CSS components that Fossil needs. But for the
most part, the content of the css.txt is the CSS for the page.</dd>
<dt><b>details.txt</b></dt>
<dd>The details.txt file is a short list of settings that control the
look and feel, mostly of the timeline. The default details.txt file
looks like this:
<blockquote><pre>
pikchr-background:          ""
pikchr-fontscale:           ""
pikchr-foreground:          ""
pikchr-scale:               ""
timeline-arrowheads:        1
timeline-circle-nodes:      1
timeline-color-graph-lines: 1
white-foreground:           0
</pre></blockquote>
The three "timeline-" settings in details.txt control the appearance
of certain aspects of the timeline graph. The number on the right is a
boolean - "1" to activate the feature and "0" to disable it. The
"white-foreground:" setting should be set to "1" if the page has
light-colored text on a darker background, and "0" if the page has
dark text on a light-colored background.

If the "pikchr-foreground" setting (added in Fossil 2.14) is defined
and is not an empty string, then it specifies a foreground color to
use for [pikchr diagrams](./pikchr.md). The default pikchr foreground
color is black, or white if the "white-foreground" boolean is set. The
"pikchr-background" setting does the same for the pikchr diagram
background color. If the "pikchr-fontscale" and "pikchr-scale" values
are not empty strings, then they should be floating point values
(close to 1.0) that specify relative scaling of the fonts in pikchr
diagrams and of other elements of the diagrams, respectively.
</dd>
<dt><b>footer.txt</b> and <b>header.txt</b></dt>
<dd>The footer.txt and header.txt files contain the Content Footer
and Content Header, respectively. Of these, the Content Header is the
most important, as it contains the markup used to generate the banner
and menu bar for each page. Both the footer.txt and header.txt files
are [processed using TH1](#headfoot) prior to being output as part of
the overall web page.</dd>
<dt><b>js.txt</b></dt>
<dd>The js.txt file is optional. It is intended to be javascript. The
complete text of this javascript might be inserted into the Content
Footer, after being processed using TH1, using code like the following
in the "footer.txt" file:
<blockquote><pre>
<script nonce="$nonce">
<th1>styleScript</th1>
</script>
</pre></blockquote>
The js.txt file was originally used to insert javascript that controls
the hamburger menu in the default skin. More recently, the javascript
for the hamburger menu was moved into a separate built-in file. Skins
that use the hamburger menu typically cause the javascript to be
loaded by including the following TH1 code in the "header.txt" file:
<blockquote><pre>
<th1>builtin_request_js hbmenu.js</th1>
</pre></blockquote>
The difference between styleScript and builtin_request_js is that the
styleScript command interprets the file using TH1 and injects the
content directly into the output stream, whereas the
builtin_request_js command inserts the javascript verbatim, and does
so at some unspecified future time, down inside the Fossil-generated
footer. The built-in skins of Fossil originally used the styleScript
command to load the hamburger menu javascript, but as of version 2.15
switched to using the builtin_request_js method. You can use either
approach in custom skins that you write yourself.

Note that the "js.txt" file is *not* automatically inserted into the
generated HTML for a page.
You, the skin designer, must cause the javascript to be inserted by
issuing appropriate TH1 commands in the "header.txt" or "footer.txt"
files.</dd>
</dl></blockquote>

Developing a new skin is simply a matter of creating appropriate
versions of these five control files.

### Skin Development Using The Web Interface

Users with admin privileges can use the Admin/Skin configuration page
︙
did not change. After you have finished work on your skin, the caches
should synchronize with your new design, and you can reactivate your
web browser's cache and take it out of developer mode.

## <a id="headfoot"></a>Header and Footer Processing

The `header.txt` and `footer.txt` control files of a skin are the HTML
text of the Content Header and Content Footer, except that before
being inserted into the output stream, the text is run through a [TH1
interpreter](./th1.md) that might adjust the text as follows:

  * All text within `<th1>...</th1>` is omitted from the output and is
    instead run as a TH1 script. That TH1 script has the opportunity
    to insert new text in place of itself, or to inhibit or enable the
    output of subsequent text.

  * Text of the form "$NAME" or "$<NAME>" is replaced with the value
    of the TH1 variable NAME.
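As a small illustration of the first mechanism, a header fragment like the following — a sketch in the spirit of the stock skins, not taken from them — emits a status line only when a user is logged in, using the `$login` variable described under "TH1 Variables":

```
<div class="status">
<th1>
  # $login only exists for logged-in users, so test before using it
  if {[info exists login]} {
    html "Logged in as $login"
  } else {
    html "Not logged in"
  }
</th1>
</div>
```

Because the `html` command writes into the output stream at the point where the `<th1>` block appeared, the rendered page contains either message in place of the script.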
For example, the first few lines of a typical Content Header will look
like this:

        <div class="header">
          <div class="title"><h1>$<project_name></h1>$<title></div>

After variables are substituted by TH1, that will look more like this:

        <div class="header">
          <div class="title"><h1>Project Name</h1>Page Title</div>

As you can see, two TH1 variable substitutions were done. The same TH1
interpreter is used for both the header and the footer and for all
scripts contained within them both. Hence, any global TH1 variables
that are set by the header are available to the footer.

## <a id="menu"></a>Customizing the ≡ Hamburger Menu

The menu bar of the default skin has an entry to open a drop-down menu
with additional navigation links, represented by the ≡ button (hence
the name "hamburger menu"). The Javascript logic to open and close the
hamburger menu when the button is clicked is usually handled by a
script named "hbmenu.js" that is one of the [built-in resource
files](/test-builtin-files) that are part of Fossil.

The ≡ button for the hamburger menu is added to the menu bar by the
following TH1 commands in the `header.txt` file, right before the menu
bar links:

        html "<a id='hbbtn' href='$home/sitemap'>☰</a>"
        builtin_request_js hbmenu.js

The hamburger button can be repositioned between the other menu links
(but the drop-down menu is always left-aligned with the menu bar), or
it can be removed by deleting the above statements. The "html"
statement inserts the appropriate `<a>` element for the hamburger menu
button (some skins require something slightly different - for example
the ardoise skin wants "`<li><a>`"). The "builtin_request_js
hbmenu.js" command asks Fossil to include the "hbmenu.js" resource
file in the Fossil-generated footer.

The hbmenu.js script requires the following `<div>` element somewhere
in your header, in which to build the hamburger menu.
<div id='hbdrop'></div> Out of the box, the contents of the panel are populated with the [Site Map](/sitemap), but only if the panel does not already contain any HTML elements (that is, not just comments, plain text or non-presentational white space). So the hamburger menu can be customized by replacing the empty `<div id='hbdrop'></div>` element with a menu structure knitted according to the following template: <div id="hbdrop" data-anim-ms="400"> <ul class="columns" style="column-width: 20em; column-count: auto"> <!-- NEW GROUP WITH HEADING LINK --> <li> <a href="$home$index_page">Link: Home</a> <ul> <li><a href="$home/timeline">Link: Timeline</a></li> <li><a href="$home/dir?ci=tip">Link: File List</a></li> </ul> </li> <!-- NEW GROUP WITH HEADING TEXT --> <li> Heading Text <ul> <li><a href="$home/doc/trunk/www/customskin.md">Link: Theming</a></li> <li><a href="$home/doc/trunk/www/th1.md">Link: TH1 Scripts</a></li> </ul> </li> <!-- NEXT GROUP GOES HERE --> </ul> </div> The custom `data-anim-ms` attribute can be added to the panel element to direct the Javascript logic to override the default menu animation duration of 400 ms. A faster animation duration of 80-200 ms may be preferred for smaller menus. The animation is disabled by setting the attribute to `"0"`. ## <a id="vars"></a>TH1 Variables Before expanding the TH1 within the header and footer, Fossil first initializes a number of TH1 variables to values that depend on repository settings and the specific page being generated. * **project_name** - The project_name variable is filled with the name of the project as configured under the Admin/Configuration menu. * **project_description** - The project_description variable is filled with the description of the project as configured under the Admin/Configuration menu. * **title** - The title variable holds the title of the page being generated. The title variable is special in that it is deleted after the header script runs and before the footer script.
This is necessary to avoid a conflict with a variable by the same name used in the ticket-screen scripts. * **baseurl** - The root of the URL namespace for this server. * **secureurl** - The same as $baseurl except that if the scheme is "http:" it is changed to "https:" * **home** - The $baseurl without the scheme and hostname. For example, if the $baseurl is "http://projectX.com/cgi-bin/fossil" then the $home will be just "/cgi-bin/fossil". * **index_page** - The landing page URI as specified by the Admin/Configuration setup page. * **current_page** - The name of the page currently being processed, without the leading "/" and without query parameters. Examples: "timeline", "doc/trunk/README.txt", "wiki". * **csrf_token** - A token used to prevent cross-site request forgery. * **default_csp** - [Fossil’s default CSP](./defcsp.md) unless [overridden by custom TH1 code](./defcsp.md#th1). Useful within the skin for inserting the CSP into a `<meta>` tag within [a custom `<head>` element](#headfoot). * **nonce** - The value of the cryptographic nonce for the request being processed. * **release_version** - The release version of Fossil. Ex: "1.31" * **manifest_version** - A prefix on the check-in hash of the specific version of fossil that is running. Ex: "\[47bb6432a1\]" * **manifest_date** - The date of the source-code check-in for the version of fossil that is running. * **compiler_name** - The name and version of the compiler used to build the fossil executable. * **login** - This variable only exists if the user has logged in. The value is the username of the user. * **stylesheet_url** - A URL for the internal style-sheet maintained by Fossil. * **logo\_image\_url** - A URL for the logo image for this project, as configured on the Admin/Logo page. * **background\_image\_url** - A URL for a background image for this project, as configured on the Admin/Logo page.
All of the above are variables in the sense that either the header or the footer is free to change or erase them. But they should probably be treated as constants. New predefined values are likely to be added in future releases of Fossil. |
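As a small illustration of two of these variables, the relationship between $baseurl and $home described above can be expressed directly. This is a hedged Python sketch (the helper name is mine, not part of Fossil or TH1):

```python
from urllib.parse import urlparse

def home_from_baseurl(baseurl):
    """$home is the $baseurl stripped of its scheme and hostname."""
    return urlparse(baseurl).path

# The example from the variable list above:
assert home_from_baseurl("http://projectX.com/cgi-bin/fossil") == "/cgi-bin/fossil"
```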
︙ | ︙ |
Changes to www/defcsp.md.
︙ | ︙ | |||
21 22 23 24 25 26 27 | bugs that might lead to a vulnerability. ## The Default Restrictions The default CSP used by Fossil is as follows: <pre> | | | | | > | > | | | < | | | | | 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 | bugs that might lead to a vulnerability. ## The Default Restrictions The default CSP used by Fossil is as follows: <pre> default-src 'self' data:; script-src 'self' 'nonce-$nonce'; style-src 'self' 'unsafe-inline'; img-src * data:; </pre> The default is recommended for most installations. However, the site administrators can overwrite this default CSP using the [default-csp setting](/help?cmd=default-csp). For example, CSP restrictions can be completely disabled by setting the default-csp to: <pre> default-src *; </pre> The following sections detail the meaning of the default CSP setting. ### <a id="base"></a> default-src 'self' data: This policy means mixed-origin content isn’t allowed, so you can’t refer to resources on other web domains. Browsers will ignore a link like the one in the following Markdown under our default CSP: ![fancy 3D Fossil logotype](https://i.imgur.com/HalpMgt.png) If you look in the browser’s developer console, you should see a CSP error when attempting to render such a page. The default policy does allow inline `data:` URIs, which means you could [data-encode][de] your image content and put it inline within the document: ![small inline image](data:image/gif;base64,R0lGODlh...) That method is best used for fairly small resources. Large `data:` URIs are hard to read and edit.
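The `data:` URI technique mentioned above is mechanical enough to sketch. This illustrative Python helper (the function name is my own, not a Fossil API) base64-encodes raw bytes into an inline URI; note that the opening bytes of a GIF file ("GIF89a") encode to exactly the "R0lGODlh" prefix shown in the example image above:

```python
import base64

def to_data_uri(raw, mime="image/gif"):
    """Wrap raw bytes in an inline data: URI suitable for Markdown image links."""
    return "data:%s;base64,%s" % (mime, base64.b64encode(raw).decode("ascii"))

# The six-byte GIF signature produces the prefix seen in the example above.
assert to_data_uri(b"GIF89a") == "data:image/gif;base64,R0lGODlh"
```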
There are secondary problems as well: if you put a large image into a Fossil forum post this way, anyone subscribed to email alerts will get a copy of the raw URI text, which can amount to pages and pages of [ugly Base64-encoded text][b64]. For inline images within [embedded documentation][ed], it suffices to store the referred-to files in the repo and then refer to them using repo-relative URLs: ![large inline image](./inlineimage.jpg) This avoids bloating the doc text with `data:` URI blobs. There are many other cases, [covered below](#serving). [b64]: https://en.wikipedia.org/wiki/Base64 [svr]: ./server/ ### <a id="img"></a> img-src * data: As of Fossil 2.15, we don’t restrict the source of inline images at all. You can pull them in from remote systems as well as pull them from within the Fossil repository itself, or use `data:` URIs. If you are certain all images come from only within the repository, you can close off certain risks — tracking pixels, broken image format decoders, system dialog box spoofing, etc. — by changing this to “`img-src 'self'`” possibly followed by “`data:`” if you will also use `data:` URIs. ### <a id="style"></a> style-src 'self' 'unsafe-inline' This policy allows CSS information to come from separate files hosted under the Fossil repo server’s Internet domain. It also allows inline CSS `<style>` tags within the document text. The `'unsafe-inline'` declaration allows CSS within individual HTML elements: <p style="margin-left: 4em">Indented text.</p> As the "`unsafe-`" prefix on the name implies, the `'unsafe-inline'` feature is suboptimal for security. However, there are a few places in the Fossil-generated HTML that benefit from this flexibility and the work-arounds are verbose and difficult to maintain. Furthermore, the harm that can be done with style injections is far less than the harm possible with injected javascript. And so the
︙ | ︙ | |||
171 172 173 174 175 176 177 | offers free Fossil repository hosting to anyone on the Internet, all served under the same `http://chiselapp.com/user/$NAME/$REPO` URL scheme. Any one of those hundreds of repositories could trick you into visiting their repository home page, set to [an HTML-formatted embedded doc page][hfed] via Admin → Configuration → Index Page, with this content: | | | 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 | offers free Fossil repository hosting to anyone on the Internet, all served under the same `http://chiselapp.com/user/$NAME/$REPO` URL scheme. Any one of those hundreds of repositories could trick you into visiting their repository home page, set to [an HTML-formatted embedded doc page][hfed] via Admin → Configuration → Index Page, with this content: <script src="/doc/trunk/bad.js"></script> That script can then do anything allowed in JavaScript to *any other* Chisel repository your browser can access. The possibilities for mischief are *vast*. For just one example, if you have login cookies on four different Chisel repositories, your attacker could harvest the login cookies for all of them through this path if we allowed Fossil to serve JavaScript files under the same CSP policy as we do for CSS files. |
︙ | ︙ | |||
195 196 197 198 199 200 201 | path around this restriction. If you are serving a Fossil repository that has any user you do not implicitly trust to a level that you would willingly run any JavaScript code they’ve provided, blind, you **must not** give the `--with-th1-docs` option when configuring Fossil, because that allows substitution of the [pre-defined `$nonce` TH1 variable](./th1.md#nonce) into [HTML-formatted embedded docs][hfed]: | | | 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 | path around this restriction. If you are serving a Fossil repository that has any user you do not implicitly trust to a level that you would willingly run any JavaScript code they’ve provided, blind, you **must not** give the `--with-th1-docs` option when configuring Fossil, because that allows substitution of the [pre-defined `$nonce` TH1 variable](./th1.md#nonce) into [HTML-formatted embedded docs][hfed]: <script src="/doc/trunk/bad.js" nonce="$nonce"></script> Even with this feature enabled, you cannot put `<script>` tags into Fossil Wiki or Markdown-formatted content, because our HTML generators for those formats purposely strip or disable such tags in the output. Therefore, if you trust those users with check-in rights to provide JavaScript but not those allowed to file tickets, append to wiki articles, etc., you might justify enabling TH1 docs on your repository, |
︙ | ︙ | |||
328 329 330 331 332 333 334 | Changing this setting is the easiest way to set a nonstandard CSP on your site. Because a blank setting tells Fossil to use its hard-coded default CSP, you have to say something like the following to get a repository without content security policy restrictions: | | | 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 | Changing this setting is the easiest way to set a nonstandard CSP on your site. Because a blank setting tells Fossil to use its hard-coded default CSP, you have to say something like the following to get a repository without content security policy restrictions: $ fossil set -R /path/to/served/repo.fossil default-csp 'default-src *' We recommend that, instead of using the command line to change this setting, you do it via the repository’s web interface, in Admin → Settings. Write your CSP rules in the edit box marked "`default-csp`". Do not add hard newlines in that box: the setting needs to be on a single long line. Beware that changes take effect immediately, so be careful with your edits: you could end up locking
︙ | ︙ | |||
363 364 365 366 367 368 369 | `default-csp` setting and uses *that* to inject the value into generated HTML pages in its stock configuration. This means that another way you can override this value is to use the [`th1-setup` hook script](./th1-hooks.md), which runs before TH1 processing happens during skin processing: | | | 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 | `default-csp` setting and uses *that* to inject the value into generated HTML pages in its stock configuration. This means that another way you can override this value is to use the [`th1-setup` hook script](./th1-hooks.md), which runs before TH1 processing happens during skin processing: $ fossil set th1-setup "set default_csp {default-src 'self'}" After [the above](#admin-ui), this is the cleanest method. [thvar]: ./customskin.md#vars |
︙ | ︙ |
Changes to www/delta-manifests.md.
1 2 | # Delta Manifests | < < < < > > > > > > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 | # Delta Manifests This article describes "delta manifests," a special-case form of checkin manifest which is intended to take up far less space than a normal checkin manifest, in particular for repositories with many files. We'll see, however, that the space savings, if indeed there are any, come with some caveats. This article assumes that the reader is at least moderately familiar with Fossil's [artifact file format](./fileformat.wiki), in particular the structure of checkin manifests, and it won't make much sense to readers unfamiliar with that topic. Sidebar: delta manifests are not to be confused with the core [Fossil delta format](./delta_format.wiki). The former is a special-case form of delta which applies *only* to checkin manifests whereas the latter is a general-purpose delta compression which can apply to any Fossil-stored data (including delta manifests). # Background and Motivation of Delta Manifests A checkin manifest includes a list of every file in that checkin. A moderately-sized project can easily have a thousand files, and every checkin manifest will include those thousand files. As of this writing Fossil's own checkins contain 989 files and the manifests are 80kb |
︙ | ︙ |
Changes to www/delta_encoder_algorithm.wiki.
1 | <title>Fossil Delta Encoding Algorithm</title> | | | 1 2 3 4 5 6 7 8 9 | <title>Fossil Delta Encoding Algorithm</title> <nowiki> <h2>Abstract</h2> <p>A key component for the efficient storage of multiple revisions of a file in fossil repositories is the use of delta-compression, i.e. to store only the changes between revisions instead of the whole file.</p> |
︙ | ︙ | |||
105 106 107 108 109 110 111 | to <a href="delta_format.wiki#copyrange">copy a range</a>, or </li> <li>move the window forward one byte. </li> </ul> </p> | > | < | 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 | to <a href="delta_format.wiki#copyrange">copy a range</a>, or </li> <li>move the window forward one byte. </li> </ul> </p> <div style="float:right"> <verbatim type="pikchr" style="float:right"> TARGET: [ down "Target" bold box fill palegreen width 150% height 200% "Processed" GI: box same as first box fill yellow height 25% "Gap → Insert" CC: box same fill orange height 200% "Common → Copy" W: box same as GI fill lightgray width 125% height 200% "Window" bold box same as CC height 125% "" |
︙ | ︙ | |||
131 132 133 134 135 136 137 138 139 140 141 142 143 144 | B1: box fill white B2: box fill orange height 200% B3: box fill white height 200% ] with .nw at 0.75 right of TARGET.ne arrow from TARGET.W.e to ORIGIN.B2.w "Signature" aligned above </verbatim> <p>To make this decision the encoder first computes the hash value for the NHASH bytes in the window and then looks at all the locations in the "origin" which have the same signature. This part uses the hash table created by the pre-processing step to efficiently find these locations.</p> | > | 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 | B1: box fill white B2: box fill orange height 200% B3: box fill white height 200% ] with .nw at 0.75 right of TARGET.ne arrow from TARGET.W.e to ORIGIN.B2.w "Signature" aligned above </verbatim> </div> <p>To make this decision the encoder first computes the hash value for the NHASH bytes in the window and then looks at all the locations in the "origin" which have the same signature. This part uses the hash table created by the pre-processing step to efficiently find these locations.</p> |
︙ | ︙ | |||
214 215 216 217 218 219 220 | and a new byte is shifted in.<p> <h3 id="rhdef">4.1 Definition</h3> <p>Assuming an array Z of NHASH bytes (indexing starting at 0) the hash V is computed via</p> | > | | | > > | | | > | 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 | and a new byte is shifted in.<p> <h3 id="rhdef">4.1 Definition</h3> <p>Assuming an array Z of NHASH bytes (indexing starting at 0) the hash V is computed via</p> <p align=center><table><tr><td> <p><img src="encode1.gif" align="center"></p> <p><img src="encode2.gif" align="center"></p> <p><img src="encode3.gif" align="center"></p> </td></tr></table></p> where A and B are unsigned 16-bit integers (hence the <u>mod</u>), and V is a 32-bit unsigned integer with B as MSB, A as LSB. <h3 id="rhincr">4.2 Incremental recalculation</h3> <p>Assuming an array Z of NHASH bytes (indexing starting at 0) with hash V (and components A and B), the dropped byte <img src="encode4.gif" align="center">, and the new byte <img src="encode5.gif" align="center"> , the new hash can be computed incrementally via: </p> <p align=center><table><tr><td> <p><img src="encode6.gif" align="center"></p> <p><img src="encode7.gif" align="center"></p> <p><img src="encode8.gif" align="center"></p> </td></tr></table></p> <p>For A, the regular sum, it can be seen easily that this is the correct way of recomputing that component.</p> <p>For B, the weighted sum, note first that <img src="encode4.gif" align="center"> has the weight NHASH in the sum, so that is what has to be removed. Then adding in <img src="encode9.gif" align="center"> adds one weight factor to all the other values of Z, and at last adds in <img src="encode5.gif" align="center"> with weight 1, also generating the correct new sum.</p>
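The definition and the incremental update above can be cross-checked with a short executable sketch. This is illustrative Python, not Fossil's C implementation in delta.c (where NHASH is 16): A is the plain sum of the window bytes mod 2^16, B is the weighted sum in which the oldest byte carries weight NHASH and the newest weight 1, and V packs B into the upper 16 bits and A into the lower 16.

```python
NHASH = 16  # window size; 16 in Fossil's delta encoder

def hash_init(window):
    """Compute A (plain sum), B (weighted sum), and the packed hash V."""
    assert len(window) == NHASH
    a = sum(window) % (1 << 16)
    b = sum((NHASH - i) * z for i, z in enumerate(window)) % (1 << 16)
    return a, b, (b << 16) | a

def hash_roll(a, b, dropped, added):
    """Slide the window one byte: remove the oldest byte, shift in a new one."""
    a2 = (a - dropped + added) % (1 << 16)       # plain-sum update
    b2 = (b - NHASH * dropped + a2) % (1 << 16)  # weighted-sum update
    return a2, b2, (b2 << 16) | a2

# Rolling must agree with recomputing the hash from scratch at every offset.
data = bytes(range(40))
a, b, v = hash_init(data[:NHASH])
for k in range(1, 20):
    a, b, v = hash_roll(a, b, data[k - 1], data[k + NHASH - 1])
    assert (a, b, v) == hash_init(data[k:k + NHASH])
```

The loop at the end verifies the incremental derivation in the text: dropping the oldest byte removes its weight-NHASH contribution from B, and adding the new A accounts for the shifted weights plus the new byte at weight 1.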
Changes to www/delta_format.wiki.
︙ | ︙ | |||
188 189 190 191 192 193 194 | The format currently handles only 32 bit integer numbers. They are written base-64 encoded, MSB first, and without leading "0"-characters, except if they are significant (i.e. 0 => "0"). The base-64 encoding uses one character for each 6 bits of the integer to be encoded. The encoding characters are: | | | | | 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 | The format currently handles only 32 bit integer numbers. They are written base-64 encoded, MSB first, and without leading "0"-characters, except if they are significant (i.e. 0 => "0"). The base-64 encoding uses one character for each 6 bits of the integer to be encoded. The encoding characters are: <blockquote><pre> 0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz~ </pre></blockquote> The most significant 6 bits of the integer are encoded by the first character, followed by the next 6 bits, and so on until all non-zero bits of the integer are encoded. The minimum number of encoding characters is used. Note that for integers less than 10, the base-64 coding is an ASCII decimal rendering of the number itself. <h1 id="examples">4.0 Examples</h1> <h2 id="examplesint">4.1 Integer encoding</h2> <table border=1> <tr> <th>Value</th> <th>Encoding</th> </tr> <tr> <td>0</td> <td>0</td>
︙ | ︙ | |||
226 227 228 229 230 231 232 | </tr> </table> <h2 id="examplesdelta">4.2 Delta encoding</h2> An example of a delta using the specified encoding is: | | | | | | 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 | </tr> </table> <h2 id="examplesdelta">4.2 Delta encoding</h2> An example of a delta using the specified encoding is: <table border=1><tr><td><pre> 1Xb 4E@0,2:thFN@4C,6:scenda1B@Jd,6:scenda5x@Kt,6:pieces79@Qt,F: Example: eskil~E@Y0,2zMM3E;</pre> </td></tr></table> This can be taken apart into the following parts: <table border=1> <tr><th>What </th> <th>Encoding </th><th>Meaning </th><th>Details</th></tr> <tr><td>Header</td> <td>1Xb </td><td>Size </td><td> 6246 </td></tr> <tr><td>S-List</td> <td>4E@0, </td><td>Copy </td><td> 270 @ 0 </td></tr> <tr><td> </td> <td>2:th </td><td>Literal </td><td> 2 'th' </td></tr> <tr><td> </td> <td>FN@4C, </td><td>Copy </td><td> 983 @ 268 </td></tr> <tr><td> </td> <td>6:scenda </td><td>Literal </td><td> 6 'scenda' </td></tr> <tr><td> </td> <td>1B@Jd, </td><td>Copy </td><td> 75 @ 1256 </td></tr> <tr><td> </td> <td>6:scenda </td><td>Literal </td><td> 6 'scenda' </td></tr> <tr><td> </td> <td>5x@Kt, </td><td>Copy </td><td> 380 @ 1336 </td></tr> <tr><td> </td> <td>6:pieces </td><td>Literal </td><td> 6 'pieces' </td></tr> <tr><td> </td> <td>79@Qt, </td><td>Copy </td><td> 457 @ 1720 </td></tr> <tr><td> </td> <td>F: Example: eskil</td><td>Literal </td><td> 15 ' Example: eskil'</td></tr> <tr><td> </td> <td>~E@Y0, </td><td>Copy </td><td> 4046 @ 2176 </td></tr> <tr><td>Trailer</td><td>2zMM3E </td><td>Checksum</td><td> -1101438770 </td></tr> </table> The unified diff behind the above delta is <table border=1><tr><td><pre> bluepeak:(761) ~/Projects/Tcl/Fossil/Devel/devel > diff -u ../DELTA/old ../DELTA/new --- ../DELTA/old 2007-08-23 21:14:40.000000000 -0700 +++ ../DELTA/new 2007-08-23 21:14:33.000000000 -0700 @@ -5,7 +5,7 @@ * 
If the server does not have write permission on the database file, or on the directory containing the database file (and
︙ | ︙ | |||
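The integer coding described above is easy to check mechanically. The following is an illustrative Python sketch (the helper names are mine; Fossil's real coder lives in delta.c). Encoding 6246 yields "1Xb", which matches the target-size header of the worked example delta:

```python
CHARSET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz~"

def encode_int(n):
    """Encode a non-negative integer, MSB first, with no redundant leading '0's."""
    if n == 0:
        return "0"
    digits = []
    while n:
        digits.append(CHARSET[n & 0x3F])  # six bits per character
        n >>= 6
    return "".join(reversed(digits))

def decode_int(s):
    """Decode an MSB-first base-64 integer."""
    n = 0
    for c in s:
        n = (n << 6) | CHARSET.index(c)
    return n

assert encode_int(0) == "0"
assert encode_int(6246) == "1Xb"   # the size header in the worked example
assert decode_int("4E") == 270     # length of the example's first copy instruction
```

The checksum trailer of the worked example also round-trips: decode_int("2zMM3E") is 3193528526, which interpreted as a signed 32-bit value is the −1101438770 shown in the breakdown table.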
293 294 295 296 297 298 299 | single file. Allow diffs against any two arbitrary versions, not just diffs against the current check-out. Allow configuration options to replace tkdiff with some other - visual differ of the users choice. + visual differ of the users choice. Example: eskil. * Ticketing interface (expand this bullet) | | > | 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 | single file. Allow diffs against any two arbitrary versions, not just diffs against the current check-out. Allow configuration options to replace tkdiff with some other - visual differ of the users choice. + visual differ of the users choice. Example: eskil. * Ticketing interface (expand this bullet) </pre></td></tr></table> <h1 id="notes">Notes</h1> <ul> <li>Pure text files generate a pure text delta. |
︙ | ︙ |
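The copy/literal instruction stream broken down in the example table above can be split by a small parser. This is a hedged Python sketch of just the body instructions (the function and tuple names are my own; the size header and the ";" checksum trailer are not handled here): `NNN@MMM,` copies NNN bytes from origin offset MMM, and `NNN:text` inserts NNN literal bytes, with every integer in the base-64 coding described earlier.

```python
CHARSET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz~"

def parse_delta_body(s):
    """Split a delta instruction stream into ('copy', n, offset) / ('literal', text)."""
    ops, i = [], 0
    while i < len(s):
        n = 0
        while i < len(s) and s[i] in CHARSET:   # read a base-64 integer, MSB first
            n = (n << 6) | CHARSET.index(s[i])
            i += 1
        if i < len(s) and s[i] == "@":          # copy instruction: NNN@MMM,
            i += 1
            off = 0
            while i < len(s) and s[i] in CHARSET:
                off = (off << 6) | CHARSET.index(s[i])
                i += 1
            i += 1                              # skip the terminating ","
            ops.append(("copy", n, off))
        elif i < len(s) and s[i] == ":":        # literal instruction: NNN:text
            ops.append(("literal", s[i + 1:i + 1 + n]))
            i += 1 + n
        else:
            raise ValueError("malformed delta at offset %d" % i)
    return ops

# The first two instructions of the worked example: copy 270 @ 0, then literal "th".
assert parse_delta_body("4E@0,2:th") == [("copy", 270, 0), ("literal", "th")]
```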
Changes to www/embeddeddoc.wiki.
1 2 3 4 5 6 7 8 | <title>Project Documentation</title> Fossil provides a built-in <a href="wikitheory.wiki">wiki</a> that can be used to store the documentation for a project. This is sufficient for many projects. If your project is well-served by wiki documentation, then you need read no further. | > | 1 2 3 4 5 6 7 8 9 | <title>Project Documentation</title> <h1 align="center">Project Documentation</h1> Fossil provides a built-in <a href="wikitheory.wiki">wiki</a> that can be used to store the documentation for a project. This is sufficient for many projects. If your project is well-served by wiki documentation, then you need read no further. |
︙ | ︙ | |||
27 28 29 30 31 32 33 | <h1>1.0 Fossil Support For Embedded Documentation</h1> The fossil web interface supports embedded documentation using the "/doc" page. To access embedded documentation, one points a web browser to a fossil URL of the following form: | | | | 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 | <h1>1.0 Fossil Support For Embedded Documentation</h1> The fossil web interface supports embedded documentation using the "/doc" page. To access embedded documentation, one points a web browser to a fossil URL of the following form: <blockquote> <i><baseurl></i><big><b>/doc/</b></big><i><version></i><big><b>/</b></big><i><filename></i> </blockquote> The <i><baseurl></i> is the main URL used to access the fossil web server. For example, the <i><baseurl></i> for the fossil project itself is [https://fossil-scm.org/home]. If you launch the web server using the "[/help?cmd=ui|fossil ui]" command line, then the <i><baseurl></i> is usually <b>http://localhost:8080/</b>. |
︙ | ︙ | |||
136 137 138 139 140 141 142 | Hyperlinks in Markdown and HTML embedded documents can reference the root of the Fossil repository using the special text "$ROOT" at the beginning of a URL. For example, a Markdown hyperlink to the Markdown formatting rules might be written in the embedded document like this: | | | | | | | | | | | 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 | Hyperlinks in Markdown and HTML embedded documents can reference the root of the Fossil repository using the special text "$ROOT" at the beginning of a URL. For example, a Markdown hyperlink to the Markdown formatting rules might be written in the embedded document like this: <nowiki><pre> [Markdown formatting rules]($ROOT/wiki_rules) </pre></nowiki> Depending on how the Fossil server is configured, that hyperlink might be rendered like one of the following: <nowiki><pre> <a href="/wiki_rules">Markdown formatting rules</a> <a href="/cgi-bin/fossil/wiki_rules">Markdown formatting rules</a> </pre></nowiki> So, in other words, the "$ROOT" text is converted into whatever the "<baseurl>" is for the document. This substitution works for HTML and Markdown documents. It does not work for Wiki embedded documents, since with Wiki you can just begin a URL with "/" and it automatically knows to prepend the $ROOT. <h2>2.2 "$CURRENT" In "/doc/" Hyperlinks</h2> Similarly, URLs of the form "/doc/$CURRENT/..." have the check-in hash of the check-in currently being viewed substituted in place of the "$CURRENT" text. This feature, in combination with the "$ROOT" substitution above, allows an absolute path to be used for hyperlinks.
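The "$ROOT" rewriting described above amounts to a simple textual substitution at render time. This is an oversimplified Python sketch (the helper is my own invention, not Fossil's actual renderer, which operates on the generated hyperlinks rather than on raw Markdown):

```python
def expand_root(doc_text, base_path):
    """Replace $ROOT at the start of link targets with the server's base path."""
    return doc_text.replace("($ROOT/", "(" + base_path + "/")

md = "[Markdown formatting rules]($ROOT/wiki_rules)"
assert expand_root(md, "/cgi-bin/fossil") == \
    "[Markdown formatting rules](/cgi-bin/fossil/wiki_rules)"
```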
For example, if an embedded document wanted to reference some other document in a separate file named "www/otherdoc.md", it could use a URL like this: <nowiki><pre> [Other Document]($ROOT/doc/$CURRENT/www/otherdoc.md) </pre></nowiki> As with "$ROOT", this substitution only works for Markdown and HTML documents. For Wiki documents, you would need to use a relative URL. <h2 id="th1">2.3 TH1 Documents</h2> Fossil will substitute the value of [./th1.md | TH1 expressions] within
︙ | ︙ | |||
198 199 200 201 202 203 204 | This file that you are currently reading is an example of embedded documentation. The name of this file in the fossil source tree is "<b>www/embeddeddoc.wiki</b>". You are perhaps looking at this file using the URL: | | | | | | > > | | > | 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 | This file that you are currently reading is an example of embedded documentation. The name of this file in the fossil source tree is "<b>www/embeddeddoc.wiki</b>". You are perhaps looking at this file using the URL: [https://fossil-scm.org/home/doc/trunk/www/embeddeddoc.wiki]. The first part of this path, the "[https://fossil-scm.org/home]", is the base URL. You might have originally typed: [https://fossil-scm.org/]. The web server at the fossil-scm.org site automatically redirects such links by appending "home". The "home" file on fossil-scm.org is really a [./server/any/cgi.md|CGI script] which runs the fossil web service in CGI mode. The "home" CGI script looks like this: <blockquote><pre> #!/usr/bin/fossil repository: /fossil/fossil.fossil </pre></blockquote> This is one of the many ways to set up a <a href="./server/">Fossil server</a>. The "<b>/trunk/</b>" part of the URL tells fossil to use the documentation files from the most recent trunk check-in. If you wanted to see an historical version of this document, you could substitute the name of a check-in for "<b>/trunk/</b>". For example, to see the version of this document associated with check-in [9be1b00392], simply replace the "<b>/trunk/</b>" with "<b>/9be1b00392/</b>". You can also substitute the symbolic name for a particular version or branch. 
For example, you might replace "<b>/trunk/</b>" with "<b>/experimental/</b>" to get the latest version of this document in the "experimental" branch. The symbolic name can also be a date and time string in any of the following formats:</p> <ul> <li> <i>YYYY-MM-DD</i> <li> <i>YYYY-MM-DD</i><b>T</b><i>HH:MM</i> <li> <i>YYYY-MM-DD</i><b>T</b><i>HH:MM:SS</i> </ul> When the symbolic name is a date and time, fossil shows the version of the document that was most recently checked in as of the date and time specified. So, for example, to see what the fossil website looked like at the beginning of 2010, enter: <blockquote> <a href="/doc/2010-01-01/www/index.wiki"> https://fossil-scm.org/home/doc/<b>2010-01-01</b>/www/index.wiki </a> </blockquote> The file that encodes this document is stored in the fossil source tree under the name "<b>www/embeddeddoc.wiki</b>" and so that name forms the last part of the URL for this document. As I sit writing this documentation file, I am testing my work by running the "<b>fossil ui</b>" command line and viewing <b>http://localhost:8080/doc/ckout/www/embeddeddoc.wiki</b> in Firefox. I am doing this even though I have not yet checked in the "<b>www/embeddeddoc.wiki</b>" file for the first time. Using the special "<b>ckout</b>" version identifier on the "<b>/doc</b>" page it is easy to make multiple changes to multiple files and see how they all look together before committing anything to the repository.
Changes to www/encryptedrepos.wiki.
1 | <title>How To Use Encrypted Repositories</title> | < | < | | < < | | < | | | | < | | < < | | < < | | < < | | < > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 | <title>How To Use Encrypted Repositories</title> <h2>Introduction</h2><blockquote> Fossil can be compiled so that it works with encrypted repositories using the [https://www.sqlite.org/see/doc/trunk/www/readme.wiki|SQLite Encryption Extension]. This technical note explains the process. </blockquote> <h2>Building An Encryption-Enabled Fossil</h2><blockquote> The SQLite Encryption Extension (SEE) is proprietary software and requires [https://sqlite.org/purchase/see|purchasing a license]. Assuming you have an SEE license, the first step of compiling Fossil to use SEE is to create an SEE-enabled version of the SQLite database source code. This alternative SQLite database source file should be called "sqlite3-see.c" and should be placed in the extsrc/ subfolder of the Fossil sources, right beside the public-domain "sqlite3.c" source file. Also make a copy of the SEE-enabled "shell.c" file, renamed as "shell-see.c", and place it in the extsrc/ subfolder beside the original "shell.c". Add the --with-see command-line option to the configuration script to enable the use of SEE on unix-like systems. <blockquote><pre> ./configure --with-see; make </pre></blockquote> To build for Windows using MSVC, add the "USE_SEE=1" argument to the "nmake" command line. <blockquote><pre> nmake -f makefile.msc USE_SEE=1 </pre></blockquote> </blockquote> <h2>Using Encrypted Repositories</h2><blockquote> Any Fossil repository whose filename ends with ".efossil" is taken to be an encrypted repository. Fossil will prompt for the encryption password and attempt to open the repository database using that password.
Every invocation of fossil on an encrypted repository requires retyping the encryption password. To avoid excess password typing, consider using the "fossil shell" command which prompts for the password just once, then reuses it for each subsequent Fossil command entered at the prompt. On Windows, the "fossil server", "fossil ui", and "fossil shell" commands do not (currently) work on an encrypted repository. </blockquote> <h2>Additional Security</h2><blockquote> Use the FOSSIL_SECURITY_LEVEL environment variable for additional protection. <blockquote><pre> export FOSSIL_SECURITY_LEVEL=1 </pre></blockquote> A setting of 1 or greater prevents fossil from trying to remember the previous sync password. <blockquote><pre> export FOSSIL_SECURITY_LEVEL=2 </pre></blockquote> A setting of 2 or greater causes all password prompts to be preceded by a random translation matrix similar to the following: <blockquote><pre> abcde fghij klmno pqrst uvwyz qresw gjymu dpcoa fhkzv inlbt </pre></blockquote> When entering the password, the user must substitute the letter on the second line that corresponds to the letter on the first line. Uppercase substitutes for uppercase inputs, and lowercase substitutes for lowercase inputs. Letters that are not in the translation matrix (digits, punctuation, and "x") are not modified. For example, given the translation matrix above, if the password is "pilot-9crazy-xube", then the user must type "fmpav-9ekqtb-xirw". This simple substitution cypher helps prevent password capture by keyloggers. </blockquote>
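The translation-matrix scheme is easy to model. This is an illustrative Python sketch (the helper name is mine, not code from Fossil) wired to the example matrix above; it preserves case and passes through digits, punctuation, and the unlisted letter "x":

```python
TOP    = "abcdefghijklmnopqrstuvwyz"   # note: "x" is intentionally absent
BOTTOM = "qreswgjymudpcoafhkzvinlbt"

def translate(password):
    """Apply the example translation matrix to a password, preserving case."""
    out = []
    for ch in password:
        low = ch.lower()
        if low in TOP:
            sub = BOTTOM[TOP.index(low)]
            out.append(sub.upper() if ch.isupper() else sub)
        else:
            out.append(ch)   # digits, punctuation, and "x" pass through unchanged
    return "".join(out)

# The worked example from the text above:
assert translate("pilot-9crazy-xube") == "fmpav-9ekqtb-xirw"
```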
Changes to www/event.wiki.
︙
There is a hyperlink under the /wikihelp menu that can be used to create
new technotes.  And there is a submenu hyperlink on technote displays for
editing existing technotes.

Technotes can also be created using the <b>wiki create</b> command:

<blockquote>
<b>
fossil wiki create TestTechnote -t now --technote-bgcolor lightgreen technote.md<br>
<tt>Created new tech note 2021-03-15 13:05:56</tt><br>
</b>
</blockquote>

This command inserts a light green technote in the timeline at
2021-03-15 13:05:56, with the contents of file <b>technote.md</b> and
comment "TestTechnote".  Specifying a different time using
<b>-t DATETIME</b> will insert the technote at the specified timestamp
location in the timeline.  Different technotes can have the same
timestamp.  The first argument to create, <b>TECHNOTE-COMMENT</b>, is the
title text for the technote that appears in the timeline.

To view all technotes, use the <b>wiki ls</b> command:

<blockquote>
<b>
fossil wiki ls --technote --show-technote-ids<br>
<tt>z739263a134bf0da1d28e939f4c4367f51ef4c51 2020-12-19 13:20:19</tt><br>
<tt>e15a918a8bed71c2ac091d74dc397b8d3340d5e1 2018-09-22 17:40:10</tt><br>
</b>
</blockquote>

A technote ID is the UUID of the technote.

To view an individual technote, use the <b>wiki export</b> command:

<blockquote>
<b>
fossil wiki export --technote version-2.16<br>
Release Notes 2021-07-02

This note describes changes in the Fossil snapshot for ...
</b>
</blockquote>

The <b>-t|--technote</b> option to the <b>export</b> subcommand takes one
of three identifiers: <b>DATETIME</b>; <b>TECHNOTE-ID</b>; and <b>TAG</b>.
See the [/help?cmd=wiki | wiki help] for specifics.

Users must have check-in privileges (permission "i") in order to create
or edit technotes.  In addition, users must have create-wiki
︙
Changes to www/faq.tcl.
︙
faq {
  What GUIs are available for fossil?
} {
  The fossil executable comes with a [./webui.wiki | web-based GUI] built
  in.  Just run:

  <blockquote>
  <b>fossil [/help/ui|ui]</b> <i>REPOSITORY-FILENAME</i>
  </blockquote>

  And your default web browser should pop up and automatically point
  to the fossil interface.  (Hint:  You can omit
  the <i>REPOSITORY-FILENAME</i> if you are within an open check-out.)
}

faq {
︙
  When you are checking in a new change using the
  <b>[/help/commit|commit]</b> command, you can add the option
  "--branch <i>BRANCH-NAME</i>" to make the new check-in be the first
  check-in for a new branch.

  If you want to create a new branch whose initial content is the
  same as an existing check-in, use this command:

  <blockquote>
  <b>fossil [/help/branch|branch] new</b> <i>BRANCH-NAME BASIS</i>
  </blockquote>

  The <i>BRANCH-NAME</i> argument is the name of the new branch and the
  <i>BASIS</i> argument is the name of the check-in that the branch splits
  off from.

  If you already have a fork in your check-in tree and you want to convert
  that fork to a branch, you can do this from the web interface.
︙
  "--tag <i>TAGNAME</i>" command-line option.  You can repeat the --tag
  option to give a check-in multiple tags.  Tags need not be unique.  So,
  for example, it is common to give every released version a "release"
  tag.

  If you want to add a tag to an existing check-in, you can use the
  <b>[/help/tag|tag]</b> command.  For example:

  <blockquote>
  <b>fossil [/help/branch|tag] add</b> <i>TAGNAME</i> <i>CHECK-IN</i>
  </blockquote>

  The CHECK-IN in the previous line can be any
  [./checkin_names.wiki | valid check-in name format].

  You can also add (and remove) tags from a check-in using the
  [./webui.wiki | web interface].  First locate the check-in that you
  want to tag on the timeline, then click on the link to go to the detailed
︙
  See the article on [./shunning.wiki | "shunning"] for details.
}

faq {
  How do I make a clone of the fossil self-hosting repository?
} {
  Any of the following commands should work:

  <blockquote><pre>
  fossil [/help/clone|clone]  https://fossil-scm.org/  fossil.fossil
  fossil [/help/clone|clone]  https://www2.fossil-scm.org/  fossil.fossil
  fossil [/help/clone|clone]  https://www3.fossil-scm.org/site.cgi  fossil.fossil
  </pre></blockquote>

  Once you have the repository cloned, you can open a local check-out
  as follows:

  <blockquote><pre>
  mkdir src; cd src; fossil [/help/open|open] ../fossil.fossil
  </pre></blockquote>

  Thereafter you should be able to keep your local check-out up to date
  with the latest code in the public repository by typing:

  <blockquote><pre>
  fossil [/help/update|update]
  </pre></blockquote>
}

faq {
  How do I import or export content from and to other version control
  systems?
} {
  Please see [./inout.wiki | Import And Export]
}

#############################################################################
# Code to actually generate the FAQ
#
puts "<title>Fossil FAQ</title>"
puts "<h1 align=\"center\">Frequently Asked Questions</h1>\n"
puts "Note: See also <a href=\"qandc.wiki\">Questions and Criticisms</a>.\n"
puts {<ol>}
for {set i 1} {$i<$cnt} {incr i} {
  puts "<li><a href=\"#q$i\">[lindex $faq($i) 0]</a></li>"
}
puts {</ol>}
puts {<hr>}
for {set i 1} {$i<$cnt} {incr i} {
  puts "<p id=\"q$i\"><b>($i) [lindex $faq($i) 0]</b></p>\n"
  set body [lindex $faq($i) 1]
  regsub -all "\n *" [string trim $body] "\n" body
  puts "<blockquote>$body</blockquote></li>\n"
}
puts {</ol>}
Changes to www/faq.wiki.
<title>Fossil FAQ</title>

<h1 align="center">Frequently Asked Questions</h1>

Note: See also <a href="qandc.wiki">Questions and Criticisms</a>.
<ol>
<li><a href="#q1">What GUIs are available for fossil?</a></li>
<li><a href="#q2">What is the difference between a "branch" and a "fork"?</a></li>
<li><a href="#q3">How do I create a new branch?</a></li>
<li><a href="#q4">How do I tag a check-in?</a></li>
<li><a href="#q5">How do I create a private branch that won't get pushed
    back to the main repository.</a></li>
<li><a href="#q6">How can I delete inappropriate content from my fossil
    repository?</a></li>
<li><a href="#q7">How do I make a clone of the fossil self-hosting
    repository?</a></li>
<li><a href="#q8">How do I import or export content from and to other
    version control systems?</a></li>
</ol>
<hr>

<p id="q1"><b>(1) What GUIs are available for fossil?</b></p>

<blockquote>The fossil executable comes with a
[./webui.wiki | web-based GUI] built in.  Just run:

<blockquote>
<b>fossil [/help/ui|ui]</b> <i>REPOSITORY-FILENAME</i>
</blockquote>

And your default web browser should pop up and automatically point
to the fossil interface.  (Hint:  You can omit
the <i>REPOSITORY-FILENAME</i> if you are within an open
check-out.)</blockquote></li>

<p id="q2"><b>(2) What is the difference between a "branch" and a
"fork"?</b></p>

<blockquote>This is a big question - too big to answer in a FAQ.  Please
read the <a href="branching.wiki">Branching, Forking, Merging, and
Tagging</a> document.</blockquote></li>

<p id="q3"><b>(3) How do I create a new branch?</b></p>

<blockquote>There are lots of ways:

When you are checking in a new change using the
<b>[/help/commit|commit]</b> command, you can add the option
"--branch <i>BRANCH-NAME</i>" to make the new check-in be the first
check-in for a new branch.
If you want to create a new branch whose initial content is the
same as an existing check-in, use this command:

<blockquote>
<b>fossil [/help/branch|branch] new</b> <i>BRANCH-NAME BASIS</i>
</blockquote>

The <i>BRANCH-NAME</i> argument is the name of the new branch and the
<i>BASIS</i> argument is the name of the check-in that the branch splits
off from.

If you already have a fork in your check-in tree and you want to convert
that fork to a branch, you can do this from the web interface.
First locate the check-in that you want to be the initial check-in of
your branch on the timeline and click on its link so that you are on the
<b>ci</b> page.  Then find the "<b>edit</b>" link (near the "Commands:"
label) and click on that.  On the "Edit Check-in" page, check the box
beside "Branching:" and fill in the name of your new branch to the right
and press the "Apply Changes" button.</blockquote></li>

<p id="q4"><b>(4) How do I tag a check-in?</b></p>

<blockquote>There are several ways:

When you are checking in a new change using the
<b>[/help/commit|commit]</b> command, you can add a tag to that check-in
using the "--tag <i>TAGNAME</i>" command-line option.  You can repeat the
--tag option to give a check-in multiple tags.  Tags need not be unique.
So, for example, it is common to give every released version a "release"
tag.

If you want to add a tag to an existing check-in, you can use the
<b>[/help/tag|tag]</b> command.  For example:

<blockquote>
<b>fossil [/help/branch|tag] add</b> <i>TAGNAME</i> <i>CHECK-IN</i>
</blockquote>

The CHECK-IN in the previous line can be any
[./checkin_names.wiki | valid check-in name format].

You can also add (and remove) tags from a check-in using the
[./webui.wiki | web interface].  First locate the check-in that you
want to tag on the timeline, then click on the link to go to the detailed
information page for that check-in.  Then find the "<b>edit</b>" link
(near the "Commands:" label) and click on that.
There are controls on the edit page that allow new tags to be added and
existing tags to be removed.</blockquote></li>

<p id="q5"><b>(5) How do I create a private branch that won't get pushed
back to the main repository.</b></p>

<blockquote>Use the <b>--private</b> command-line option on the
<b>commit</b> command.  The result will be a check-in which exists on
your local repository only and is never pushed to other repositories.
All descendants of a private check-in are also private.

Unless you specify something different using the <b>--branch</b> and/or
<b>--bgcolor</b> options, the new private check-in will be put on a
branch named "private" with an orange background color.

You can merge from the trunk into your private branch in order to keep
your private branch in sync with the latest changes on the trunk.  Once
you have everything in your private branch the way you want it, you can
then merge your private branch back into the trunk and push.  Only the
final merge operation will appear in other repositories.  It will seem as
if all the changes that occurred on your private branch occurred in a
single check-in.  Of course, you can also keep your branch private
forever simply by not merging the changes in the private branch back
into the trunk.
[./private.wiki | Additional information]</blockquote></li>

<p id="q6"><b>(6) How can I delete inappropriate content from my fossil
repository?</b></p>

<blockquote>See the article on [./shunning.wiki | "shunning"] for
details.</blockquote></li>

<p id="q7"><b>(7) How do I make a clone of the fossil self-hosting
repository?</b></p>

<blockquote>Any of the following commands should work:

<blockquote><pre>
fossil [/help/clone|clone]  https://fossil-scm.org/  fossil.fossil
fossil [/help/clone|clone]  https://www2.fossil-scm.org/  fossil.fossil
fossil [/help/clone|clone]  https://www3.fossil-scm.org/site.cgi  fossil.fossil
</pre></blockquote>

Once you have the repository cloned, you can open a local check-out
as follows:

<blockquote><pre>
mkdir src; cd src; fossil [/help/open|open] ../fossil.fossil
</pre></blockquote>

Thereafter you should be able to keep your local check-out up to date
with the latest code in the public repository by typing:

<blockquote><pre>
fossil [/help/update|update]
</pre></blockquote></blockquote></li>

<p id="q8"><b>(8) How do I import or export content from and to other
version control systems?</b></p>

<blockquote>Please see [./inout.wiki | Import And Export]</blockquote></li>

</ol>
Changes to www/fileedit-page.md.
︙
lost on a page reload. How that is done is completely dependent on the
3rd-party editor widget, but it generically looks something like:

```
myCustomWidget.on('eventName', ()=>fossil.page.notifyOfChange());
```

(This feature requires fossil version 2.13 or later. In 2.12 it is
possible to do this but requires making use of a "leaky abstraction".)

Lastly, if the 3rd-party editor does *not* hide or remove the native
editor widget, and does not inject itself into the DOM on the caller's
behalf, we can replace the native widget with the 3rd-party one with:

```javascript
fossil.page.replaceEditorWidget(yourNewWidgetElement);
```
︙
Changes to www/fileformat.wiki.
<title>Fossil File Formats</title>

<h1 align="center">
Fossil File Formats
</h1>

The global state of a fossil repository is kept simple so that it can
endure in useful form for decades or centuries.  A fossil repository is
intended to be readable, searchable, and extensible by people not yet
born.

The global state of a fossil repository is an unordered
︙
well as information such as parent check-ins, the username of the
programmer who created the check-in, the date and time when the check-in
was created, and any check-in comments associated with the check-in.

Allowed cards in the manifest are as follows:

<blockquote>
<b>B</b> <i>baseline-manifest</i><br>
<b>C</b> <i>checkin-comment</i><br>
<b>D</b> <i>time-and-date-stamp</i><br>
<b>F</b> <i>filename</i> ?<i>hash</i>? ?<i>permissions</i>? ?<i>old-name</i>?<br>
<b>N</b> <i>mimetype</i><br>
<b>P</b> <i>artifact-hash</i>+<br>
<b>Q</b> (<b>+</b>|<b>-</b>)<i>artifact-hash</i> ?<i>artifact-hash</i>?<br>
<b>R</b> <i>repository-checksum</i><br>
<b>T</b> (<b>+</b>|<b>-</b>|<b>*</b>)<i>tag-name</i> <b>*</b> ?<i>value</i>?<br>
<b>U</b> <i>user-login</i><br>
<b>Z</b> <i>manifest-checksum</i>
</blockquote>

A manifest may optionally have a single <b>B</b> card.  The <b>B</b> card
specifies another manifest that serves as the "baseline" for this
manifest.  A manifest that has a <b>B</b> card is called a delta-manifest
and a manifest that omits the <b>B</b> card is a baseline-manifest.  The
other manifest identified by the argument of the <b>B</b> card must be a
baseline-manifest.  A baseline-manifest records the complete contents of
a check-in.
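The one-card-per-line layout described above lends itself to a very simple reader. The following is a minimal illustrative sketch (not Fossil's real parser, which also validates ordering and syntax), and the sample manifest is fabricated for the example:

```python
def parse_cards(artifact: str):
    """Split an artifact's text into (card-letter, arguments) pairs,
    one card per line, in the order they appear."""
    cards = []
    for line in artifact.strip().splitlines():
        letter, _, args = line.partition(" ")
        cards.append((letter, args))
    return cards

# A fabricated, abbreviated manifest for illustration only.
sample = (
    "C Fix\\sa\\stypo\n"                      # check-in comment (escaped spaces)
    "D 2024-04-18T17:14:00\n"                 # ISO8601 date-time stamp
    "F src/main.c 1a2b3c\n"                   # a file (hash shortened/fabricated)
    "U drh\n"                                 # user login
    "Z 0123456789abcdef0123456789abcdef\n"    # fabricated checksum
)
```

Calling `parse_cards(sample)` yields the cards in order, with each card's letter separated from its argument string.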
︙
in the comment.

A manifest must have exactly one <b>D</b> card.  The sole argument to the
<b>D</b> card is a date-time stamp in the ISO8601 format.  The date and
time should be in coordinated universal time (UTC).  The format is one of:

<blockquote>
<i>YYYY</i><b>-</b><i>MM</i><b>-</b><i>DD</i><b>T</b><i>HH</i><b>:</b><i>MM</i><b>:</b><i>SS</i><br>
<i>YYYY</i><b>-</b><i>MM</i><b>-</b><i>DD</i><b>T</b><i>HH</i><b>:</b><i>MM</i><b>:</b><i>SS</i><b>.</b><i>SSS</i>
</blockquote>

A manifest has zero or more <b>F</b> cards.  Each <b>F</b> card identifies
a file that is part of the check-in.  There are one, two, three, or four
arguments.  The first argument is the pathname of the file in the
check-in relative to the root of the project file hierarchy.  No ".." or
"." directories are allowed within the filename.  Space characters are
escaped as in <b>C</b> card comment text.  Backslash characters and
︙
Clusters are used during repository synchronization to help reduce
network traffic.  As such, clusters are an optimization and may be
removed from a repository without loss or damage to the underlying
project code.

Allowed cards in the cluster are as follows:

<blockquote>
<b>M</b> <i>artifact-id</i><br />
<b>Z</b> <i>checksum</i>
</blockquote>

A cluster contains one or more <b>M</b> cards followed by a single
<b>Z</b> card.  Each <b>M</b> card has a single argument which is the
artifact ID of another artifact in the repository.  The <b>Z</b> card
works exactly like the <b>Z</b> card of a manifest.  The argument to the
<b>Z</b> card is the lower-case hexadecimal representation of the MD5
checksum of all prior cards in the cluster.  The <b>Z</b> card is
required.

An example cluster from Fossil can be seen
[/artifact/d03dbdd73a2a8 | here].

<h3 id="ctrl">2.3 Control Artifacts</h3>

Control artifacts are used to assign properties to other artifacts within
the repository.  Allowed cards in a control artifact are as follows:

<blockquote>
<b>D</b> <i>time-and-date-stamp</i><br />
<b>T</b> (<b>+</b>|<b>-</b>|<b>*</b>)<i>tag-name</i> <i>artifact-id</i> ?<i>value</i>?<br />
<b>U</b> <i>user-name</i><br />
<b>Z</b> <i>checksum</i><br />
</blockquote>

A control artifact must have one <b>D</b> card, one <b>U</b> card, one
<b>Z</b> card and one or more <b>T</b> cards.  No other cards or other
text is allowed in a control artifact.  Control artifacts might be PGP
clearsigned.
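The Z-card rule for clusters stated above (lower-case hex MD5 of all prior cards) can be sketched as follows. This is illustrative only, not Fossil's implementation; in particular, whether the trailing newline of the last prior card is included in the digest is an assumption made here, and the M-card argument is fabricated:

```python
import hashlib

def z_card(prior_cards: str) -> str:
    """Render a Z card for the given artifact text.

    prior_cards: every card that precedes the Z card, each line
    newline-terminated (assumption: the final newline is included).
    """
    digest = hashlib.md5(prior_cards.encode("utf-8")).hexdigest()
    return "Z " + digest   # hexdigest() is already lower-case hex
```

For example, `z_card("M 0123456789abcdef0123456789abcdef\n")` yields a `Z ` prefix followed by 32 lower-case hex digits, matching the card format above.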
The <b>D</b> card and the <b>Z</b> card of a control artifact are the same |
︙
<h3 id="wikichng">2.4 Wiki Pages</h3>

A wiki artifact defines a single version of a single wiki page.  Wiki
artifacts accept the following card types:

<blockquote>
<b>C</b> <i>change-comment</i><br>
<b>D</b> <i>time-and-date-stamp</i><br />
<b>L</b> <i>wiki-title</i><br />
<b>N</b> <i>mimetype</i><br />
<b>P</b> <i>parent-artifact-id</i>+<br />
<b>U</b> <i>user-name</i><br />
<b>W</b> <i>size</i> <b>\n</b> <i>text</i> <b>\n</b><br />
<b>Z</b> <i>checksum</i>
</blockquote>

The <b>D</b> card is the date and time when the wiki page was edited.
The <b>P</b> card specifies the parent wiki pages, if any.  The <b>L</b>
card gives the name of the wiki page.  The optional <b>N</b> card
specifies the mimetype of the wiki text.  If the <b>N</b> card is
omitted, the mimetype is assumed to be text/x-fossil-wiki.  The <b>U</b>
card specifies the login
︙
[/artifact?name=7b2f5fd0e0&txt=1 | here].

<h3 id="tktchng">2.5 Ticket Changes</h3>

A ticket-change artifact represents a change to a trouble ticket.  The
following cards are allowed on a ticket change artifact:

<blockquote>
<b>D</b> <i>time-and-date-stamp</i><br />
<b>J</b> ?<b>+</b>?<i>name</i> ?<i>value</i>?<br />
<b>K</b> <i>ticket-id</i><br />
<b>U</b> <i>user-name</i><br />
<b>Z</b> <i>checksum</i>
</blockquote>

The <b>D</b> card is the usual date and time stamp and represents the
point in time when the change was entered.  The <b>U</b> card is the
login of the programmer who entered this change.  The <b>Z</b> card is
the required checksum over the entire artifact.

Every ticket has a distinct ticket-id:
︙
An attachment artifact associates some other artifact that is the
attachment (the source artifact) with a ticket or wiki page or technical
note to which the attachment is connected (the target artifact).  The
following cards are allowed on an attachment artifact:

<blockquote>
<b>A</b> <i>filename target</i> ?<i>source</i>?<br />
<b>C</b> <i>comment</i><br />
<b>D</b> <i>time-and-date-stamp</i><br />
<b>N</b> <i>mimetype</i><br />
<b>U</b> <i>user-name</i><br />
<b>Z</b> <i>checksum</i>
</blockquote>

The <b>A</b> card specifies a filename for the attachment in its first
argument.  The second argument to the <b>A</b> card is the name of the
wiki page or ticket or technical note to which the attachment is
connected.  The third argument is either missing or else it is the
lower-case artifact ID of the attachment itself.  A missing third
argument means that the attachment should be deleted.
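The A-card argument rules above can be captured in a small helper. This is a sketch of the rule as stated (the function name and return shape are illustrative, not from Fossil's source; it assumes the arguments are space-separated, with escaped spaces inside the filename itself):

```python
def parse_a_card(card: str):
    """Interpret an A card per the rules above.

    Returns (filename, target, source), where source is None when the
    third argument is missing -- meaning the attachment is deleted.
    """
    parts = card.split()
    assert parts[0] == "A" and len(parts) in (3, 4), "malformed A card"
    filename, target = parts[1], parts[2]
    source = parts[3] if len(parts) == 4 else None
    return filename, target, source
```

So `parse_a_card("A notes.txt MyWikiPage")` (fabricated values) returns a `None` source, which per the text means "delete this attachment".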
︙
A technical note or "technote" artifact (formerly known as an "event"
artifact) associates a timeline comment and a page of text (similar to a
wiki page) with a point in time.  Technotes can be used to record project
milestones, release notes, blog entries, process checkpoints, or news
articles.  The following cards are allowed on a technote artifact:

<blockquote>
<b>C</b> <i>comment</i><br>
<b>D</b> <i>time-and-date-stamp</i><br />
<b>E</b> <i>technote-time</i> <i>technote-id</i><br />
<b>N</b> <i>mimetype</i><br />
<b>P</b> <i>parent-artifact-id</i>+<br />
<b>T</b> <b>+</b><i>tag-name</i> <b>*</b> ?<i>value</i>?<br />
<b>U</b> <i>user-name</i><br />
<b>W</b> <i>size</i> <b>\n</b> <i>text</i> <b>\n</b><br />
<b>Z</b> <i>checksum</i>
</blockquote>

The <b>C</b> card contains text that is displayed on the timeline for the
technote.  The <b>C</b> card is optional, but there can only be one.

A single <b>D</b> card is required to give the date and time when the
technote artifact was created.  This is different from the time at which
the technote appears on the timeline.
︙
<h3 id="forum">2.8 Forum Posts</h3>

Forum posts are intended as a mechanism for users and developers to
discuss a project.  Forum posts are like messages on a mailing list.
The following cards are allowed on a forum post artifact:

<blockquote>
<b>D</b> <i>time-and-date-stamp</i><br />
<b>G</b> <i>thread-root</i><br />
<b>H</b> <i>thread-title</i><br />
<b>I</b> <i>in-reply-to</i><br />
<b>N</b> <i>mimetype</i><br />
<b>P</b> <i>parent-artifact-id</i><br />
<b>U</b> <i>user-name</i><br />
<b>W</b> <i>size</i> <b>\n</b> <i>text</i> <b>\n</b><br />
<b>Z</b> <i>checksum</i>
</blockquote>

Every forum post must have either one <b>I</b> card and one <b>G</b> card
or one <b>H</b> card.  Forum posts are organized into topic threads.  The
initial post for a thread (the root post) has an <b>H</b> card giving the
title or subject for that thread.  The argument to the <b>H</b> card is a
string in the same format as a comment string in a <b>C</b> card.
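The thread-structure rule above (a root post carries an H card; a reply carries both an I card and a G card) can be expressed as a small check. This sketch reads the "either ... or" as exclusive, which is an assumption; the helper name is illustrative, not Fossil's:

```python
def is_valid_forum_post(card_letters: set) -> bool:
    """card_letters: the set of card letters present in a forum post.

    A root post has an H card and neither I nor G; a reply has both
    I and G cards and no H card.
    """
    is_root  = "H" in card_letters and not ({"I", "G"} & card_letters)
    is_reply = {"I", "G"} <= card_letters and "H" not in card_letters
    return is_root or is_reply
```

For instance, a post with cards `{D, H, U, W, Z}` is a valid thread root, while one with `{D, U, W, Z}` alone satisfies neither form.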
︙
The following table summarizes the various kinds of cards that appear on
Fossil artifacts.  A blank entry means that combination of card and
artifact is not legal.  A number or range of numbers indicates the number
of times a card may (or must) appear in the corresponding artifact type.
e.g. a value of 1 indicates a required unique card and 1+ indicates that
one or more such cards are required.

<table border=1 width="100%">
<tr>
  <th rowspan=2 valign=bottom>Card Format</th>
  <th colspan=8>Used By</th>
</tr>
<tr>
  <th>Manifest</th>
  <th>Cluster</th>
  <th>Control</th>
  <th>Wiki</th>
  <th>Ticket</th>
  <th>Attachment</th>
  <th>Technote</th>
︙
wrong order.  Both bugs have now been fixed.  However, to prevent
historical Technical Note artifacts that were inserted by users in good
faith from being rejected by newer Fossil builds, the card ordering
requirement is relaxed slightly.  The actual implementation is this:

<blockquote>
"All cards must be in strict lexicographic order, except that the N and
P cards of a Technical Note artifact are allowed to be interchanged."
</blockquote>

Future versions of Fossil might strengthen this slightly to only allow
the out of order N and P cards for Technical Notes entered before a
certain date.

<h3>4.2 R-Card Hash Calculation</h3>
︙
Changes to www/forum.wiki.
︙
The remainder of this section summarizes the differences you're expected
to see when taking option #2.

The first thing is that you'll need to add something like the following
to the Header part of the skin to create the navbar link:

<verbatim>
if {[anycap 23456] || [anoncap 2] || [anoncap 3]} {
  menulink /forum Forum
}
</verbatim>

These rules say that any logged-in user with any
[./caps/ref.html#2 | forum-related capability] or an anonymous user with
<b>RdForum</b> or <b>WrForum</b> capability will see the "Forum" navbar
link, which just takes you to <tt>/forum</tt>.  The exact code you need
here varies depending on which skin you're using.  Follow the style you
see for the other navbar links.

The new forum feature also brings many new CSS styles to the table.  If
you're using the stock skin or something sufficiently close, the changes
may work with your existing skin as-is.  Otherwise, you might need to
adjust some things, such as the background color used for the selected
forum post:

<verbatim>
div.forumSel {
  background-color: rgba(0, 0, 0, 0.05);
}
</verbatim>

That overrides the default — a hard-coded light cyan — with a 95%
transparent black overlay instead, which simply darkens your skin's
normal background color underneath the selected post.  That should work
with almost any background color except for very dark background colors.
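The "darkens your skin's background" claim is just standard source-over alpha compositing, which can be checked numerically. This is a purely illustrative sketch of how an rgba overlay combines with the color beneath it, not anything from Fossil's code:

```python
def composite(bg, overlay, alpha):
    """Source-over compositing of an overlay color onto a background.

    bg, overlay: (r, g, b) tuples of 0-255 ints; alpha: overlay opacity
    in 0..1.  Returns the effective on-screen color.
    """
    return tuple(round(o * alpha + b * (1 - alpha))
                 for b, o in zip(bg, overlay))
```

For example, `rgba(0, 0, 0, 0.05)` over a white `(255, 255, 255)` background yields roughly `(242, 242, 242)`: a slightly darkened white, exactly the effect described above.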
For dark skins, an inverse of the above trick will work better: <verbatim> div.forumSel { background-color: rgba(255, 255, 255, 0.05); } </verbatim> That overlays the background with 5% white to lighten it slightly. Another new forum-related CSS style you might want to reflect into your existing skin is: <verbatim> div.forumPosts a:visited { color: #6A7F94; } </verbatim> This changes the clicked-hyperlink color for the forum post links on the main <tt>/forum</tt> page only, which allows your browser's history mechanism to show which threads a user has read and which not. The link color will change back to the normal link color — indicating "unread" — when a reply is added to an existing thread because that changes where |
︙ | ︙ |
Changes to www/fossil-v-git.wiki.
︙ | ︙ | |||
32 33 34 35 36 37 38 | <h2>2.0 Differences Between Fossil And Git</h2> Differences between Fossil and Git are summarized by the following table, with further description in the text that follows. | | | < | | < | 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 | <h2>2.0 Differences Between Fossil And Git</h2> Differences between Fossil and Git are summarized by the following table, with further description in the text that follows. <blockquote><table border=1 cellpadding=5 align=center> <tr><th width="49%">GIT</th><th width="49%">FOSSIL</th><th width="2%">more</th></tr> <tr> <td>File versioning only</td> <td>VCS, tickets, wiki, docs, notes, forum, chat, UI, [https://en.wikipedia.org/wiki/Role-based_access_control|RBAC]</td> <td><a href="#features">2.1 ↓</a></td> </tr> <tr> <td>A federation of many small programs</td> <td>One self-contained, stand-alone executable</td> <td><a href="#selfcontained">2.2 ↓</a></td> </tr> |
︙ | ︙ | |||
97 98 99 100 101 102 103 | <td><a href="#testing">2.8 ↓</a></td> </tr> <tr> <td>SHA-1 or SHA-2</td> <td>SHA-1 and/or SHA-3, in the same repository</td> <td><a href="#hash">2.9 ↓</a></td> </tr> | | | 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 | <td><a href="#testing">2.8 ↓</a></td> </tr> <tr> <td>SHA-1 or SHA-2</td> <td>SHA-1 and/or SHA-3, in the same repository</td> <td><a href="#hash">2.9 ↓</a></td> </tr> </table></blockquote> <h3 id="features">2.1 Featureful</h3> Git provides file versioning services only, whereas Fossil adds an integrated [./wikitheory.wiki | wiki], [./bugtheory.wiki | ticketing & bug tracking], [./embeddeddoc.wiki | embedded documentation], |
︙ | ︙ | |||
797 798 799 800 801 802 803 | which every commit is tested first. It encourages thinking before acting. We believe this is an inherently good thing. Incidentally, this is a good example of Git's messy command design. These three commands: <pre> | | | | | | | | 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 | which every commit is tested first. It encourages thinking before acting. We believe this is an inherently good thing. Incidentally, this is a good example of Git's messy command design. These three commands: <pre> $ git merge HASH $ git cherry-pick HASH $ git revert HASH </pre> ...are all the same command in Fossil: <pre> $ fossil merge HASH $ fossil merge --cherrypick HASH $ fossil merge --backout HASH </pre> If you think about it, they're all the same function: apply work done on one branch to another. All that changes between these commands is how much work gets applied — just one check-in or a whole branch — and the merge direction. This is the sort of thing we mean when we point out that Fossil's command interface is simpler than Git's: there are fewer |
︙ | ︙ | |||
843 844 845 846 847 848 849 | Fossil delivered a new release allowing a clean migration to [https://en.wikipedia.org/wiki/SHA-3|256-bit SHA-3] with [./hashpolicy.wiki|full backwards compatibility] to old SHA-1 based repositories. In October 2019, after the last of the major binary package repos offering Fossil upgraded to Fossil 2.<i>x</i>, | | | | 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 | Fossil delivered a new release allowing a clean migration to [https://en.wikipedia.org/wiki/SHA-3|256-bit SHA-3] with [./hashpolicy.wiki|full backwards compatibility] to old SHA-1 based repositories. In October 2019, after the last of the major binary package repos offering Fossil upgraded to Fossil 2.<i>x</i>, we switched the default hash mode so that from Fossil 2.10 forward, the conversion to SHA-3 is fully automatic. This not only solves the SHAttered problem, it should prevent a reoccurrence of similar problems for the foreseeable future. Meanwhile, the Git community took until August 2018 to publish [https://git-scm.com/docs/hash-function-transition/|their first plan] for solving the same problem by moving to SHA-256, a variant of the |
︙ | ︙ | |||
949 950 951 952 953 954 955 | <li><p>Both Fossil and Git support [https://en.wikipedia.org/wiki/Patch_(Unix)|<tt>patch(1)</tt> files] — unified diff formatted output — for accepting drive-by contributions, but it's a lossy contribution path for both systems. Unlike Git PRs and Fossil bundles, patch files collapse multiple checkins together, they don't include check-in comments, and they cannot encode changes made above the individual file content layer: you lose branching decisions, | | | | 947 948 949 950 951 952 953 954 955 956 957 958 959 960 | <li><p>Both Fossil and Git support [https://en.wikipedia.org/wiki/Patch_(Unix)|<tt>patch(1)</tt> files] — unified diff formatted output — for accepting drive-by contributions, but it's a lossy contribution path for both systems. Unlike Git PRs and Fossil bundles, patch files collapse multiple checkins together, they don't include check-in comments, and they cannot encode changes made above the individual file content layer: you lose branching decisions, tag changes, file renames, and more when using patch files. Fossil 2.16 adds [./patchcmd.md | a <tt>fossil patch</tt> command] that also solves these problems, but that is because it works like a Fossil bundle, only for uncommitted changes; it doesn't use Larry Wall's <tt>patch</tt> tool to apply unified diff output to the receiving Fossil checkout.</p></li> </ol></i></small>
Changes to www/fossil_prompt.wiki.
1 2 3 4 5 6 7 8 9 10 11 12 13 | <title>Fossilized Bash Prompt</title> Dan Kennedy has contributed a [./fossil_prompt.sh?mimetype=text/plain | bash script] that manipulates the bash prompt to show the status of the Fossil repository that the user is currently visiting. The prompt shows the branch, version, and time stamp for the current checkout, and the prompt changes colors from blue to red when there are uncommitted changes. To try out this script, simply download it from the link above, then type: | > | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 | <title>Fossilized Bash Prompt</title> <h1>2013-02-21</h1> Dan Kennedy has contributed a [./fossil_prompt.sh?mimetype=text/plain | bash script] that manipulates the bash prompt to show the status of the Fossil repository that the user is currently visiting. The prompt shows the branch, version, and time stamp for the current checkout, and the prompt changes colors from blue to red when there are uncommitted changes. To try out this script, simply download it from the link above, then type: <blockquote><pre> . fossil_prompt.sh </pre></blockquote> For a permanent installation, you can graft the code into your <tt>.bashrc</tt> file in your home directory. The code is very simple (only 32 non-comment lines, as of this writing) and hence easy to customize. |
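The core idea behind the script can be sketched in a few lines of shell. This is our own minimal sketch, not the contributed script itself; it assumes a typical `tags:` line in `fossil status` output, and the helper name `fossil_branch_from_status` is ours:

```shell
# Minimal sketch (ours, not Dan Kennedy's script): pull the first tag
# -- normally the branch name -- out of "fossil status" output so it
# can be embedded in PS1.  Assumes the usual "tags: trunk, ..." line.
fossil_branch_from_status() {
  awk '/^tags:/ { gsub(/,/, "", $2); print $2; exit }'
}

# In ~/.bashrc one might then use something like:
#   PS1='[$(fossil status 2>/dev/null | fossil_branch_from_status)] \$ '
```

The real script does considerably more, including the blue-to-red color change for uncommitted edits.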
Changes to www/gitusers.md.
︙ | ︙ | |||
71 72 73 74 75 76 77 | advocate a switch-in-place working mode instead, so that is how most users end up working with Git. Contrast [Fossil’s check-out workflow document][ckwf] to see the practical differences. There is one Git-specific detail we wish to add beyond what that document already covers. This command: | | | | 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 | advocate a switch-in-place working mode instead, so that is how most users end up working with Git. Contrast [Fossil’s check-out workflow document][ckwf] to see the practical differences. There is one Git-specific detail we wish to add beyond what that document already covers. This command: git checkout some-branch …is best given as: fossil update some-branch …in Fossil. There is a [`fossil checkout`][co] command, but it has [several differences](./co-vs-up.md) that make it less broadly useful than [`fossil update`][up] in everyday operation, so we recommend that Git users moving to Fossil develop a habit of typing `fossil up` rather than `fossil checkout`. That said, one of those differences does match up with Git users’ expectations: `fossil checkout` doesn’t pull changes |
︙ | ︙ | |||
107 108 109 110 111 112 113 | choice also tends to make Fossil feel comfortable to Subversion expatriates.) The `fossil pull` command is simply the reverse of `fossil push`, so that `fossil sync` [is functionally equivalent to](./sync.wiki#sync): | | | 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 | choice also tends to make Fossil feel comfortable to Subversion expatriates.) The `fossil pull` command is simply the reverse of `fossil push`, so that `fossil sync` [is functionally equivalent to](./sync.wiki#sync): fossil push ; fossil pull There is no implicit “and update the local working directory” step in Fossil’s push, pull, or sync commands, as there is with `git pull`. Someone coming from the Git perspective may perceive that `fossil up` has two purposes: |
︙ | ︙ | |||
178 179 180 181 182 183 184 | There are at least three different ways to get [Fossil-style multiple check-out directories][mcw] with Git. The old way is to simply symlink the `.git` directory between working trees: | | | | | | | | | | | | | | | | | | | | | 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 | There are at least three different ways to get [Fossil-style multiple check-out directories][mcw] with Git. The old way is to simply symlink the `.git` directory between working trees: mkdir ../foo-branch ln -s ../actual-clone-dir/.git . git checkout foo-branch The symlink trick has a number of problems, the largest being that symlinks weren’t available on Windows until Vista, and until the Windows 10 Creators Update was released in spring of 2017, you had to be an Administrator to use the feature besides. ([Source][wsyml]) Git 2.5 solved this problem back when Windows XP was Microsoft’s current offering by adding the `git-worktree` command: git worktree add ../foo-branch foo-branch cd ../foo-branch That is approximately equivalent to this in Fossil: mkdir ../foo-branch cd ../foo-branch fossil open /path/to/repo.fossil foo-branch The Fossil alternative is wordier, but since this tends to be one-time setup, not something you do everyday, the overhead is insignificant. This author keeps a “scratch” check-out for cases where it’s inappropriate to reuse the “trunk” check-out, isolating all of my expedient switch-in-place actions to that one working directory. Since the other peer check-outs track long-lived branches, and that set rarely changes once a development machine is set up, I rarely pay the cost of these wordier commands. 
That then leads us to the closest equivalent in Git to [closing a Fossil check-out](#close): git worktree remove . Note, however, that unlike `fossil close`, once the Git command determines that there are no uncommitted changes, it blows away all of the checked-out files! Fossil’s alternative is shorter, easier to remember, and safer. There’s another way to get Fossil-like separate worktrees in Git: git clone --separate-git-dir repo.git https://example.com/repo This allows you to have your Git repository directory entirely separate from your working tree, with `.git` in the check-out directory being a file that points to `../repo.git`, in this example. [mcw]: ./ckout-workflows.md#mcw [wsyml]: https://blogs.windows.com/windowsdeveloper/2016/12/02/symlinks-windows-10/ #### <a id="iip"></a> Init in Place To illustrate the differences that Fossil’s separation of repository from working directory creates in practice, consider this common Git “init in place” method for creating a new repository from an existing tree of files, perhaps because you are placing that project under version control for the first time: cd long-established-project git init git add * git commit -m "Initial commit of project." The closest equivalent in Fossil is: cd long-established-project fossil init .fsl fossil open --force .fsl fossil add * fossil ci -m "Initial commit of project." Note that unlike in Git, you can abbreviate the “`commit`” command in Fossil as “`ci`” for compatibility with CVS, Subversion, etc. This creates a `.fsl` repo DB at the root of the project check-out to emulate the `.git` repo dir. We have to use the `--force` flag on opening the new repo because Fossil expects you to open a repo into an |
︙ | ︙ | |||
314 315 316 317 318 319 320 | #### <a id="emu-log"></a> Emulating `git log` If you truly need a backwards-in-time-only view of history in Fossil to emulate `git log`, this is as close as you can currently come: | | | | | | | | | | | | | | | | | | | 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 | #### <a id="emu-log"></a> Emulating `git log` If you truly need a backwards-in-time-only view of history in Fossil to emulate `git log`, this is as close as you can currently come: fossil timeline parents current Again, though, this isn’t restricted to a single branch, as `git log` is. Another useful rough equivalent is: git log --raw fossil time -v This shows what changed in each version, though Fossil’s view is more a summary than a list of raw changes. To dig deeper into single commits, you can use Fossil’s [`info` command][infoc] or its [`/info` view][infow]. Inversely, you may more exactly emulate the default `fossil timeline` output with `git log --name-status`. #### <a id="whatchanged"></a> What Changed? A related — though deprecated — command is `git whatchanged`, which gives results similar to `git log --raw`, so we cover it here. Though there is no `fossil whatchanged` command, the same sort of information is available. 
For example, to pull the current changes from the remote repository and then inspect them before updating the local working directory, you might say this in Git: git fetch git whatchanged ..@{u} …which you can approximate in Fossil as: fossil pull fossil up -n fossil diff --from tip To invert the `diff` to show a more natural patch, the command needs to be a bit more complicated, since you can’t currently give `--to` without `--from`. fossil diff --from current --to tip Rather than use the “dry run” form of [the `update` command][up], you can say: fossil timeline after current …or if you want to restrict the output to the current branch: fossil timeline descendants current #### <a id="ckin-names"></a> Symbolic Check-In Names Note the use of [human-readable symbolic version names][scin] in Fossil rather than [Git’s cryptic notations][gcn]. For a more dramatic example of this, let us ask Git, “What changed since the beginning of last month?” being October 2020 as I write this: git log master@{2020-10-01}..HEAD That’s rather obscure! Fossil answers the same question with a simpler command: fossil timeline after 2020-10-01 You may need to add `-n 0` to bypass the default output limit of `fossil timeline`, 20 entries. Without that, this command reads almost like English. Some Git users like to write commands like the above so: git log @{2020-10-01}..@ Is that better? “@” now means two different things: an at-time reference and a shortcut for `HEAD`! If you are one of those that like short commands, Fossil’s method is less cryptic: it lets you shorten words in most cases up to the point that they become ambiguous. For example, you may abbreviate the last `fossil` command in the prior section: fossil tim d c …beyond which the `timeline` command becomes ambiguous with `ticket`. 
Some Fossil users employ shell aliases, symlinks, or scripts to shorten the command still further: alias f=fossil f tim d c Granted, that’s rather obscure, but you you can also choose something intermediate like “`f time desc curr`”, which is reasonably clear. [35pct]: https://www.sqlite.org/fasterthanfs.html [btree]: https://sqlite.org/btreemodule.html [gcn]: https://git-scm.com/docs/gitrevisions |
︙ | ︙ | |||
466 467 468 469 470 471 472 | Fossil omits the "Git index" or "staging area" concept. When you type "`fossil commit`" _all_ changes in your check-out are committed, automatically. There is no need for the "-a" option as with Git. If you only want to commit _some_ of the changes, list the names of the files or directories you want to commit as arguments, like this: | | | | | 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 | Fossil omits the "Git index" or "staging area" concept. When you type "`fossil commit`" _all_ changes in your check-out are committed, automatically. There is no need for the "-a" option as with Git. If you only want to commit _some_ of the changes, list the names of the files or directories you want to commit as arguments, like this: fossil commit src/feature.c doc/feature.md examples/feature Note that the last element is a directory name, meaning “any changed file under the `examples/feature` directory.” Although there are currently no <a id="csplit"></a>[commit splitting][gcspl] features in Fossil like `git add -p`, `git commit -p`, or `git rebase -i`, you can get the same effect by converting an uncommitted change set to a patch and then running it through [Patchouli]. Rather than use `fossil diff -i` to produce such a patch, a safer and more idiomatic method would be: fossil stash save -m 'my big ball-o-hackage' fossil stash diff > my-changes.patch That stores your changes in the stash, then lets you operate on a copy of that patch. Each time you re-run the second command, it will take the current state of the working directory into account to produce a potentially different patch, likely smaller because it leaves out patch hunks already applied. |
︙ | ︙ | |||
524 525 526 527 528 529 530 | <a id="bneed"></a> ## Create Branches at Point of Need, Rather Than Ahead of Need Fossil prefers that you create new branches as part of the first commit on that branch: | | | | | | | | | 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 | <a id="bneed"></a> ## Create Branches at Point of Need, Rather Than Ahead of Need Fossil prefers that you create new branches as part of the first commit on that branch: fossil commit --branch my-branch If that commit is successful, your local check-out directory is then switched to the tip of that branch, so subsequent commits don’t need the “`--branch`” option. You simply say `fossil commit` again to continue adding commits to the tip of that branch. To switch back to the parent branch, say something like: fossil update trunk (This is approximately equivalent to `git checkout master`.) Fossil does also support the Git style, creating the branch ahead of need: fossil branch new my-branch fossil up my-branch ...work on first commit... fossil commit This is more verbose, giving the same overall effect though the initial actions are inverted: create a new branch for the first commit, switch the check-out directory to that branch, and make that first commit. As above, subsequent commits are descendants of that initial branch commit. We think you’ll agree that creating a branch as part of the initial commit is simpler. Fossil also allows you to move a check-in to a different branch *after* you commit it, using the "`fossil amend`" command. For example: fossil amend current --branch my-branch This works by inserting a tag into the repository that causes the web UI to relabel commits from that point forward with the new name. 
Like Git, Fossil’s fundamental data structure is the interlinked DAG of commit hashes; branch names are supplemental data for making it easier for the humans to understand this DAG, so this command does not change the core history of the project, only annotate it for better display to the |
︙ | ︙ | |||
589 590 591 592 593 594 595 | [Fossil is an AP-mode system][capt], which in this case means it works *very hard* to ensure that all repos are as close to identical as it can make them under this eventually-consistent design philosophy. Branch *names* sync automatically in Fossil, not just the content of those branches. That means this common Git command: | | | | | 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 | [Fossil is an AP-mode system][capt], which in this case means it works *very hard* to ensure that all repos are as close to identical as it can make them under this eventually-consistent design philosophy. Branch *names* sync automatically in Fossil, not just the content of those branches. That means this common Git command: git push origin master …is simply this in Fossil: fossil push Fossil doesn’t need to be told what to push or where to push it: it just keeps using the same remote server URL you gave it last until you [tell it to do something different][rem]. It pushes all branches, not just one named local branch. [capt]: ./cap-theorem.md [rem]: /help?cmd=remote <a id="autosync"></a> ## Autosync Fossil’s [autosync][wflow] feature, normally enabled, has no equivalent in Git. If you want Fossil to behave like Git, you can turn it off: fossil set autosync 0 Let’s say that you have a typical server-and-workstations model with two working clones on different machines, that you have disabled autosync, and that this common sequence then occurs: 1. Alice commits to her local clone and *separately* pushes the change up to Condor — their central server — in typical Git fashion. |
︙ | ︙ | |||
690 691 692 693 694 695 696 | We make no guarantee that there will always be a line beginning with “`repo`” and that it will be separated from the repository’s file name by a colon. The simplified example above is also liable to become confused by whitespace in file names.) ``` | | | | | | | | | | | 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 | We make no guarantee that there will always be a line beginning with “`repo`” and that it will be separated from the repository’s file name by a colon. The simplified example above is also liable to become confused by whitespace in file names.) ``` $ repo=$(fossil status | grep ^repo | cut -f2 -d:) $ url=$(fossil remote) $ fossil close # Stop here and think if it warns you! $ mv $repo ${repo}.old $ fossil clone $url $repo $ fossil open --force $repo ``` What, then, should you as a Git transplant do instead when you find yourself reaching for “`git reset`”? Since the correct answer to that depends on why you think it’s a good solution to your immediate problem, we’ll take our motivating scenario from the problem setup above, where we discussed Fossil’s [autosync] feature. Let us further say Alice’s pique results from a belief that Bob’s commit is objectively wrong-headed and should be expunged henceforth. Since Fossil goes out of its way to ensure that [commits are durable][wdm], it should be no further surprise that there is no easier method to reset Bob’s clone in favor of Alice’s than the above sequence in Fossil’s command set. Except in extreme situations, we believe that sort of thing is unnecessary. Instead, Bob can say something like this: ``` fossil amend --branch MISTAKE --hide --close -m "mea culpa" tip fossil up trunk fossil push ``` Unlike in Git, the “`amend`” command doesn’t modify prior committed artifacts. 
Bob’s first command doesn’t delete anything, merely tells Fossil to hide his mistake from timeline views by inserting a few new records into the local repository to change how the client interprets the data it finds there henceforth.(^One to change the tag marking this |
︙ | ︙ | |||
748 749 750 751 752 753 754 | to return her check-out’s parent commit to the previous version lest her next attempted commit land atop this mistake branch. The fact that Bob marked the branch as closed will prevent that from going thru, cluing Alice into what she needs to do to remedy the situation, but that merely shows why it’s a better workflow if Alice makes the amendment herself: ``` | | | | | | 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 | to return her check-out’s parent commit to the previous version lest her next attempted commit land atop this mistake branch. The fact that Bob marked the branch as closed will prevent that from going thru, cluing Alice into what she needs to do to remedy the situation, but that merely shows why it’s a better workflow if Alice makes the amendment herself: ``` fossil amend --branch MISTAKE --hide --close \ -m "shunt Bob’s erroneous commit off" tip fossil up trunk fossil push ``` Then she can fire off an email listing Bob’s assorted failings and go about her work. This asynchronous workflow solves the problem without requiring explicit coordination with Bob. When he gets his email, he can then say “`fossil up trunk`” himself, which by default will trigger an autosync, pulling down Alice’s amendments and getting him back onto her |
︙ | ︙ | |||
779 780 781 782 783 784 785 | is "`trunk`". The "`trunk`" branch in Fossil corresponds to the "`master`" branch in stock Git or to [the “`main`” branch in GitHub][mbgh]. Because the `fossil git export` command has to work with both stock Git and with GitHub, Fossil uses Git’s traditional default rather than GitHub’s new default: your Fossil repo’s “trunk” branch becomes “master” when [mirroring to GitHub][mirgh] unless you give the `--mainbranch` | | | 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 | is "`trunk`". The "`trunk`" branch in Fossil corresponds to the "`master`" branch in stock Git or to [the “`main`” branch in GitHub][mbgh]. Because the `fossil git export` command has to work with both stock Git and with GitHub, Fossil uses Git’s traditional default rather than GitHub’s new default: your Fossil repo’s “trunk” branch becomes “master” when [mirroring to GitHub][mirgh] unless you give the `--mainbranch` option added in Fossil 2.14. We do not know what happens on subsequent exports if you later rename this branch on the GitHub side. [mbgh]: https://github.com/github/renaming [mirgh]: ./mirrortogithub.md |
︙ | ︙ | |||
832 833 834 835 836 837 838 | format][udiff] output, suitable for producing a [patch file][pfile]. Nevertheless, there are multiple ways to get colorized diff output from Fossil: * The most direct method is to delegate diff behavior back to Git: | | | | | 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 | format][udiff] output, suitable for producing a [patch file][pfile]. Nevertheless, there are multiple ways to get colorized diff output from Fossil: * The most direct method is to delegate diff behavior back to Git: fossil set --global diff-command 'git diff --no-index' The flag permits it to diff files that aren’t inside a Git repository. * Another method is to install [`colordiff`][cdiff] — included in [many package systems][cdpkg] — then say: fossil set --global diff-command 'colordiff -wu' Because this is unconditional, unlike `git diff --color=auto`, you will then have to remember to add the `-i` option to `fossil diff` commands when you want color disabled, such as when producing `patch(1)` files or piping diff output to another command that doesn’t understand ANSI escape sequences. There’s an example of this [below](#dstat). * Use the Fossil web UI to diff existing commits. * To diff the current working directory contents against some parent instead, Fossil 2.17 expanded the diff command so it can produce colorized HTML output and open it in the OS’s default web browser. For example, `fossil diff -by` will show side-by-side diffs. * Use the older `fossil diff --tk` option to do much the same using Tcl/Tk instead of a browser. Viewed this way, Fossil doesn’t lack colorized diffs, it simply has |
︙ | ︙ | |||
874 875 876 877 878 879 880 | While there is no direct equivalent to Git’s “`show`” command, similar functionality is present in Fossil under other commands: #### <a id="patch"></a> Show a Patch for a Commit | | | | | | | | | | | 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 | While there is no direct equivalent to Git’s “`show`” command, similar functionality is present in Fossil under other commands: #### <a id="patch"></a> Show a Patch for a Commit git show -p COMMIT_ID …gives much the same output as fossil diff --checkin COMMIT_ID …only without the patch email header. Git comes out of the [LKML] world, where emailing a patch is a normal thing to do. Fossil is [designed for cohesive teams][devorg] where drive-by patches are rarer. You can use any of [Fossil’s special check-in names][scin] in place of the `COMMIT_ID` in this and later examples. Fossil docs usually say “`VERSION`” or “`NAME`” where this is allowed, since the version string or name might not refer to a commit ID, but instead to a forum post, a wiki document, etc. For instance, the following command answers the question “What did I just commit?” fossil diff --checkin tip …or equivalently using a different symbolic commit name: fossil diff --from prev [devorg]: ./fossil-v-git.wiki#devorg [LKML]: https://lkml.org/ #### <a id="cmsg"></a> Show a Specific Commit Message git show -s COMMIT_ID …is fossil time -n 1 COMMIT_ID …or with a shorter, more obvious command, though with more verbose output: fossil info COMMIT_ID The `fossil info` command isn’t otherwise a good equivalent to `git show`; it just overlaps its functionality in some areas. Much of what’s missing is present in the corresponding [`/info` web view][infow], though. 
#### <a id="dstat"></a> Diff Statistics Fossil’s closest internal equivalent to commands like `git show --stat` is: fossil diff -i --from 2020-04-01 --numstat The `--numstat` output is a bit cryptic, so we recommend delegating this task to [the widely-available `diffstat` tool][dst], which gives a histogram in its default output mode rather than bare integers: fossil diff -i -v --from 2020-04-01 | diffstat We gave the `-i` flag in both cases to force Fossil to use its internal diff implementation, bypassing [your local `diff-command` setting][dcset]. The `--numstat` option has no effect when you have an external diff command set, and some diff command alternatives like [`colordiff`][cdiff] (covered [above](#cdiff)) produce output that confuses `diffstat`. |
︙ | ︙ | |||
997 998 999 1000 1001 1002 1003 | The "[`fossil mv`][mv]" and "[`fossil rm`][rm]" commands work like they do in CVS in that they schedule the changes for the next commit by default: they do not actually rename or delete the files in your check-out. If you don’t like that default, you can change it globally: | | | | 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 | The "[`fossil mv`][mv]" and "[`fossil rm`][rm]" commands work like they do in CVS in that they schedule the changes for the next commit by default: they do not actually rename or delete the files in your check-out. If you don’t like that default, you can change it globally: fossil setting --global mv-rm-files 1 Now these commands behave like in Git in any Fossil repository where this setting hasn’t been overridden locally. If you want to keep Fossil’s soft `mv/rm` behavior most of the time, you can cast it away on a per-command basis: fossil mv --hard old-name new-name [mv]: /help?cmd=mv [rm]: /help?cmd=rm ---- |
︙ | ︙ | |||
1030 1031 1032 1033 1034 1035 1036 | history to find a “good” version to anchor the start point of a [`fossil bisect`][fbis] operation. My search engine’s first result for “git checkout by date” is [this highly-upvoted accepted Stack Overflow answer][gcod]. The first command it gives is based on Git’s [`rev-parse` feature][grp]: | | | 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 | history to find a “good” version to anchor the start point of a [`fossil bisect`][fbis] operation. My search engine’s first result for “git checkout by date” is [this highly-upvoted accepted Stack Overflow answer][gcod]. The first command it gives is based on Git’s [`rev-parse` feature][grp]: git checkout master@{2020-03-17} There are a number of weaknesses in this command. From least to most critical: 1. It’s a bit cryptic. Leave off the refname or punctuation, and it means something else. You cannot simplify the cryptic incantation in the typical use case. |
︙ | ︙ | |||
1070 1071 1072 1073 1074 1075 1076 | Consequently, we cannot recommend this command at all. It’s unreliable even in the best case. That same Stack Overflow answer therefore goes on to recommend an entirely different command: | | | | | | | | 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 | Consequently, we cannot recommend this command at all. It’s unreliable even in the best case. That same Stack Overflow answer therefore goes on to recommend an entirely different command:

    git checkout $(git rev-list -n 1 --first-parent --before="2020-03-17" master)

We believe you get such answers to Git help requests in part because of its lack of an always-up-to-date [index into its log](#log) and in part because of its “small tools loosely joined” design philosophy. This sort of command is therefore composed piece by piece:

<p style="text-align:center">◆ ◆ ◆</p>

“Oh, I know, I’ll search the rev-list, which outputs commit IDs by parsing the log backwards from `HEAD`! Easy!”

    git rev-list --before=2020-03-17

“Blast! Forgot the commit ID!”

    git rev-list --before=2020-03-17 master

“Double blast! It just spammed my terminal with revision IDs! I need to limit it to the single closest match:”

    git rev-list -n 1 --before=2020-03-17 master

“Okay, it gives me a single revision ID now, but is it what I’m after? Let’s take a look…”

    git show $(git rev-list -n 1 --before=2020-03-17 master)

“Oops, that’s giving me a merge commit, not what I want. Off to search the web… Okay, it says I need to give either the `--first-parent` or `--no-merges` flag to show only regular commits, not merge-commits. Let’s try the first one:”

    git show $(git rev-list -n 1 --first-parent --before=2020-03-17 master)

“Better.
Let’s check it out:” git checkout $(git rev-list -n 1 --first-parent --before=2020-03-17 master) “Success, I guess?” <p style="text-align:center">◆ ◆ ◆</p> This vignette is meant to explain some of Git’s popularity: it rewards the sort of people who enjoy puzzles, many of whom are software |
︙ | ︙ | |||
1130 1131 1132 1133 1134 1135 1136 | second `git show` command above on [Git’s own repository][gitgh], your results may vary because there were four non-merge commits to Git on the 17th of March, 2020. You may be asking with an exasperated huff, “What is your *point*, man?” The point is that the equivalent in Fossil is simply: | | | 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 | second `git show` command above on [Git’s own repository][gitgh], your results may vary because there were four non-merge commits to Git on the 17th of March, 2020. You may be asking with an exasperated huff, “What is your *point*, man?” The point is that the equivalent in Fossil is simply: fossil up 2020-03-17 …which will *always* give the commit closest to midnight UTC on the 17th of March, 2020, no matter whether you do it on a fresh clone or a stale one. The answer won’t shift about from one clone to the next or from one local time of day to the next. We owe this reliability and stability to three Fossil design choices: |
︙ | ︙ | |||
1179 1180 1181 1182 1183 1184 1185 | and your family’s home NAS. #### Git Method We first need to clone the work repo down to our laptop, so we can work on it at home: | | | | | | | | | | | | | | | | | | | | | | | | 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 1303 1304 1305 1306 1307 1308 1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 1325 1326 1327 1328 1329 1330 1331 | and your family’s home NAS. #### Git Method We first need to clone the work repo down to our laptop, so we can work on it at home: git clone https://dev-server.example.com/repo cd repo git remote rename origin work The last command is optional, strictly speaking. We could continue to use Git’s default name for the work repo’s origin — sensibly enough called “`origin`” — but it makes later commands harder to understand, so we rename it here. This will also make the parallel with Fossil easier to draw. The first time we go home after this, we have to reverse-clone the work repo up to the NAS: ssh my-nas.local 'git init --bare /SHARES/dayjob/repo.git' git push --all ssh://my-nas.local//SHARES/dayjob/repo.git Realize that this is carefully optimized down to these two long commands. In practice, we’d expect a user typing these commands by hand from memory to need to give four or more commands here instead. 
Packing the “`git init`” call into the “`ssh`” call is something more often done in scripts and documentation examples than done interactively, which then necessitates a third command before the push, “`exit`”. There’s also a good chance that you’ll forget the need for the `--bare` option here to avoid a fatal complaint from Git that the laptop can’t push into a non-empty repo. If you fall into this trap, among the many that Git lays for newbies, you have to nuke the incorrectly initted repo, search the web or Git man pages to find out about `--bare`, and try again.

Having navigated that little minefield, we can tell Git that there is a second origin, a “home” repo in addition to the named “work” repo we set up earlier:

    git remote add home ssh://my-nas.local//SHARES/dayjob/repo.git
    git config branch.master.remote home

We don’t have to push or pull because the remote repo is a complete clone of the repo on the laptop at this point, so we can just get to work now, committing along the way to get our work safely off-machine and onto our home NAS, like so:

    git add
    git commit
    git push

We didn’t need to give a remote name on the push because we told it the new upstream is the home NAS earlier.

Now Friday comes along, and one of your office-mates needs a feature you’re working on. You agree to come into the office later that afternoon to sync up via the dev server:

    git push work master   # send your changes from home up
    git pull work master   # get your coworkers’ changes

Alternately, we could add “`--set-upstream/-u work`” to the first command if we were coming into work long enough to do several Git-based things, not just pop in and sync. That would allow the second to be just “`git pull`”, but the cost is that when returning home, you’d have to manually reset the upstream again.
This example also shows a consequence of the fact that [Git doesn’t sync branch names](#syncall): you have to keep repeating yourself like an obsequious supplicant: “Master, master.” Didn’t we invent computers to serve humans, rather than the other way around?

#### Fossil Method

Now we’re going to do the same thing using Fossil, with the commands arranged in blocks corresponding to those above for comparison.

We start the same way, cloning the work repo down to the laptop:

    fossil clone https://dev-server.example.com/repo
    cd repo
    fossil remote add work https://dev-server.example.com/repo

We’ve chosen the new “`fossil clone URI`” syntax added in Fossil 2.14 rather than separate `clone` and `open` commands to make the parallel with Git clearer. [See above](#mwd) for more on that topic.

Our [`remote` command][rem] is longer than the Git equivalent because Fossil currently has no short command to rename an existing remote. Worse, unlike with Git, we can’t just keep using the default remote name because Fossil uses that slot in its configuration database to store the *current* remote name, so on switching from work to home, the home URL will overwrite the work URL if we don’t give it an explicit name first.

Although the Fossil commands are longer, so far, keep it in perspective: they’re one-time setup costs, easily amortized to insignificance by the shorter day-to-day commands below.

On first beginning to work from home, we reverse-clone the Fossil repo up to the NAS:

    rsync repo.fossil my-nas.local:/SHARES/dayjob/

Now we’re beginning to see the advantage of Fossil’s simpler model, relative to the tricky “`git init && git push`” sequence above. Fossil’s alternative is almost impossible to get wrong: copy this to that. *Done.*

We’re relying on the `rsync` feature that creates up to one level of missing directory (here, `dayjob/`) on the remote. If you know in advance that the remote directory already exists, you could use a slightly shorter `scp` command instead.
Even with the extra 2 characters in the `rsync` form, it’s much shorter because a Fossil repository is a single SQLite database file, not a tree containing a pile of assorted files. Because of this, it works reliably without any of [the caveats inherent in using `rsync` to clone a Git repo][grsync]. Now we set up the second remote, which is again simpler in the Fossil case: fossil remote add home ssh://my-nas.local//SHARES/dayjob/repo.fossil fossil remote home The first command is nearly identical to the Git version, but the second is considerably simpler. And to be fair, you won’t find the “`git config`” command above in all Git tutorials. The more common alternative we found with web searches is even longer: “`git push --set-upstream home master`”. Where Fossil really wins is in the next step, making the initial commit from home: fossil ci It’s one short command for Fossil instead of three for Git — or two if you abbreviate it as “`git commit -a && git push`” — because of Fossil’s [autosync] feature and deliberate omission of a [staging feature](#staging). The “Friday afternoon sync-up” case is simpler, too: fossil remote work fossil sync Back at home, it’s simpler still: we may be able to do away with the second command, saying just “`fossil remote home`” because the sync will happen as part of the next commit, thanks once again to Fossil’s autosync feature. If the working branch now has commits from other developers after syncing with the central repository, though, you’ll want to say “`fossil up`” to avoid creating an inadvertent fork in the branch. |
︙ | ︙ |
Changes to www/globs.md.
︙ | ︙ | |||
40 41 42 43 44 45 46 | The parser allows whitespace and commas in a pattern by quoting _the entire pattern_ with either single or double quotation marks. Internal quotation marks are treated literally. Moreover, a pattern that begins with a quote mark ends when the first instance of the same mark occurs, _not_ at a whitespace or comma. Thus, this: | | | 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 | The parser allows whitespace and commas in a pattern by quoting _the entire pattern_ with either single or double quotation marks. Internal quotation marks are treated literally. Moreover, a pattern that begins with a quote mark ends when the first instance of the same mark occurs, _not_ at a whitespace or comma. Thus, this: "foo bar"qux …constitutes _two_ patterns rather than one with an embedded space, in contravention of normal shell quoting rules. A list matches a file when any pattern in that list matches. A pattern must consume and |
︙ | ︙ |
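To make the quoting rule above concrete, here is a small Python model of the tokenization just described. This is *not* Fossil’s actual parser, only a sketch of the stated rule: whitespace and commas separate patterns, a pattern that *begins* with a quote mark runs to the next occurrence of that same mark, and quote marks inside an unquoted pattern are literal.

```python
# A model of the glob-list tokenization described above; this is NOT
# Fossil's implementation, just an illustration of the stated rule.

def split_glob_list(s):
    patterns, i, n = [], 0, len(s)
    while i < n:
        if s[i] in " \t\r\n,":            # separators between patterns
            i += 1
        elif s[i] in "'\"":               # pattern BEGINS with a quote:
            end = s.find(s[i], i + 1)     # it ends at the same mark,
            end = n if end < 0 else end   # not at whitespace or a comma
            patterns.append(s[i + 1:end])
            i = end + 1
        else:                             # unquoted: internal quotes are
            j = i                         # literal; ends at a separator
            while j < n and s[j] not in " \t\r\n,":
                j += 1
            patterns.append(s[i:j])
            i = j
    return patterns

print(split_glob_list('"foo bar"qux'))    # two patterns, not one
```

Running it on the example from the text yields `['foo bar', 'qux']`, confirming that `"foo bar"qux` is two patterns rather than one with an embedded space.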
Changes to www/glossary.md.
︙ | ︙ | |||
165 166 167 168 169 170 171 172 173 174 175 176 177 178 | recommend keeping them all in a single subdirectory such as "`~/fossils`" or "`%USERPROFILE%\Fossils`". A flat set of files suffices for simple purposes, but you may have use for something more complicated. This author uses a scheme like the following on mobile machines that shuttle between home and the office: ``` pikchr toggle indent box "~/museum/" fit move right 0.1 line right dotted move right 0.05 box invis "where one stores valuable fossils" ljust arrow down 50% from first box.s then right 50% | > | 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 | recommend keeping them all in a single subdirectory such as "`~/fossils`" or "`%USERPROFILE%\Fossils`". A flat set of files suffices for simple purposes, but you may have use for something more complicated. This author uses a scheme like the following on mobile machines that shuttle between home and the office: ``` pikchr toggle indent scale=0.8 box "~/museum/" fit move right 0.1 line right dotted move right 0.05 box invis "where one stores valuable fossils" ljust arrow down 50% from first box.s then right 50% |
︙ | ︙ | |||
430 431 432 433 434 435 436 | organizational tool well-suited to complicated documentation. * Your repository’s Home page is a good candidate for the wiki, as is documentation meant for use only with the current version of the repository’s contents. * If you are at all uncertain whether to use the wiki or the embedded | | | | | 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 | organizational tool well-suited to complicated documentation. * Your repository’s Home page is a good candidate for the wiki, as is documentation meant for use only with the current version of the repository’s contents. * If you are at all uncertain whether to use the wiki or the embedded documentation feature, prefer the latter, since it is more powerful and, with the addition of the [`/fileedit` feature][fef] in Fossil 2.12, it’s nearly as easy to use. (This very file is embedded documentation: clone [Fossil’s self-hosting repository][fshr] and you will find it as `www/glossary.md`.) [edoc]: ./embeddeddoc.wiki [fef]: ./fileedit-page.md |
︙ | ︙ |
Changes to www/grep.md.
︙ | ︙ | |||
43 44 45 46 47 48 49 | Fossil `grep` doesn’t support any of the GNU and BSD `grep` extensions. For instance, it doesn’t support the common `-R` extension to POSIX, which would presumably search a subtree of managed files. If Fossil does one day get this feature, it would have a different option letter, since `-R` in Fossil has a different meaning, by convention. Until then, you can get the same effect on systems with a POSIX shell like so: | | | 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 | Fossil `grep` doesn’t support any of the GNU and BSD `grep` extensions. For instance, it doesn’t support the common `-R` extension to POSIX, which would presumably search a subtree of managed files. If Fossil does one day get this feature, it would have a different option letter, since `-R` in Fossil has a different meaning, by convention. Until then, you can get the same effect on systems with a POSIX shell like so:

    $ fossil grep COMMAND: $(fossil ls src)

If you run that in a check-out of the [Fossil self-hosting source repository][fshsr], that returns the first line of the built-in documentation for each Fossil command, across all historical versions. Fossil `grep` has extensions relative to these other `grep` standards, such as `--verbose` to print each checkin ID considered, regardless of
︙ | ︙ |
Changes to www/hashes.md.
1 2 3 4 5 6 | # Hashes: Fossil Artifact Identification All artifacts in Fossil are identified by a unique hash, currently using [the SHA3 algorithm by default][hpol], but historically using the SHA1 algorithm: | > | < | | > | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 | # Hashes: Fossil Artifact Identification All artifacts in Fossil are identified by a unique hash, currently using [the SHA3 algorithm by default][hpol], but historically using the SHA1 algorithm: <table border="1" cellspacing="0" cellpadding="10"> <tr><th>Algorithm</th><th>Raw Bits</th> <th>Hexadecimal digits</th></tr> <tr><td>SHA3-256</td> <td>256</td> <td>64</td></tr> <tr><td>SHA1</td> <td>160</td> <td>40</td></tr> </table> There are many types of artifacts in Fossil: commits (a.k.a. check-ins), tickets, ticket comments, wiki articles, forum postings, file data belonging to check-ins, etc. ([More info...](./concepts.wiki#artifacts)). There is a loose hierarchy of terms used instead of “hash” in various parts of the Fossil UI, which we cover in the sections below. |
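The raw-bits versus hex-digits relationship in the table is easy to confirm with Python’s standard `hashlib` — shown purely as an illustration of the arithmetic (each hexadecimal digit encodes 4 bits); Fossil has these algorithms built in and does not use Python:

```python
import hashlib

# 256 bits / 4 bits per hex digit = 64 digits;
# 160 bits / 4 bits per hex digit = 40 digits.
sha3 = hashlib.sha3_256(b"example artifact").hexdigest()
sha1 = hashlib.sha1(b"example artifact").hexdigest()

print(len(sha3))  # 64
print(len(sha1))  # 40
```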
︙ | ︙ |
Changes to www/hashpolicy.wiki.
︙ | ︙ | |||
166 167 168 169 170 171 172 | repositories can be overridden using the "--sha1" option to the "fossil new" command. If you are still on Fossil 2.1 through 2.9 but you want Fossil to go ahead and start using SHA3 hashes, change the hash policy to "sha3" using a command like this: | | | | 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 | repositories can be overridden using the "--sha1" option to the "fossil new" command. If you are still on Fossil 2.1 through 2.9 but you want Fossil to go ahead and start using SHA3 hashes, change the hash policy to "sha3" using a command like this: <blockquote><verbatim> fossil hash-policy sha3 </verbatim></blockquote> The next check-in will use a SHA3 hash, so that when that check-in is pushed to colleagues, their clones will include the new SHA3-named artifact, so their local Fossil instances will automatically convert their clones to "sha3" mode as well. Of course, if some members of your team stubbornly refuse to upgrade past |
︙ | ︙ |
Changes to www/hints.wiki.
︙ | ︙ | |||
35 36 37 38 39 40 41 | on in the Fossil repository on 2008-01-01, visit [/timeline?c=2008-01-01]. 7. Further to the previous two hints, there are lots of query parameters that you can add to timeline pages. The available query parameters are tersely documented [/help?cmd=/timeline | here]. | | | | < | 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 | on in the Fossil repository on 2008-01-01, visit [/timeline?c=2008-01-01]. 7. Further to the previous two hints, there are lots of query parameters that you can add to timeline pages. The available query parameters are tersely documented [/help?cmd=/timeline | here]. 8. You can run "[/help?cmd=test-diff | fossil test-diff --tk $file1 $file2]" to get a pop-up window with side-by-side diffs of two files, even if neither of the two files is part of any Fossil repository. Note that this command is "test-diff", not "diff". 9. On web pages showing the content of a file (for example [/artifact/c7dd1de9f]) you can manually add a query parameter of the form "ln=FROM,TO" to the URL that will cause the range of lines indicated to be highlighted. This is useful in pointing out a few lines of code using a hyperlink in an email or text message. Example: |
︙ | ︙ |
Changes to www/image-format-vs-repo-size.md.
︙ | ︙ | |||
157 158 159 160 161 162 163 | Since programs that produce and consume binary-compressed data files often make it either difficult or impossible to work with the uncompressed form, we want an automated method for producing the uncompressed form to make Fossil happy while still having the compressed form to keep our content creation applications happy. This `Makefile` should[^makefile] do that for BMP, PNG, SVG, and XLSX files: | | | | | | | | | | | | | | | | | | | | 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 | Since programs that produce and consume binary-compressed data files often make it either difficult or impossible to work with the uncompressed form, we want an automated method for producing the uncompressed form to make Fossil happy while still having the compressed form to keep our content creation applications happy. This `Makefile` should[^makefile] do that for BMP, PNG, SVG, and XLSX files:

    .SUFFIXES: .bmp .png .svg .svgz

    .svgz.svg:
        gzip -dc < $< > $@
    .svg.svgz:
        gzip -9c < $< > $@
    .bmp.png:
        convert -quality 95 $< $@
    .png.bmp:
        convert $< $@

    SS_FILES := $(wildcard spreadsheet/*)

    all: $(SS_FILES) illus.svg image.bmp doc-big.pdf

    reconstitute: illus.svgz image.png
        ( cd spreadsheet ; zip -9 ../spreadsheet.xlsx * )
        qpdf doc-big.pdf doc-small.pdf

    $(SS_FILES): spreadsheet.xlsx
        unzip -o $< -d spreadsheet

    doc-big.pdf: doc-small.pdf
        qpdf --stream-data=uncompress $< $@

This `Makefile` allows you to treat the compressed version as the process input, but to actually check in only the changes against the uncompressed version by typing “`make`” before “`fossil ci`”. This is not actually an extra step in practice, since if you’ve got a `Makefile`-based project, you should be building — and testing — it before checking each change in anyway!
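The reason this dance is worth the trouble can be sketched in a few lines of Python (an illustration with made-up data, using `zlib` — the same DEFLATE compression found inside PNG, SVGZ, XLSX, and PDF streams): a small edit to the uncompressed form leaves a long shared prefix *and* suffix for delta compression to exploit, while the same edit to the compressed form re-encodes everything from the change point onward.

```python
import zlib

# A fake "document": highly repetitive, like the XML inside an XLSX member.
original = b"<row><cell>42</cell></row>\n" * 500
edited = bytearray(original)
edited[5000:5002] = b"99"          # a tiny, localized two-byte edit
edited = bytes(edited)

def common_prefix(x, y):
    n = 0
    while n < min(len(x), len(y)) and x[n] == y[n]:
        n += 1
    return n

def common_suffix(x, y):
    n = 0
    while n < min(len(x), len(y)) and x[-1 - n] == y[-1 - n]:
        n += 1
    return n

ca = zlib.compress(original, 9)
cb = zlib.compress(edited, 9)

# Uncompressed, the versions re-synchronize right after the edit, so a
# delta between them is tiny: long shared prefix AND long shared suffix.
print(common_prefix(original, edited), common_suffix(original, edited))  # 5000 8498

# Compressed, everything from the edit onward is re-encoded, so the
# shared suffix -- what delta compression feeds on -- collapses.
print(common_suffix(ca, cb), "shared trailing bytes of", len(ca))
```

This is the effect the Makefile works around: store the delta-friendly uncompressed form in Fossil, and regenerate the compressed form on demand.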
︙ | ︙ |
Changes to www/index.wiki.
|
| | | | | | < | | | | | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 | <title>Home</title> <h3>What Is Fossil?</h3> <div style='float:right;border:2px solid #446979;padding:0 15px 10px 0;margin:0 0 10px 10px;'> <ul style='margin-left: -10px;'> <li> [/uv/download.html | Download] <li> [./quickstart.wiki | Quick Start] <li> [./build.wiki | Install] <li> [https://fossil-scm.org/forum | Support/Forum ] <li> [./hints.wiki | Tips & Hints] <li> [./changes.wiki | Change Log] <li> [../COPYRIGHT-BSD2.txt | License] <li> [./userlinks.wiki | User Links] <li> [./hacker-howto.wiki | Hacker How-To] <li> [./fossil-v-git.wiki | Fossil vs. Git] <li> [./permutedindex.html | Doc Index] </ul> <p style="text-align:center"><img src="fossil3.gif" alt="Fossil logo"></p> </div> Fossil is a simple, high-reliability, distributed software configuration management system with these advanced features: 1. <b>Project Management</b> - In addition to doing [./concepts.wiki | distributed version control] like Git and Mercurial, Fossil also supports [./bugtheory.wiki | bug tracking], [./wikitheory.wiki | wiki], [./forum.wiki | forum], [./alerts.md|email alerts], [./chat.md | chat], and [./event.wiki | technotes]. 2. <b>Built-in Web Interface</b> - Fossil has a built-in, [/skins | themeable], [./serverext.wiki | extensible], and intuitive [./webui.wiki | web interface] with a rich variety of information pages ([./webpage-ex.md|examples]) promoting situational awareness. <br><br> This entire website is just a running instance of Fossil. The pages you see here are all [./wikitheory.wiki | wiki] or [./embeddeddoc.wiki | embedded documentation] or (in the case of the [/uv/download.html|download] page) [./unvers.wiki | unversioned files]. 
When you clone Fossil from one of its [./selfhost.wiki | self-hosting repositories], you get more than just source code - you get this entire website. 3. <b>All-in-one</b> - Fossil is a single self-contained, stand-alone executable. To install, simply download a [/uv/download.html | precompiled binary] for Linux, Mac, or Windows and put it on your $PATH. [./build.wiki | Easy-to-compile source code] is also available. 4. <b>Self-host Friendly</b> - Stand up a project website in minutes using [./server/ | a variety of techniques]. Fossil is CPU and memory efficient. Most projects can be hosted comfortably on a $5/month VPS or a Raspberry Pi. You can also set up an automatic [./mirrortogithub.md | GitHub mirror]. 5. <b>Simple Networking</b> - Fossil uses ordinary HTTPS (or SSH if you prefer) for network communications, so it works fine from behind firewalls and [./quickstart.wiki#proxy|proxies]. The protocol is [./stats.wiki | bandwidth efficient] to the point that Fossil can be used comfortably over dial-up, weak 3G, or airliner Wifi. 6. <b>Autosync</b> - Fossil supports [./concepts.wiki#workflow | "autosync" mode] which helps to keep projects moving forward by reducing the amount of needless [./branching.wiki | forking and merging] often associated with distributed projects. 7. <b>Robust & Reliable</b> - Fossil stores content using an [./fileformat.wiki | enduring file format] in an SQLite database so that transactions are atomic even if interrupted by a power loss or system crash. Automatic [./selfcheck.wiki | self-checks] verify that all aspects of the repository are consistent prior to each commit. 8. <b>Free and Open-Source</b> - [../COPYRIGHT-BSD2.txt|2-clause BSD license]. <hr> <h3>Latest Release: 2.23 ([/timeline?c=version-2.23|2023-11-01])</h3> * [/uv/download.html|Download] * [./changes.wiki#v2_23|Change Summary] * [/timeline?p=version-2.23&bt=version-2.22&y=ci|Check-ins in version 2.23] |
︙ | ︙ |
Changes to www/inout.wiki.
1 2 3 4 5 6 7 8 9 10 11 12 | <title>Import And Export</title> Fossil has the ability to import and export repositories from and to [http://git-scm.com/ | Git]. And since most other version control systems will also import/export from Git, that means that you can import/export a Fossil repository to most version control systems using Git as an intermediary. <h2>Git → Fossil</h2> To import a Git repository into Fossil, say something like: | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 | <title>Import And Export</title> Fossil has the ability to import and export repositories from and to [http://git-scm.com/ | Git]. And since most other version control systems will also import/export from Git, that means that you can import/export a Fossil repository to most version control systems using Git as an intermediary. <h2>Git → Fossil</h2> To import a Git repository into Fossil, say something like: <blockquote><pre> cd git-repo git fast-export --all | fossil import --git new-repo.fossil </pre></blockquote> The 3rd argument to the "fossil import" command is the name of a new Fossil repository that is created to hold the Git content. The --git option is not actually required. The git-fast-export file format is currently the only VCS interchange format that Fossil understands. But |
︙ | ︙ | |||
56 57 58 59 60 61 62 | any dependency on the amount of data involved. <h2>Fossil → Git</h2> To convert a Fossil repository into a Git repository, run commands like this: | | | | 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 | any dependency on the amount of data involved. <h2>Fossil → Git</h2> To convert a Fossil repository into a Git repository, run commands like this: <blockquote><pre> git init new-repo cd new-repo fossil export --git ../repo.fossil | git fast-import </pre></blockquote> In other words, create a new Git repository, then pipe the output from the "fossil export --git" command into the "git fast-import" command. Note that the "fossil export --git" command only exports the versioned files. Tickets and wiki and events are not exported, since Git does not understand those concepts. |
︙ | ︙ | |||
95 96 97 98 99 100 101 | artifacts which are known by both Git and Fossil to exist at a given point in time. To illustrate, consider the example of a remote Fossil repository that a user wants to import into a local Git repository. First, the user would clone the remote repository and import it into a new Git repository: | | | | | | | | 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 | artifacts which are known by both Git and Fossil to exist at a given point in time. To illustrate, consider the example of a remote Fossil repository that a user wants to import into a local Git repository. First, the user would clone the remote repository and import it into a new Git repository: <blockquote><pre> fossil clone /path/to/remote/repo.fossil repo.fossil mkdir repo cd repo fossil open ../repo.fossil mkdir ../repo.git cd ../repo.git git init . fossil export --git --export-marks ../repo/fossil.marks \ ../repo.fossil | git fast-import \ --export-marks=../repo/git.marks </pre></blockquote> Once the import has completed, the user would need to <tt>git checkout trunk</tt>. At any point after this, new changes can be imported from the remote Fossil repository: <blockquote><pre> cd ../repo fossil pull cd ../repo.git fossil export --git --import-marks ../repo/fossil.marks \ --export-marks ../repo/fossil.marks \ ../repo.fossil | git fast-import \ --import-marks=../repo/git.marks \ --export-marks=../repo/git.marks </pre></blockquote> Changes in the Git repository can be exported to the Fossil repository and then pushed to the remote: <blockquote><pre> git fast-export --import-marks=../repo/git.marks \ --export-marks=../repo/git.marks --all | fossil import --git \ --incremental --import-marks ../repo/fossil.marks \ --export-marks ../repo/fossil.marks ../repo.fossil cd ../repo fossil push </pre></blockquote> |
Changes to www/javascript.md.
︙ | ︙ | |||
319 320 321 322 323 324 325 | diff them” feature. [wt]: https://fossil-scm.org/home/timeline ### <a id="wedit"></a>The New Wiki Editor | | | 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 | diff them” feature. [wt]: https://fossil-scm.org/home/timeline ### <a id="wedit"></a>The New Wiki Editor The [new wiki editor][fwt] added in Fossil 2.12 has many new features, a few of which are impossible to get without use of JavaScript. First, it allows in-browser previews without losing client-side editor state, such as where your cursor is. With the old editor, you had to re-locate the place you were last editing on each preview, which would reduce the incentive to use the preview function. In the new wiki editor, you just click the Preview tab to see how Fossil interprets your |
︙ | ︙ | |||
353 354 355 356 357 358 359 | this new editor was created, replacing it. If someone rescues that feature, merging it in with the new editor, it will doubtless require JavaScript in order to react to editor button clicks like the “**B**” button, meaning “make \[selected\] text boldface.” There is no standard WYSIWYG editor component in browsers, doubtless because it’s relatively straightforward to create one using JavaScript. | | | | | | | | | | | | | | | | | | | 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 | this new editor was created, replacing it. If someone rescues that feature, merging it in with the new editor, it will doubtless require JavaScript in order to react to editor button clicks like the “**B**” button, meaning “make \[selected\] text boldface.” There is no standard WYSIWYG editor component in browsers, doubtless because it’s relatively straightforward to create one using JavaScript. _Graceful Fallback:_ Unlike in the Fossil 2.11 and earlier days, there is no longer a script-free wiki editor mode. This is not from lack of desire, only because the person who wrote the new wiki editor didn’t want to maintain three different editors. (New Ajaxy editor, old script-free HTML form based editor, and the old WYSIWYG JavaScript-based editor.) If someone wants to implement a `<noscript>` alternative to the new wiki editor, we will likely accept that [contribution][cg] as long as it doesn’t interfere with the new editor. (The same goes for adding a WYSIWYG mode to the new Ajaxy wiki editor.) _Workaround:_ You don’t have to use the browser-based wiki editor to maintain your repository’s wiki at all. Fossil’s [`wiki` command][fwc] lets you manipulate wiki documents from the command line. 
For example, consider this Vi based workflow: ```shell $ vi 'My Article.wiki' # begin work on new article ...write, write, write... :w # save changes to disk copy :!fossil wiki create 'My Article' '%' # current file (%) to new article ...write, write, write some more... :w # save again :!fossil wiki commit 'My Article' '%' # update article from disk :q # done writing for today ....days later... $ vi # work sans named file today :r !fossil wiki export 'My Article' - # pull article text into vi buffer ...write, write, write yet more... :w !fossil wiki commit - # vi buffer updates article ``` Extending this concept to other text editors is an exercise left to the reader. [fwc]: /help?cmd=wiki [fwt]: ./wikitheory.wiki ### <a id="fedit"></a>The File Editor Fossil 2.12 adds the [optional file editor feature][fedit], which works much like [the new wiki editor](#wedit), only on files committed to the repository. The original designed purpose for this feature is to allow [embedded documentation][edoc] to be interactively edited in the same way that wiki articles can be. (Indeed, the associated `fileedit-glob` feature allows you to restrict the editor to working *only* on files that can be |
︙ | ︙ | |||
435 436 437 438 439 440 441 | per [the `/file` docs](/help?cmd=/file). _Potential Better Workaround:_ Someone sufficiently interested could [provide a patch][cg] to add a `<noscript>` wrapped HTML button that would reload the page with this parameter included/excluded to implement the toggle via a server round-trip. | | | | | 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 | per [the `/file` docs](/help?cmd=/file). _Potential Better Workaround:_ Someone sufficiently interested could [provide a patch][cg] to add a `<noscript>` wrapped HTML button that would reload the page with this parameter included/excluded to implement the toggle via a server round-trip. As of Fossil 2.12, there is also a JavaScript-based interactive method for selecting a range of lines by clicking the line numbers when they’re visible, then copying the resulting URL to share your selection with others. _Workaround:_ These interactive features would be difficult and expensive (in terms of network I/O) to implement without JavaScript. A far simpler alternative is to manually edit the URL, per above. [mainc]: https://fossil-scm.org/home/artifact?ln&name=87d67e745 |
︙ | ︙ | |||
463 464 465 466 467 468 469 | in one box, you probably want to examine the same point on that line in the other box. _Graceful Fallback:_ Manually scroll both boxes to sync their views. ### <a id="diffcontext"></a>Diff Context Loading | | | 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 | in one box, you probably want to examine the same point on that line in the other box. _Graceful Fallback:_ Manually scroll both boxes to sync their views. ### <a id="diffcontext"></a>Diff Context Loading As of version 2.17, fossil adds the ability for the diff views to dynamically load more lines of context around changed blocks. The UI controls for this feature are injected using JavaScript when the page initializes and make use of XHR requests to fetch data from the fossil instance. _Graceful Fallback:_ The UI controls for this feature do not appear when JS is unavailable, leaving the user with the "legacy" static diff |
︙ | ︙ | |||
564 565 566 567 568 569 570 | patch to do this][cg] may well be accepted. Since this is not a *necessary* Fossil feature, an interested user is unlikely to get the core developers to do this work for them. ### <a id="chat"></a>Chat | | | | | 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 | patch to do this][cg] may well be accepted. Since this is not a *necessary* Fossil feature, an interested user is unlikely to get the core developers to do this work for them. ### <a id="chat"></a>Chat The [chat feature](./chat.md) added in Fossil 2.14 is deeply dependent on JavaScript. There is no obvious way to do this sort of thing without active client-side code of some sort. _Potential Workaround:_ It would not be especially difficult for someone sufficiently motivated to build a Fossil chat gateway, connecting to IRC, Jabber, etc. The messages are stored in the repository’s `chat` table with monotonically increasing IDs, so a poller that did something like SELECT xfrom, xmsg FROM chat WHERE msgid > 1234; …would pull the messages submitted since the last poll. Making the gateway bidirectional should be possible as well, as long as it properly uses SQLite transactions. ### <a id="brlist"></a>List of branches Since Fossil 2.16 the [`/brlist`](/brlist) page uses JavaScript to enable selection of several branches for further study via `/timeline`. Client-side script interactively responds to checkboxes' events and constructs a special hyperlink in the submenu. Clicking this hyperlink loads a `/timeline` page that shows only these selected branches (and the related check-ins). _Potential Workaround:_ A user can manually construct an appropriate |
︙ | ︙ |
Changes to www/loadmgmt.md.
︙ | ︙ | |||
23 24 25 26 27 28 29 | due to excessive requests to expensive pages: 1. An optional cache is available that remembers the 10 most recently requested `/zip` or `/tarball` pages and returns the precomputed answer if the same page is requested again. 2. Page requests can be configured to fail with a | | | < < < | | | | 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 | due to excessive requests to expensive pages: 1. An optional cache is available that remembers the 10 most recently requested `/zip` or `/tarball` pages and returns the precomputed answer if the same page is requested again. 2. Page requests can be configured to fail with a “[503 Server Overload][503]” HTTP error if an expensive request is received while the host load average is too high. Both of these load-control mechanisms are turned off by default, but they are recommended for high-traffic sites. The webpage cache is activated using the [`fossil cache init`](/help/cache) command-line on the server. Add a `-R` option to specify the specific repository for which to enable caching. If running this command as root, be sure to “`chown`” the cache database to give the Fossil server write permission for the user ID of the web server; this is a separate file in the same directory and with the same name as the repository but with the “`.fossil`” suffix changed to “`.cache`”. To activate the server load control feature visit the Admin → Access setup page in the administrative web interface; in the “**Server Load Average Limit**” box enter the load average threshold above which “503 Server Overload” replies will be issued for expensive requests. On the self-hosting Fossil server, that value is set to 1.5, but you could easily set it higher on a multi-core server. 
The maximum load average can also be set on the command line using commands like this: fossil set max-loadavg 1.5 fossil all set max-loadavg 1.5 The second form is especially useful for changing the maximum load average simultaneously on a large number of repositories. Note that this load-average limiting feature is only available on operating systems that support the [`getloadavg()`][gla] API. Most modern Unix systems have this interface, but Windows does not, so the feature will not work on Windows. Because Linux implements `getloadavg()` by accessing the `/proc/loadavg` virtual file, you will need to make sure `/proc` is available to the Fossil server. The most common reason for it to not be available is that you are running a Fossil instance [inside a `chroot(2)` jail](./chroot.md) and you have not mounted the `/proc` virtual file system inside that jail. On the [self-hosting Fossil repositories][sh], this was accomplished by adding a line to the `/etc/fstab` file: chroot_jail_proc /home/www/proc proc ro 0 0 The `/home/www/proc` pathname should be adjusted so that the `/proc` component is at the root of the chroot jail, of course. To see if the load-average limiter is functional, visit the [`/test_env`][hte] page of the server to view the current load average. If the value for the load average is greater than zero, that means that |
︙ | ︙ |
Changes to www/makefile.wiki.
︙ | ︙ | |||
144 145 146 147 148 149 150 | The VERSION.h header file is generated by a C program:
tools/mkversion.c.  To run the VERSION.h generator, first compile the
tools/mkversion.c source file into a command-line program (named
"mkversion.exe") then run:
| | | | | 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 | The VERSION.h header file is generated by a C program:
tools/mkversion.c.  To run the VERSION.h generator, first compile the
tools/mkversion.c source file into a command-line program (named
"mkversion.exe") then run:

<blockquote><pre>
mkversion.exe manifest.uuid manifest VERSION >VERSION.h
</pre></blockquote>

The pathnames in the above command might need to be adjusted to get the
directories right.  The point is that the manifest.uuid, manifest, and
VERSION files in the root of the source tree are the three arguments and
the generated VERSION.h file appears on standard output.

The builtin_data.h header file is generated by a C program:
tools/mkbuiltin.c.  The builtin_data.h file contains C-language byte-array
definitions for the content of resource files used by Fossil.  To generate
the builtin_data.h file, first compile the mkbuiltin.c program, then run:

<blockquote><pre>
mkbuiltin.exe diff.tcl <i>OtherFiles...</i> >builtin_data.h
</pre></blockquote>

At the time of this writing, the "diff.tcl" script (a Tcl/Tk script used
to implement the --tk option of the diff command) is the only resource
file processed using mkbuiltin.exe.  However, new resources will likely be
added using this facility in future versions of Fossil.

<h1 id="preprocessing">4.0 Preprocessing</h1>
︙ | ︙ | |||
183 184 185 186 187 188 189 | The mkindex program scans the "src.c" source files looking for special comments that identify routines that implement various Fossil commands, web interface methods, and help text comments. The mkindex program generates some C code that Fossil uses in order to dispatch commands and HTTP requests and to show on-line help. Compile the mkindex program from the mkindex.c source file. Then run: | | | | | | 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 | The mkindex program scans the "src.c" source files looking for special comments that identify routines that implement various Fossil commands, web interface methods, and help text comments. The mkindex program generates some C code that Fossil uses in order to dispatch commands and HTTP requests and to show on-line help. Compile the mkindex program from the mkindex.c source file. Then run: <blockquote><pre> ./mkindex src.c >page_index.h </pre></blockquote> Note that "src.c" in the above is a stand-in for the (79) regular source files of Fossil - all source files except for the exceptions described in section 2.0 above. The output of the mkindex program is a header file that is #include-ed by the main.c source file during the final compilation step. <h2>4.2 The translate preprocessor</h2> The translate preprocessor looks for lines of source code that begin with "@" and converts those lines into string constants or (depending on context) into special "printf" operations for generating the output of an HTTP request. The translate preprocessor is a simple C program whose sources are in the translate.c source file. 
The translate preprocessor is run on each of the other ordinary source
files separately, like this:

<blockquote><pre>
./translate src.c >src_.c
</pre></blockquote>

In this case, the "src.c" file represents any single source file from
the set of ordinary source files as described in section 2.0 above.
Note that each source file is translated separately.  By convention,
the names of the translated source files are the names of the input
sources with a single "_" character at the end.  But a new makefile
can use any naming convention it wants - the "_" is not critical to
the build process.
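To make the "@"-line convention concrete, here is a toy re-implementation of the idea in Python. It is only a sketch: the real translate.c handles considerably more (string-literal contexts, substitution of C expressions, and so on), and the emitted `cgi_printf` call name is illustrative.

```python
def translate(lines):
    """Toy version of Fossil's translate step: a line whose first
    character is "@" becomes a print-style output call; every other
    line passes through unchanged."""
    out = []
    for line in lines:
        if line.startswith("@"):
            text = line[1:]
            # Escape backslashes and quotes for a C string literal
            # (the real tool does much more than this).
            text = text.replace("\\", "\\\\").replace('"', '\\"')
            out.append('cgi_printf("%s\\n");' % text)
        else:
            out.append(line)
    return out

# A tiny function body mixing ordinary C with "@" output lines:
result = translate(['void hello(void){', '@ <h1>Hello</h1>', '}'])
```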
︙ | ︙ | |||
233 234 235 236 237 238 239 | The makeheaders program is run once.  It scans all input source files
and generates header files for each one.  Note that the sqlite3.c and
shell.c source files are not scanned by makeheaders.  Makeheaders only
runs over "ordinary" source files, not the exceptional source files.
However, makeheaders also uses some extra header files as input.  The
general format is like this:
| | | | 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 | The makeheaders program is run once.  It scans all input source files
and generates header files for each one.  Note that the sqlite3.c and
shell.c source files are not scanned by makeheaders.  Makeheaders only
runs over "ordinary" source files, not the exceptional source files.
However, makeheaders also uses some extra header files as input.  The
general format is like this:

<blockquote><pre>
makeheaders src_.c:src.h sqlite3.h th.h VERSION.h
</pre></blockquote>

In the example above, the "src_.c" and "src.h" names represent all of the
(79) ordinary C source files, each as a separate argument.

<h1>5.0 Compilation</h1>

After all generated files have been created and all ordinary source files
︙ | ︙ | |||
302 303 304 305 306 307 308 | However, in practice it is instead recommended to add a respective
configure option for the target platform and then perform a clean
build.  This way the Debug flags are consistently applied across the
whole build process.  For example, use these Debug flags in addition
to other flags passed to the configure scripts:

On Linux, *NIX and similar platforms:
| | | | | 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 | However, in practice it is instead recommended to add a respective
configure option for the target platform and then perform a clean
build.  This way the Debug flags are consistently applied across the
whole build process.  For example, use these Debug flags in addition
to other flags passed to the configure scripts:

On Linux, *NIX and similar platforms:

<blockquote><pre>
./configure --fossil-debug
</pre></blockquote>

On Windows:

<blockquote><pre>
win\buildmsvc.bat FOSSIL_DEBUG=1
</pre></blockquote>

The resulting fossil binary could then be loaded into a platform-specific
debugger.  Source files displayed in the debugger correspond to the ones
generated from the translation stage of the build process; that is, they
are what was actually compiled into the object files.

<h1>8.0 See Also</h1>

  *  [./tech_overview.wiki | A Technical Overview Of Fossil]
  *  [./adding_code.wiki | How To Add Features To Fossil]
Changes to www/mirrortogithub.md.
︙ | ︙ | |||
9 10 11 12 13 14 15 | 2. Create a new project. GitHub will ask you if you want to prepopulate your project with various things like a README file. Answer "no" to everything. You want a completely blank project. GitHub will then supply you with a URL for your project that will look something like this: | | | | | 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 | 2. Create a new project. GitHub will ask you if you want to prepopulate your project with various things like a README file. Answer "no" to everything. You want a completely blank project. GitHub will then supply you with a URL for your project that will look something like this: https://github.com/username/project.git 3. Back on your workstation, move to a checkout for your Fossil project and type: <blockquote> <pre> $ fossil git export /path/to/git/repo --autopush \ https://<font color="orange">username</font>:<font color="red">password</font>@github.com/username/project.git </pre> </blockquote> In place of the <code>/path/to...</code> argument above, put in some directory name that is <i>outside</i> of your Fossil checkout. If you keep multiple Fossil checkouts in a directory of their own, consider using <code>../git-mirror</code> to place the Git export |
︙ | ︙ | |||
56 57 58 59 60 61 62 | 5. And you are done! Assuming everything worked, your project is now mirrored on GitHub. 6. Whenever you update your project, simply run this command to update the mirror: | | | | 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 | 5. And you are done! Assuming everything worked, your project is now mirrored on GitHub. 6. Whenever you update your project, simply run this command to update the mirror: $ fossil git export Unlike with the first time you ran that command, you don’t need the remaining arguments, because Fossil remembers those things. Subsequent mirror updates should usually happen in a fraction of a second. 7. To see the status of your mirror, run: $ fossil git status ## Notes: * Unless you specify --force, the mirroring only happens if the Fossil repo has changed, with Fossil reporting "no changes", because Fossil does not care about the success or failure of the mirror run. If a mirror run failed (for example, due to an incorrect password, or a transient |
︙ | ︙ | |||
98 99 100 101 102 103 104 | subsequent invocations of "`fossil git export`" will know where you left off the last time and what new content needs to be moved over into Git. Be careful not to mess with the `.mirror_state` directory or any of its contents. Do not put those files under Git management. Do not edit or delete them. * The name of the "trunk" branch is automatically translated into "master" | | > | 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 | subsequent invocations of "`fossil git export`" will know where you left off the last time and what new content needs to be moved over into Git. Be careful not to mess with the `.mirror_state` directory or any of its contents. Do not put those files under Git management. Do not edit or delete them. * The name of the "trunk" branch is automatically translated into "master" in the Git mirror unless you give the `--mainbranch` option, added in Fossil 2.14. * Only check-ins and simple tags are translated to Git. Git does not support wiki or tickets or unversioned content or any of the other features of Fossil that make it so convenient to use, so those other elements cannot be mirrored in Git. * In Git, all tags must be unique. If your Fossil repository has the |
︙ | ︙ | |||
139 140 141 142 143 144 145 | ## <a id='ex1'></a>Example GitHub Mirrors As of this writing (2019-03-16) Fossil’s own repository is mirrored on GitHub at: | > | > | > | 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 | ## <a id='ex1'></a>Example GitHub Mirrors As of this writing (2019-03-16) Fossil’s own repository is mirrored on GitHub at: > <https://github.com/drhsqlite/fossil-mirror> In addition, an official Git mirror of SQLite is available: > <https://github.com/sqlite/sqlite> The Fossil source repositories for these mirrors are at <https://www2.fossil-scm.org/fossil> and <https://www2.sqlite.org/src>, respectively. Both repositories are hosted on the same VM at [Linode](https://www.linode.com). On that machine, there is a [cron job](https://linux.die.net/man/8/cron) that runs at 17 minutes after the hour, every hour that does: > /usr/bin/fossil sync -u -R /home/www/fossil/fossil.fossil /usr/bin/fossil sync -R /home/www/fossil/sqlite.fossil /usr/bin/fossil git export -R /home/www/fossil/fossil.fossil /usr/bin/fossil git export -R /home/www/fossil/sqlite.fossil The initial two "sync" commands pull in changes from the primary Fossil repositories for Fossil and SQLite. The last two lines export the changes to Git and push the results up to GitHub. |
Changes to www/mkindex.tcl.
︙ | ︙ | |||
164 165 166 167 168 169 170 | <li> <a href='quickstart.wiki'>Quick-start Guide</a> <li> <a href='$ROOT/help'>Built-in help for commands and webpages</a> <li> <a href='history.md'>Purpose and History of Fossil</a> <li> <a href='build.wiki'>Compiling and installing Fossil</a> <li> <a href='../COPYRIGHT-BSD2.txt'>License</a> <li> <a href='userlinks.wiki'>Miscellaneous Docs for Fossil Users</a> <li> <a href='hacker-howto.wiki'>Fossil Developer's Guide</a> | | | | 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 | <li> <a href='quickstart.wiki'>Quick-start Guide</a> <li> <a href='$ROOT/help'>Built-in help for commands and webpages</a> <li> <a href='history.md'>Purpose and History of Fossil</a> <li> <a href='build.wiki'>Compiling and installing Fossil</a> <li> <a href='../COPYRIGHT-BSD2.txt'>License</a> <li> <a href='userlinks.wiki'>Miscellaneous Docs for Fossil Users</a> <li> <a href='hacker-howto.wiki'>Fossil Developer's Guide</a> <ul><li><a href='$ROOT/wiki?name=Release Build How-To'>Release Build How-To</a>, a.k.a. how deliverables are built</li></ul> </li> <li> <a href='$ROOT/wiki?name=To+Do+List'>To Do List (Wiki)</a> <li> <a href='https://fossil-scm.org/fossil-book/'>Fossil book</a> </ul> <h2 id="pindex">Other Documents:</h2> <ul>} foreach entry $permindex { |
︙ | ︙ |
Changes to www/newrepo.wiki.
1 2 3 4 5 6 7 8 9 10 11 | <title>How To Create A New Fossil Repository</title> The [/doc/tip/www/quickstart.wiki|quickstart guide] explains how to get up and running with fossil. But once you're running, what can you do with it? This document will walk you through the process of creating a fossil repository, populating it with files, and then sharing it over the web. The first thing we need to do is create a fossil repository file: <verbatim> | | | | | | | | | > | | | | | | | | | | | | | | | | > | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 | <title>How To Create A New Fossil Repository</title> The [/doc/tip/www/quickstart.wiki|quickstart guide] explains how to get up and running with fossil. But once you're running, what can you do with it? This document will walk you through the process of creating a fossil repository, populating it with files, and then sharing it over the web. The first thing we need to do is create a fossil repository file: <verbatim> stephan@ludo:~/fossil$ fossil new demo.fossil project-id: 9d8ccff5671796ee04e60af6932aa7788f0a990a server-id: 145fe7d71e3b513ac37ac283979d73e12ca04bfe admin-user: stephan (initial password is ******) </verbatim> The numbers it spits out are unimportant (they are version numbers). Now we have an empty repository file named <tt>demo.fossil</tt>. There is nothing magical about the extension <tt>.fossil</tt> - it's just a convention. You may name your files anything you like. 
The first thing we normally want to do is to run fossil as a local server so that you can configure the access rights to the repo: <verbatim> stephan@ludo:~/fossil$ fossil ui demo.fossil </verbatim> The <tt>ui</tt> command starts up a server (with an optional <tt>-port NUMBER</tt> argument) and launches a web browser pointing at the fossil server. From there it takes just a few moments to configure the repo. Most importantly, go to the Admin menu, then the Users link, and set your account name and password, and grant your account all access privileges. (I also like to grant Clone access to the anonymous user, but that's personal preference.) Once you are done, kill the fossil server (with Ctrl-C or equivalent) and close the browser window. <blockquote> Tip: it is not strictly required to configure a repository this way, but if you are going to share a repo over the net then it is highly recommended. If you are only going to work with the repo locally, you can skip the configuration step and do it later if you decide you want to share your repo. </blockquote> The next thing we need to do is <em>open</em> the repository. To do so we create a working directory and then <tt>cd</tt> to it: <verbatim> stephan@ludo:~/fossil$ mkdir demo stephan@ludo:~/fossil$ cd demo stephan@ludo:~/fossil/demo$ fossil open ../demo.fossil stephan@ludo:~/fossil/demo$ </verbatim> That creates a file called <tt>_FOSSIL_</tt> in the current directory, and this file contains all kinds of fossil-related information about your local repository. You can ignore it for all purposes, but be sure not to accidentally remove it or otherwise damage it - it belongs to fossil, not you. The next thing we need to do is add files to our repository. As it happens, we have a few C source files lying around, which we'll simply copy into our working directory. <verbatim> stephan@ludo:~/fossil/demo$ cp ../csnip/*.{c,h} . 
stephan@ludo:~/fossil/demo$ ls clob.c clob.h clobz.c _FOSSIL_ mkdep.c test-clob.c tokenize_path.c tokenize_path.h vappendf.c vappendf.h </verbatim> Fossil doesn't know about those files yet. Telling fossil about a new file is a two-step process. First we <em>add</em> the file to the repository, then we <em>commit</em> the file. This is a familiar process for anyone who's worked with SCM systems before: <verbatim> stephan@ludo:~/fossil/demo$ fossil add *.{c,h} stephan@ludo:~/fossil/demo$ fossil commit -m "egg" New_Version: d1296b4a08b9f8b943bb6c73698e51eed23f8f91 </verbatim> We now have a working repository! The file <tt>demo.fossil</tt> is the central storage, and we can share it amongst an arbitrary number of trees. As a silly example: <verbatim> stephan@ludo:~/fossil/demo$ cd ~/fossil stephan@ludo:~/fossil$ mkdir demo2 stephan@ludo:~/fossil$ cd demo2 stephan@ludo:~/fossil/demo2$ fossil open ../demo.fossil ADD clob.c ADD clob.h ADD clobz.c ADD mkdep.c ADD test-clob.c ADD tokenize_path.c ADD tokenize_path.h ADD vappendf.c </verbatim> You may modify the repository (e.g. add, remove, or commit files) from both working directories, and doing so might be useful when working on a branch or experimental code. Making your repository available over the web is trivial to do. We assume you have some web space where you can store your fossil file and run a CGI script. If not, then this option is not for you. If you do, then here's how... Copy the fossil repository file to your web server (it doesn't matter where, really). In your <tt>cgi-bin</tt> (or equivalent) directory, create a file which looks like this: <verbatim> #!/path/to/fossil repository: /path/to/my_repo.fossil </verbatim> Make that script executable, and you're all ready to go: <verbatim> ~/www/cgi-bin> chmod +x myrepo.cgi </verbatim> Now simply point your browser to <tt>http://my.domain/cgi-bin/myrepo.cgi</tt> and you should be able to manage the repository from there. 
To check out a copy of your remote repository, use the <em>clone</em> command: <verbatim> stephan@ludo:~/fossil$ fossil clone \ http://MyAccountName:MyAccountPassword@my.domain/cgi-bin/myrepo.cgi </verbatim> Note that you should pass your fossil login name and password (as set via local server mode) during the clone - that ensures that fossil won't ask you for it on each commit! A clone is a local copy of a remote repository, and can be opened just like a local one (as shown above). It is treated identically to your local repository, with one very important difference. When you commit changes to a cloned remote repository, they will be pushed back to the remote repository. If you have <tt>autosync</tt> on then this sync happens automatically, otherwise you will need to use the |
︙ | ︙ |
Changes to www/password.wiki.
1 2 3 4 5 6 7 8 | <title>Fossil Password Management</title> Fossil handles user authentication using passwords. Passwords are unique to each repository. Passwords are not part of the persistent state of a project. Passwords are not versioned and are not transmitted from one repository to another during a sync. Passwords are local configuration information that can (and usually does) vary from one repository to the next within the same project. | > | 1 2 3 4 5 6 7 8 9 | <title>Fossil Password Management</title> <h1 align="center">Password Management</h1> Fossil handles user authentication using passwords. Passwords are unique to each repository. Passwords are not part of the persistent state of a project. Passwords are not versioned and are not transmitted from one repository to another during a sync. Passwords are local configuration information that can (and usually does) vary from one repository to the next within the same project. |
︙ | ︙ | |||
19 20 21 22 23 24 25 | The SHA1 hash in the USER.PW field is a hash of a string composed of
the project-code, the user login, and the user cleartext password.
Suppose user "alice" with password "asdfg" had an account on the
Fossil self-hosting repository.  Then the value of USER.PW for
alice would be the SHA1 hash of
| | | | | | 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 | The SHA1 hash in the USER.PW field is a hash of a string composed of
the project-code, the user login, and the user cleartext password.
Suppose user "alice" with password "asdfg" had an account on the
Fossil self-hosting repository.  Then the value of USER.PW for
alice would be the SHA1 hash of

<blockquote>
CE59BB9F186226D80E49D1FA2DB29F935CCA0333/alice/asdfg
</blockquote>

Note that by including the project-code and the login as part of the
hash, a different USER.PW value results even if two or more users on
the repository select the same "asdfg" password or if user alice
reuses the same password on multiple projects.

Whenever a password is changed using the web interface or using the
"user" command-line method, the new password is stored using the SHA1
encoding.  Thus, cleartext passwords will gradually migrate to become
SHA1 passwords.  All remaining cleartext passwords can be converted to
SHA1 passwords using the following command:

<blockquote><pre>
fossil test-hash-passwords <i>REPOSITORY-NAME</i>
</pre></blockquote>

Remember that converting from cleartext to SHA1 passwords is an
irreversible operation.

The only way to insert a new cleartext password into the USER table is
to do so manually using SQL commands.  For example:

<blockquote><pre>
UPDATE user SET pw='asdfg' WHERE login='alice';
</pre></blockquote>

Note that a password that is an empty string or NULL will disable all
login for that user. 
Thus, to lock a user out of the system, one has only to set their password to an empty string, using either the web interface or direct SQL manipulation of the USER table. Note also that the password field is essentially ignored for the special users named "anonymous", "developer", |
︙ | ︙ | |||
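The USER.PW construction described above is easy to reproduce. The following Python sketch implements the documented scheme (SHA1 over "project-code/login/password"); it is an illustration, not code taken from Fossil.

```python
import hashlib

def user_pw_hash(project_code, login, password):
    """Hex SHA1 of "project-code/login/password", the USER.PW encoding
    described in the text above."""
    material = "%s/%s/%s" % (project_code, login, password)
    return hashlib.sha1(material.encode("utf-8")).hexdigest()

# The example from the text: user "alice", password "asdfg" on the
# Fossil self-hosting repository (project code as shown in the document).
h = user_pw_hash("CE59BB9F186226D80E49D1FA2DB29F935CCA0333", "alice", "asdfg")
```

Because the project code and login are part of the hashed string, the same cleartext password yields a different USER.PW value for every user and every repository.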
114 115 116 117 118 119 120 | This means that when USER.PW holds a cleartext password, the login
card will work for both older and newer clients.  If the USER.PW on
the server only holds the SHA1 hash of the password, then only newer
clients will be able to authenticate to the server.

The client normally gets the login and password from the "remote URL".
| | | | | 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 | This means that when USER.PW holds a cleartext password, the login
card will work for both older and newer clients.  If the USER.PW on
the server only holds the SHA1 hash of the password, then only newer
clients will be able to authenticate to the server.

The client normally gets the login and password from the "remote URL".

<blockquote><pre>
http://<span style="color:blue">login:password</span>@servername.org/path
</pre></blockquote>

For older clients, the password is used for the shared secret as stated
in the URL and with no encoding.  For newer clients, the shared secret
is derived from the password by transforming the password using the SHA1
hash encoding described above.  However, if the first character of the
password is "*" (ASCII 0x2a) then the "*" is skipped and the rest of the
password is used directly as the shared secret without the SHA1 encoding.

<blockquote><pre>
http://<span style="color:blue">login:*password</span>@servername.org/path
</pre></blockquote>

This *-before-the-password trick can be used by newer clients to
sync against a legacy server that does not understand the new SHA1
password encoding.
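The client-side rule just described (use the password verbatim after a leading "*", otherwise apply the SHA1 encoding) can be sketched as follows. `shared_secret` is a hypothetical helper name, and the SHA1 encoding reuses the project-code/login/password construction documented earlier in this file.

```python
import hashlib

def shared_secret(password, login, project_code):
    """Derive the sync shared secret per the rule above: a leading "*"
    means "use the rest of the password verbatim"; otherwise apply the
    SHA1 encoding of "project-code/login/password"."""
    if password.startswith("*"):
        return password[1:]
    material = "%s/%s/%s" % (project_code, login, password)
    return hashlib.sha1(material.encode("utf-8")).hexdigest()

# Example project code taken from the alice/asdfg example in this document:
code = "CE59BB9F186226D80E49D1FA2DB29F935CCA0333"
```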
Changes to www/patchcmd.md.
# The "fossil patch" command

The "[fossil patch](/help?cmd=patch)" command is designed to transfer
uncommitted changes from one check-out to another, including
transferring those changes to other machines. For example, if you are
working on a Windows desktop and you want to test your changes on a
Linux server before you commit, you can use the "fossil patch push"
command to make a copy of all your changes on the remote Linux server:

> fossil patch push linuxserver:/path/to/checkout

In the example above, "linuxserver" is the name of the remote machine
and "/path/to/checkout" is an existing checkout directory for the same
project on the remote machine. The "fossil patch push" command works by
first creating a patch file, then transferring that patch file to the
remote machine using "ssh", then
︙
The "fossil patch push" and "fossil patch pull" commands will only work
if you have "ssh" available on the local machine and if "fossil" is on
the default PATH on the remote machine. To check if Fossil is installed
correctly on the remote, try a command like this:

> `ssh -T remote "fossil version"`

If the command above shows a recent version of Fossil, then you should
be set to go. If you get "fossil not found", or if the version shown is
too old, put a newer fossil executable on the default PATH. The default
PATH can be shown using:

> `ssh -T remote 'echo $PATH'`

### Custom PATH Caveat

On Unix-like systems, the init script for the user's login shell (e.g.
`~/.profile` or `~/.bash_profile`) may be configured to *not do
anything* when running under a non-interactive shell. Thus a fossil
binary installed to a custom directory might not be found. To allow
︙
The "fossil patch apply" command reads the database that is the patch
file and applies it to the local check-out. If a filename is given as an
argument, then the database is read from that file. If the argument is
"-" then the database is read from standard input. Hence the command:

> `fossil patch push remote:projectA`

is equivalent to:

> `fossil patch create - | ssh -T remote 'cd projectA;fossil patch apply -'`

Likewise, a command like this:

> `fossil patch pull remote:projB`

could be entered like this:

> `ssh -T remote 'cd projB;fossil patch create -' | fossil patch apply -`

The "fossil patch view" command just opens the database file and prints
a summary of its contents on standard output.
Changes to www/permutedindex.html.
︙
<li> <a href='quickstart.wiki'>Quick-start Guide</a>
<li> <a href='$ROOT/help'>Built-in help for commands and webpages</a>
<li> <a href='history.md'>Purpose and History of Fossil</a>
<li> <a href='build.wiki'>Compiling and installing Fossil</a>
<li> <a href='../COPYRIGHT-BSD2.txt'>License</a>
<li> <a href='userlinks.wiki'>Miscellaneous Docs for Fossil Users</a>
<li> <a href='hacker-howto.wiki'>Fossil Developer's Guide</a>
     <ul><li><a href='$ROOT/wiki?name=Release Build How-To'>Release Build
     How-To</a>, a.k.a. how deliverables are built</li></ul>
</li>
<li> <a href='$ROOT/wiki?name=To+Do+List'>To Do List (Wiki)</a>
<li> <a href='https://fossil-scm.org/fossil-book/'>Fossil book</a>
</ul>

<h2 id="pindex">Other Documents:</h2>
<ul>
<li><a href="tech_overview.wiki">A Technical Overview Of The Design And
Implementation Of Fossil</a></li>
︙
Changes to www/pikchr.md.
︙
arrow <-> down 70% from last box.s
box same "Pikchr" "Formatter" "(pikchr.c)" fit
```

The diagram above was generated by the following lines of Markdown:

~~~~~
``` pikchr
arrow right 200% "Markdown" "Source"
box rad 10px "Markdown" "Formatter" "(markdown.c)" fit
arrow right 200% "HTML+SVG" "Output"
arrow <-> down 70% from last box.s
box same "Pikchr" "Formatter" "(pikchr.c)" fit
```
~~~~~

See the [original Markdown source text of this document][4] for an
example of Pikchr in operation.

[4]: ./pikchr.md?mimetype=text/plain
︙
content is interpreted as Pikchr script and is replaced by the
equivalent SVG. So either of these work:

[fcb]: https://spec.commonmark.org/0.29/#fenced-code-blocks

~~~~~~
~~~ pikchr
arrow; box "Hello" "World!" fit; arrow
~~~

``` pikchr
arrow; box "Hello" "World!" fit; arrow
```
~~~~~~

For Fossil Wiki, the Pikchr code goes within
`<verbatim type="pikchr"> ... </verbatim>`. Normally `<verbatim>`
content is displayed verbatim. The extra `type="pikchr"` attribute
causes the content to be interpreted as Pikchr and replaced by SVG.

~~~~~~
<verbatim type="pikchr">
arrow; box "Hello" "World!" fit; arrow
</verbatim>
~~~~~~

## Extra Arguments In "Pikchr" Code Blocks

Extra formatting arguments can be included in the fenced code block
start tag, or in the "`type=`" attribute of `<verbatim>`, to change the
formatting of the diagram.
︙
Changes to www/pop.wiki.
<title>Principles Of Operation</title>
<h1 align="center">Principles Of Operation</h1>

This page attempts to define the foundational principles upon which
Fossil is built.

  *  A project consists of source files, wiki pages, trouble tickets,
     and control files (collectively "artifacts").
     All historical copies of all artifacts
︙
Changes to www/private.wiki.
<title>Private Branches</title>

By default, everything you check into a Fossil repository is shared to
all clones of that repository. In Fossil, you don't push and pull
individual branches; you push and pull everything all at once. But
sometimes users want to keep some private work that is not shared with
others. This might be a preliminary or experimental change that needs
further refinement before it is shared and which might never be shared
at all. To do this in Fossil, simply commit the change with the
--private command-line option:

<blockquote><pre>
fossil commit --private
</pre></blockquote>

The --private option causes Fossil to put the check-in in a new branch
named "private". That branch will not participate in subsequent clone,
sync, push, or pull operations. The branch will remain on the one local
repository where it was created. Note that you only use the --private
option for the first check-in that creates the private branch.
Additional check-ins into the private branch remain private
automatically.
<h2>Publishing Private Changes</h2> After additional work, one might desire to publish the changes associated with a private branch. The usual way to do this is to merge those changes into a public branch. For example: <blockquote><pre> fossil update trunk fossil merge private fossil commit </pre></blockquote> The private branch remains private and is not recorded as a parent in the merge manifest's P-card, but all of the changes associated with the private branch are now folded into the public branch and are hence visible to other users of the project. A private branch created with Fossil version 1.30 or newer can also be converted into a public branch using the <code>fossil publish</code> command. However, there is no way to convert a private branch created with older versions of Fossil into a public branch. The <code>--integrate</code> option of <code>fossil merge</code> (to close the merged branch when committing) is ignored for a private branch -- or the check-in manifest of the resulting merge child would include a <code>+close</code> tag referring to the leaf check-in on the private branch, and generate a missing artifact reference on repository clones without that private branch. It's still possible to close the leaf of the private branch (after committing the merge child) with the <code>fossil amend --close</code> command. <blockquote><small> Side note: For the same reason, i.e. so as not to generate a missing artifact reference on peer repositories without the private branch, the merge parent is not recorded when merging the private branch into a public branch. As a consequence, the web UI timeline does not draw a merge line from the private merge parent to the public merge child. 
Moreover, repeat private-to-public merge operations (without the [/help?cmd=merge | --force option]) with files added on the private branch may only work once, but later abort with "WARNING: no common ancestor for FILE", as the parent-child relationship is not recorded (see the [/doc/trunk/www/branching.wiki | Branching, Forking, Merging, and Tagging] document for more information). </small></blockquote> <h2>Syncing Private Branches</h2> A private branch normally stays on the one repository where it was originally created. But sometimes you want to share private branches with another repository. For example, you might be building a cross-platform application and have separate repositories on your Windows laptop, your Linux desktop, and your iMac. You can transfer private branches between these machines by using the --private option on the "sync", "push", "pull", and "clone" commands. For example, if you are running "fossil server" on your Linux box and you want to clone that repository to your Mac, including all private branches, use: <blockquote><pre> fossil clone --private http://user@linux.localnetwork:8080/ mac-clone.fossil </pre></blockquote> You'll have to supply a username and password in order for this to work. Fossil will not clone (or sync) private branches anonymously. By default, there are no users that can do private branch syncing. You will have to give a user the "Private" capability ("x") if you want them to be able to do this. |
︙
again, this restriction is designed to make it hard to accidentally push
private branches beyond their intended audience.

<h2>Purging Private Branches</h2>

You can remove all private branches from a repository using this
command:

<blockquote><pre>
fossil scrub --private
</pre></blockquote>

Note that the above is a permanent and irreversible change. You will be
asked to confirm before continuing. Once the private branches are
removed, they cannot be retrieved (unless you have synced them to
another repository.) So be careful with the command.

<h2>Additional Notes</h2>

All of the features above apply to <u>all</u> private branches in a
single repository at once. There is no mechanism in Fossil (currently)
that allows you to push, pull, clone, sync, or scrub an individual
private branch within a repository that contains multiple private
branches.
Changes to www/qandc.wiki.
<title>Questions And Criticisms</title>
<nowiki>
<h1 align="center">Questions And Criticisms</h1>

This page is a collection of real questions and criticisms that were
raised against Fossil early in its history (circa 2008). This page is
old and has not been kept up-to-date. See the
</nowiki>[/finfo?name=www/qandc.wiki|change history of this page]<nowiki>.

<b>Fossil sounds like a lot of reinvention of the wheel. Why create your
own DVCS when you could have reused mercurial?</b>

<blockquote>
I wrote fossil because none of the other available DVCSes met my needs.
If the other DVCSes do meet your needs, then you might not need fossil.
But they don't meet mine, and so fossil is necessary for me. Features
provided by fossil that one does not get with other DVCSes include:

<ol>
<li> Integrated <a href="wikitheory.wiki">wiki</a>. </li>
<li> Integrated <a href="bugtheory.wiki">bug tracking</a> </li>
<li> Immutable artifacts </li>
<li> Self-contained, stand-alone executable that can be run in a
     <a href="http://en.wikipedia.org/wiki/Chroot">chroot jail</a> </li>
<li> Simple, well-defined,
     <a href="fileformat.wiki">enduring file format</a> </li>
<li> Integrated <a href="webui.wiki">web interface</a> </li>
</ol>
</blockquote>

<b>Why should I use this rather than Trac?</b>

<blockquote>
<ol>
<li> Fossil is distributed. You can view and/or edit tickets, wiki, and
code while off network, then sync your changes later.
With Trac, you can only view and edit tickets and wiki while you are connected to the server. </li> <li> Fossil is lightweight and fully self-contained. It is very easy to setup on a low-resource machine. Fossil does not require an administrator.</li> <li> Fossil integrates code versioning into the same repository with wiki and tickets. There is nothing extra to add or install. Fossil is an all-in-one turnkey solution. </li> </ol> </blockquote> <b>Love the concept here. Anyone using this for real work yet?</b> <blockquote> Fossil is <a href="https://fossil-scm.org/">self-hosting</a>. In fact, this page was probably delivered to your web-browser via a working fossil instance. The same virtual machine that hosts https://fossil-scm.org/ (a <a href="http://www.linode.com/">Linode 720</a>) also hosts 24 other fossil repositories for various small projects. The documentation files for <a href="http://www.sqlite.org/">SQLite</a> are hosted in a fossil repository <a href="http://www.sqlite.org/docsrc/">here</a>, for example. Other projects are also adopting fossil. But fossil does not yet have the massive user base of git or mercurial. </blockquote> <b>Fossil looks like the bug tracker that would be in your Linksys Router's administration screen.</b> <blockquote> I take a pragmatic approach to software: form follows function. To me, it is more important to have a reliable, fast, efficient, enduring, and simple DVCS than one that looks pretty. On the other hand, if you have patches that improve the appearance of Fossil without seriously compromising its reliability, performance, and/or maintainability, I will be happy to accept them. Fossil is self-hosting. Send email to request a password that will let you push to the main fossil repository. </blockquote> <b>It would be useful to have a separate application that keeps the bug-tracking database in a versioned file. 
That file can then be pushed and pulled along with the rest of the
repository.</b>

<blockquote>
Fossil already <u>does</u> push and pull bugs along with the files in
your repository. But fossil does <u>not</u> track bugs as files in the
source tree. That approach to bug tracking was rejected for three
reasons:

<ol>
<li> Check-ins in fossil are immutable. So if
︙
of tickets to developers with check-in privileges and an installed copy
of the fossil executable. Casual passers-by on the internet should be
permitted to create tickets.
</ol>

These points are reiterated in the opening paragraphs of the
<a href="bugtheory.wiki">Bug-Tracking In Fossil</a> document.
</blockquote>

<b>Fossil is already the name of a plan9 versioned append-only
filesystem.</b>

<blockquote>
I did not know that. Perhaps they selected the name for the same reason
that I did: because a repository with immutable artifacts preserves an
excellent fossil record of a long-running project.
</blockquote>

<b>The idea of storing a repository in a binary blob like an SQLite
database terrifies me.</b>

<blockquote>
The use of SQLite to store the database is likely more stable and secure
than any other approach, due to the fact that SQLite is transactional.
Fossil also implements several internal
<a href="selfcheck.wiki">self-checks</a> to ensure that no information
is ever lost.
</blockquote>

<b>I am dubious of the benefits of including wikis and bug trackers
directly in the VCS - either they are under-featured compared to full
software like Trac, or the VCS is massively bloated compared to
Subversion or Bazaar.</b>

<blockquote>
I have no doubt that Trac has many features that fossil lacks. But that
is not the point.
Fossil has several key features that Trac lacks and that I need: most notably the fact that fossil supports disconnected operation. As for bloat: Fossil is a single self-contained executable. You do not need any other packages (diff, patch, merge, cvs, svn, rcs, git, python, perl, tcl, apache, sqlite, and so forth) in order to run fossil. Fossil runs just fine in a chroot jail all by itself. And the self-contained fossil executable is much less than 1MB in size. (Update 2015-01-12: Fossil has grown in the years since the previous sentence was written but is still much less than 2MB according to "size" when compiled using -Os on x64 Linux.) Fossil is the very opposite of bloat. </blockquote> </nowiki> |
Changes to www/quickstart.wiki.
<title>Fossil Quick Start Guide</title>
<h1 align="center">Fossil Quick Start</h1>

This is a guide to help you get started using the Fossil
[https://en.wikipedia.org/wiki/Distributed_version_control|Distributed
Version Control System] quickly and painlessly.

<h2 id="install">Installing</h2>

Fossil is a single self-contained C program. You need to either download
a [https://fossil-scm.org/home/uv/download.html|precompiled binary] or
<a href="build.wiki">compile it yourself</a> from sources. Install
Fossil by putting the fossil binary someplace on your $PATH. You can
test that Fossil is present and working like this:

<blockquote>
<b>
fossil version<br>
<tt>This is fossil version 2.13 [309af345ab] 2020-09-28 04:02:55 UTC</tt><br>
</b>
</blockquote>

<h2 id="workflow" name="fslclone">General Work Flow</h2>

Fossil works with repository files (a database in a single file with the
project's complete history) and with checked-out local trees (the
working directory you use to do your work). (See
[./glossary.md | the glossary] for more background.)
︙
operations.

<h2 id="new">Starting A New Project</h2>

To start a new project with fossil create a new empty repository this
way: ([/help/init | more info])

<blockquote>
<b>fossil init </b><i> repository-filename</i>
</blockquote>

You can name the database anything you like, and you can place it
anywhere in the filesystem. The <tt>.fossil</tt> extension is
traditional but only required if you are going to use the
<tt>[/help/server | fossil server DIRECTORY]</tt> feature.

<h2 id="clone">Cloning An Existing Repository</h2>

Most fossil operations interact with a repository that is on the local
disk drive, not on a remote system. Hence, before accessing a remote
repository it is necessary to make a local copy of that repository.
Making a local copy of a remote repository is called "cloning".

Clone a remote repository as follows: ([/help/clone | more info])

<blockquote>
<b>fossil clone</b> <i>URL repository-filename</i>
</blockquote>

The <i>URL</i> specifies the fossil repository you want to clone. The
<i>repository-filename</i> is the new local filename into which the
cloned repository will be written.
For example, to clone the source code of Fossil itself:

<blockquote>
<b>fossil clone https://fossil-scm.org/ myclone.fossil</b>
</blockquote>

If your logged-in username is 'exampleuser', you should see output
something like this:

<blockquote>
<b><tt>
Round-trips: 8   Artifacts sent: 0  received: 39421<br>
Clone done, sent: 2424  received: 42965725  ip: 10.10.10.0<br>
Rebuilding repository meta-data...<br>
100% complete...<br>
Extra delta compression... <br>
Vacuuming the database... <br>
project-id: 94259BB9F186226D80E49D1FA2DB29F935CCA0333<br>
server-id:  016595e9043054038a9ea9bc526d7f33f7ac0e42<br>
admin-user: exampleuser (password is "yoWgDR42iv")<br>
</tt></b>
</blockquote>

If the remote repository requires a login, include a userid in the URL
like this:

<blockquote>
<b>fossil clone https://</b><i>remoteuserid</i><b>@www.example.org/ myclone.fossil</b>
</blockquote>

You will be prompted separately for the password. Use
[https://en.wikipedia.org/wiki/Percent-encoding#Percent-encoding_reserved_characters|"%HH"]
escapes for special characters in the userid. For example, "/" would be
replaced by "%2F", meaning that a userid of "Projects/Budget" would
become "Projects%2FBudget".

If you are behind a restrictive firewall, you might need to
<a href="#proxy">specify an HTTP proxy</a>.
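The "%HH" escaping mentioned above is ordinary URL percent-encoding. As a quick illustration, Python's standard library can compute it (this is just an aside for clarity; Fossil itself does not involve Python):

```python
from urllib.parse import quote

# Percent-encode reserved characters in a userid before placing it
# in a clone URL, as described above.
userid = "Projects/Budget"
print(quote(userid, safe=""))  # → Projects%2FBudget
```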
︙
<h2 id="checkout">Checking Out A Local Tree</h2>

To work on a project in fossil, you need to check out a local copy of
the source tree. Create the directory you want to be the root of your
tree and cd into that directory. Then do this:
([/help/open | more info])

<blockquote>
<b>fossil open </b><i> repository-filename</i>
</blockquote>

for example:

<blockquote>
<b><tt>
fossil open ../myclone.fossil<br>
BUILD.txt<br>
COPYRIGHT-BSD2.txt<br>
README.md<br>
︙<br>
</tt></b>
</blockquote>

(or "fossil open ..\myclone.fossil" on Windows).

This leaves you with the newest version of the tree checked out.
From anywhere underneath the root of your local tree, you can type
commands like the following to find out the status of your local tree:

<blockquote>
<b>[/help/info | fossil info]</b><br>
<b>[/help/status | fossil status]</b><br>
<b>[/help/changes | fossil changes]</b><br>
<b>[/help/diff | fossil diff]</b><br>
<b>[/help/timeline | fossil timeline]</b><br>
<b>[/help/ls | fossil ls]</b><br>
<b>[/help/branch | fossil branch]</b><br>
</blockquote>

If you created a new repository using "fossil init" some commands will
not produce much output.

Note that Fossil allows you to make multiple check-outs in separate
directories from the same repository. This enables you, for example, to
do builds from multiple branches or versions at the same time without
having to generate extra clones.

To switch a checkout between different versions and branches, use:

<blockquote>
<b>[/help/update | fossil update]</b><br>
<b>[/help/checkout | fossil checkout]</b><br>
</blockquote>

[/help/update | update] honors the "autosync" option and does a "soft"
switch, merging any local changes into the target version, whereas
[/help/checkout | checkout] does not automatically sync and does a
"hard" switch, overwriting local changes if told to do so.

<h2 id="changes">Making and Committing Changes</h2>

To add new files to your project or remove existing ones, use these
commands:

<blockquote>
<b>[/help/add | fossil add]</b> <i>file...</i><br>
<b>[/help/rm | fossil rm]</b> <i>file...</i><br>
<b>[/help/addremove | fossil addremove]</b> <i>file...</i><br>
</blockquote>

The command:

<blockquote>
<b> [/help/changes | fossil changes]</b>
</blockquote>

lists files that have changed since the last commit to the repository.
For example, if you edit the file "README.md": <blockquote> <b> fossil changes<br> EDITED README.md </b> </blockquote> To see exactly what change was made you can use the command <b>[/help/diff | fossil diff]</b>: <blockquote> <b> fossil diff <br><tt> Index: README.md<br> ============================================================<br> --- README.md<br> +++ README.md<br> @@ -1,5 +1,6 @@<br> +Made some changes to the project<br> # Original text<br> </tt></b> </blockquote> "fossil diff" shows the difference between your tree on disk now and as the tree was when you last committed changes. If you haven't committed yet, then it shows the difference relative to the tip-of-trunk commit in the repository, being what you get when you "fossil open" a repository without specifying a version, populating the working directory. To see the most recent changes made to the repository by other users, use "fossil timeline" to find out the most recent commit, and then "fossil diff" between that commit and the current tree: <blockquote> <b> fossil timeline <br><tt> === 2021-03-28 === <br> 03:18:54 [ad75dfa4a0] *CURRENT* Added details to frobnicate command (user: user-one tags: trunk) <br> === 2021-03-27 === <br> 23:58:05 [ab975c6632] Update README.md. (user: user-two tags: trunk) <br> ⋮ <br> </tt><br> fossil diff --from current --to ab975c6632 <br><tt> Index: frobnicate.c<br> ============================================================<br> --- frobnicate.c<br> +++ frobnicate.c<br> @@ -1,10 +1,11 @@<br> +/* made a change to the source file */<br> # Original text<br> </tt></b> </blockquote> "current" is an alias for the checkout version, so the command "fossil diff --from ad75dfa4a0 --to ab975c6632" gives identical results. To commit your changes to a local-only repository: <blockquote> <b> fossil commit </b><i>(... Fossil will start your editor, if defined)</i><b><br><tt> # Enter a commit message for this check-in. 
Lines beginning with # are ignored.<br> #<br> # user: exampleuser<br> # tags: trunk<br> #<br> # EDITED README.md<br> Edited file to add description of code changes<br> New_Version: 7b9a416ced4a69a60589dde1aedd1a30fde8eec3528d265dbeed5135530440ab<br> </tt></b> </blockquote> You will be prompted for check-in comments using whatever editor is specified by your VISUAL or EDITOR environment variable. If none is specified Fossil uses line-editing in the terminal. To commit your changes to a repository that was cloned from a remote repository, you give the same command, but the results are different. |
︙
When you create a new repository, either by cloning an existing project
or creating a new project of your own, you usually want to do some local
configuration. This is easily accomplished using the web-server that is
built into fossil. Start the fossil web server like this:
([/help/ui | more info])

<blockquote>
<b>fossil ui </b><i> repository-filename</i>
</blockquote>

You can omit the <i>repository-filename</i> from the command above if
you are inside a checked-out local tree. This starts a web server then
automatically launches your web browser and makes it point to this web
server. If your system has an unusual configuration, fossil might not be
able to figure out how to start your web browser. In that case, first
tell fossil where to find your web browser using a command like this:

<blockquote>
<b>fossil setting web-browser </b><i> path-to-web-browser</i>
</blockquote>

By default, fossil does not require a login for HTTP connections coming
in from the IP loopback address 127.0.0.1. You can, and perhaps should,
change this after you create a few users.

When you are finished configuring, just press Control-C or use the
<b>kill</b> command to shut down the mini-server.
<h2 id="sharing">Sharing Changes</h2> When [./concepts.wiki#workflow|autosync] is turned off, the changes you [/help/commit | commit] are only on your local repository. To share those changes with other repositories, do: <blockquote> <b>[/help/push | fossil push]</b> <i>URL</i> </blockquote> Where <i>URL</i> is the http: URL of the server repository you want to share your changes with. If you omit the <i>URL</i> argument, fossil will use whatever server you most recently synced with. The [/help/push | push] command only sends your changes to others. To receive changes from others, use [/help/pull | pull]. Or go both ways at once using [/help/sync | sync]: <blockquote> <b>[/help/pull | fossil pull]</b> <i>URL</i><br> <b>[/help/sync | fossil sync]</b> <i>URL</i> </blockquote> When you pull in changes from others, they go into your repository, not into your checked-out local tree. To get the changes into your local tree, use [/help/update | update]: <blockquote> <b>[/help/update | fossil update]</b> <i>VERSION</i> </blockquote> The <i>VERSION</i> can be the name of a branch or tag or any abbreviation to the 40-character artifact identifier for a particular check-in, or it can be a date/time stamp. ([./checkin_names.wiki | more info]) If you omit the <i>VERSION</i>, then fossil moves you to the latest version of the branch you are currently on. The default behavior is for [./concepts.wiki#workflow|autosync] to be turned on. That means that a [/help/pull|pull] automatically occurs when you run [/help/update|update] and a [/help/push|push] happens automatically after you [/help/commit|commit]. So in normal practice, the push, pull, and sync commands are rarely used. But it is important to know about them, all the same. 
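The pull → update → edit → commit → push cycle described above can be sketched as a terminal transcript. The server URL and the commit comment below are placeholders, and with autosync left on the explicit pull and push steps happen automatically:

``` shell
# bring the local repository up to date with the remote
$ fossil pull https://example.org/project

# fold the newly pulled check-ins into the working tree
$ fossil update

# ...edit files...

# record the work locally, then share it back with the server
$ fossil commit -m "describe the change"
$ fossil push https://example.org/project
```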
<blockquote> <b>[/help/checkout | fossil checkout]</b> <i>VERSION</i> </blockquote> Is similar to update except that it does not honor the autosync setting, nor does it merge in local changes - it prefers to overwrite them and fails if local changes exist unless the <tt>--force</tt> flag is used. <h2 id="branch" name="merge">Branching And Merging</h2> |
︙ | ︙ | |||
395 396 397 398 399 400 401 | To merge two branches back together, first [/help/update | update] to the branch you want to merge into. Then do a [/help/merge|merge] of the other branch that you want to incorporate the changes from. For example, to merge "featureX" changes into "trunk" do this: | | | | | | | 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 | To merge two branches back together, first [/help/update | update] to the branch you want to merge into. Then do a [/help/merge|merge] of the other branch that you want to incorporate the changes from. For example, to merge "featureX" changes into "trunk" do this: <blockquote> <b>fossil [/help/update|update] trunk</b><br> <b>fossil [/help/merge|merge] featureX</b><br> <i># make sure the merge didn't break anything...</i><br> <b>fossil [/help/commit|commit] </blockquote> The argument to the [/help/merge|merge] command can be any of the version identifier forms that work for [/help/update|update]. ([./checkin_names.wiki|more info].) The merge command has options to cherry-pick individual changes, or to back out individual changes, if you don't want to do a full merge. |
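The branch-and-merge workflow above, including the cherry-pick and back-out options just mentioned, can be sketched as a transcript; the branch name featureX and VERSION are placeholders:

``` shell
# create a feature branch as part of a commit
$ fossil commit --branch featureX -m "begin feature X"

# merge the whole branch back into trunk
$ fossil update trunk
$ fossil merge featureX
$ fossil commit -m "merge feature X into trunk"

# or pull in just one check-in instead of the whole branch...
$ fossil merge --cherrypick VERSION

# ...or remove the effects of a single check-in
$ fossil merge --backout VERSION
```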
︙ | ︙ | |||
427 428 429 430 431 432 433 | into trunk previously, you can do so again and Fossil will automatically know to pull in only those changes that have occurred since the previous merge. If a merge or update doesn't work out (perhaps something breaks or there are many merge conflicts) then you back up using: | | | | | | | 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 | into trunk previously, you can do so again and Fossil will automatically know to pull in only those changes that have occurred since the previous merge. If a merge or update doesn't work out (perhaps something breaks or there are many merge conflicts) then you back up using: <blockquote> <b>[/help/undo | fossil undo]</b> </blockquote> This will back out the changes that the merge or update made to the working checkout. There is also a [/help/redo|redo] command if you undo by mistake. Undo and redo only work for changes that have not yet been checked in using commit and there is only a single level of undo/redo. <h2 id="server">Setting Up A Server</h2> Fossil can act as a stand-alone web server using one of these commands: <blockquote> <b>[/help/server | fossil server]</b> <i>repository-filename</i><br> <b>[/help/ui | fossil ui]</b> <i>repository-filename</i> </blockquote> The <i>repository-filename</i> can be omitted when these commands are run from within an open check-out, which is a particularly useful shortcut with the <b>fossil ui</b> command. The <b>ui</b> command is intended for accessing the web user interface from a local desktop. (We sometimes call this mode "Fossil UI.") |
︙ | ︙ | |||
494 495 496 497 498 499 500 | If you are behind a restrictive firewall that requires you to use an HTTP proxy to reach the internet, then you can configure the proxy in three different ways. You can tell fossil about your proxy using a command-line option on commands that use the network, <b>sync</b>, <b>clone</b>, <b>push</b>, and <b>pull</b>. | | | | | | | | | | | | | 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 | If you are behind a restrictive firewall that requires you to use an HTTP proxy to reach the internet, then you can configure the proxy in three different ways. You can tell fossil about your proxy using a command-line option on commands that use the network, <b>sync</b>, <b>clone</b>, <b>push</b>, and <b>pull</b>. <blockquote> <b>fossil clone </b><i>URL</i> <b>--proxy</b> <i>Proxy-URL</i> </blockquote> It is annoying to have to type in the proxy URL every time you sync your project, though, so you can make the proxy configuration persistent using the [/help/setting | setting] command: <blockquote> <b>fossil setting proxy </b><i>Proxy-URL</i> </blockquote> Or, you can set the "<b>http_proxy</b>" environment variable: <blockquote> <b>export http_proxy=</b><i>Proxy-URL</i> </blockquote> To stop using the proxy, do: <blockquote> <b>fossil setting proxy off</b> </blockquote> Or unset the environment variable. The fossil setting for the HTTP proxy takes precedence over the environment variable and the command-line option overrides both. If you have a persistent proxy setting that you want to override for a one-time sync, that is easily done on the command-line. 
For example, to sync with a co-worker's repository on your LAN, you might type: <blockquote> <b>fossil sync http://192.168.1.36:8080/ --proxy off</b> </blockquote> <h2 id="links">Other Resources</h2> <ul> <li> <a href="./gitusers.md">Hints For Users With Prior Git Experience</a> <li> <a href="./whyusefossil.wiki">Why You Should Use Fossil</a> <li> <a href="./history.md">The History and Purpose of Fossil</a> <li> <a href="./branching.wiki">Branching, Forking, and Tagging</a> <li> <a href="./hints.wiki">Fossil Tips and Usage Hints</a> <li> <a href="./permutedindex.html">Comprehensive Fossil Doc Index</a> </ul> |
Changes to www/quotes.wiki.
1 2 3 4 5 6 | <title>What People Are Saying</title> The following are collected quotes from various forums and blogs about Fossil, Git, and DVCSes in general. This collection is put together by the creator of Fossil, so of course there is selection bias... | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 | <title>What People Are Saying</title> The following are collected quotes from various forums and blogs about Fossil, Git, and DVCSes in general. This collection is put together by the creator of Fossil, so of course there is selection bias... <h2>On The Usability Of Git:</h2> <ol> <li>Git approaches the usability of iptables, which is to say, utterly unusable unless you have the manpage tattooed on you arm. <blockquote> <i>by mml at [http://news.ycombinator.com/item?id=1433387]</i> </blockquote> <li><nowiki>It's simplest to think of the state of your [git] repository as a point in a high-dimensional "code-space", in which branches are represented as n-dimensional membranes, mapping the spatial loci of successive commits onto the projected manifold of each cloned repository.</nowiki> <blockquote> <i>by Jonathan Hartley at [https://www.tartley.com/posts/a-guide-to-git-using-spatial-analogies]; <br>Quoted here: [https://lwn.net/Articles/420152/].</i> </blockquote> <li>Git is not a Prius. Git is a Model T. Its plumbing and wiring sticks out all over the place. 
You have to be a mechanic to operate it successfully or you'll be stuck on the side of the road when it breaks down. And it <b>will</b> break down. <blockquote> <i>Nick Farina at [http://nfarina.com/post/9868516270/git-is-simpler]</i> </blockquote> <li>Initial revision of "git", The information manager from hell <blockquote> <i>Linus Torvalds - 2005-04-07 22:13:13<br> Commit comment on the very first source-code check-in for git </blockquote> <li>I've been experimenting a lot with git at work. Damn, it's complicated. It has things to trip you up with that sane people just wouldn't ever both with including the ability to allow you to commit stuff in such a way that you can't find it again afterwards (!!!) Demented workflow complexity on acid? <p>* dkf really wishes he could use fossil instead</p> <blockquote> <i>by Donal K. Fellow (dkf) on the Tcl/Tk chatroom, 2013-04-09.</i> </blockquote> <li>[G]it is <i>designed</i> to forget things. <blockquote> <i>[http://www.cs.cmu.edu/~davide/howto/git_lose.html] </blockquote> <li>[I]n nearly 31 years of using a computer i have, in total, lost more data to git (while following the instructions!!!) than any other single piece of software. <blockquote> <i>Stephan Beal on the [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg17181.html|Fossil mailing list] 2014-09-01.</i> </blockquote> <li>If programmers _really_ wanted to help scientists, they'd build a version control system that was more usable than Git. <blockquote> <i>Tweet by Greg Wilson @gvwilson on 2015-02-22 17:47</i> </blockquote> <li><img src='xkcd-git.gif' align='top'> <blockquote><i>Randall Munroe. [http://xkcd.com/1597/]</i></blockquote> </ol> <h2>On The Usability Of Fossil:</h2> <ol> <li value=11> Fossil mesmerizes me with simplicity especially after I struggled to get a bug-tracking system to work with mercurial. 
<blockquote> <i>rawjeev at [https://stackoverflow.com/a/2100469/142454]</i> </blockquote> <li>Fossil is the best thing to happen to my development workflow this year, as I am pretty sure that using Git has resulted in the premature death of too many of my brain cells. I'm glad to be able to replace Git in every place that I possibly can with Fossil. <blockquote> <i>Joe Prostko at [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg16716.html] </blockquote> <li>This is my favourite VCS. I can carry it on a USB. And it's a complete system, with it's own server, ticketing system, Wiki pages, and a very, very helpful timeline visualization. And the entire program in a single file! <blockquote> <i>thunderbong commenting on hacker news: [https://news.ycombinator.com/item?id=9131619]</i> </blockquote> </ol> <h2>On Git Versus Fossil</h2> <ol> <li value=14> After prolonged exposure to fossil, i tend to get the jitters when I work with git... <blockquote> <i>sriku - at [https://news.ycombinator.com/item?id=16104427]</i> </blockquote> <li> Just want to say thanks for fossil making my life easier.... Also <nowiki>[for]</nowiki> not having a misanthropic command line interface. <blockquote> <i>Joshua Paine at [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg02736.html]</i> </blockquote> <li>We use it at a large university to manage code that small teams write. The runs everywhere, ease of installation and portability is something that seems to be a good fit with the environment we have (highly ditrobuted, sometimes very restrictive firewalls, OSX/Win/Linux). We are happy with it and teaching a Msc/Phd student (read complete novice) fossil has just been a smoother ride than Git was. 
<blockquote> <i>viablepanic at [https://www.reddit.com/r/programming/comments/bxcto/why_not_fossil_scm/c0p30b4?utm_source=share&utm_medium=web2x&context=3]</i> </blockquote> <li>In the fossil community - and hence in fossil itself - development history is pretty much sacrosanct. The very name "fossil" was to chosen to reflect the unchanging nature of things in that history. <br><br> In git (or rather, the git community), the development history is part of the published aspect of the project, so it provides tools for rearranging that history so you can present what you "should" have done rather than what you actually did. <blockquote> <i>Mike Meyer on the Fossil mailing list, 2011-10-04</i> </blockquote> <li>github is such a pale shadow of what fossil does. <blockquote> <i>dkf on the Tcl chatroom, 2013-12-06</i> </blockquote> <li>[With fossil] I actually enjoy keeping track of source files again. <blockquote> <a href="https://wholesomedonut.prose.sh/using-fossil-not-git">https://wholesomedonut.prose.sh/using-fossil-not-git</a> </blockquote> </ol> |
Changes to www/rebaseharm.md.
︙ | ︙ | |||
28 29 30 31 32 33 34 | A rebase is really nothing more than a merge (or a series of merges) that deliberately forgets one of the parents of each merge step. To help illustrate this fact, consider the first rebase example from the [Git documentation][gitrebase]. The merge looks like this: | | | | 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 | A rebase is really nothing more than a merge (or a series of merges) that deliberately forgets one of the parents of each merge step. To help illustrate this fact, consider the first rebase example from the [Git documentation][gitrebase]. The merge looks like this: ~~~ pikchr toggle scale = 0.8 circle "C0" fit arrow right 50% circle same "C1" arrow same circle same "C2" arrow same circle same "C3" arrow same circle same "C5" circle same "C4" at 1cm above C3 arrow from C2 to C4 chop arrow from C4 to C5 chop ~~~ And the rebase looks like this: ~~~ pikchr toggle scale = 0.8 circle "C0" fit arrow right 50% circle same "C1" arrow same circle same "C2" arrow same |
︙ | ︙ | |||
93 94 95 96 97 98 99 | ### <a id="clean-diffs"></a>2.2 Rebase does not actually provide better feature-branch diffs Another argument, often cited, is that rebasing a feature branch allows one to see just the changes in the feature branch without the concurrent changes in the main line of development. Consider a hypothetical case: | | | 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 | ### <a id="clean-diffs"></a>2.2 Rebase does not actually provide better feature-branch diffs Another argument, often cited, is that rebasing a feature branch allows one to see just the changes in the feature branch without the concurrent changes in the main line of development. Consider a hypothetical case: ~~~ pikchr toggle scale = 0.8 circle "C0" fit fill white arrow right 50% circle same "C1" arrow same circle same "C2" arrow same |
︙ | ︙ | |||
121 122 123 124 125 126 127 | In the above, a feature branch consisting of check-ins C3 and C5 is run concurrently with the main line in check-ins C4 and C6. Advocates for rebase say that you should rebase the feature branch to the tip of main in order to remove main-line development differences from the feature branch's history: | | | 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 | In the above, a feature branch consisting of check-ins C3 and C5 is run concurrently with the main line in check-ins C4 and C6. Advocates for rebase say that you should rebase the feature branch to the tip of main in order to remove main-line development differences from the feature branch's history: ~~~ pikchr toggle # Duplicated below in section 5.0 scale = 0.8 circle "C0" fit fill white arrow right 50% circle same "C1" arrow same circle same "C2" |
︙ | ︙ | |||
156 157 158 159 160 161 162 | You could choose to collapse C3\' and C5\' into a single check-in as part of this rebase, but that's a side issue we'll deal with [separately](#collapsing). Because Fossil purposefully lacks rebase, the closest you can get to this same check-in history is the following merge: | | | 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 | You could choose to collapse C3\' and C5\' into a single check-in as part of this rebase, but that's a side issue we'll deal with [separately](#collapsing). Because Fossil purposefully lacks rebase, the closest you can get to this same check-in history is the following merge: ~~~ pikchr toggle scale = 0.8 circle "C0" fit fill white arrow right 50% circle same "C1" arrow same circle same "C2" arrow same |
︙ | ︙ | |||
196 197 198 199 200 201 202 | branch and from the mainline, whereas in the rebase case diff(C6,C5\') shows only the feature branch changes. But that argument is comparing apples to oranges, since the two diffs do not have the same baseline. The correct way to see only the feature branch changes in the merge case is not diff(C2,C7) but rather diff(C6,C7). | | | | < | | | < | 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 | branch and from the mainline, whereas in the rebase case diff(C6,C5\') shows only the feature branch changes. But that argument is comparing apples to oranges, since the two diffs do not have the same baseline. The correct way to see only the feature branch changes in the merge case is not diff(C2,C7) but rather diff(C6,C7). <table border="1" cellpadding="5" cellspacing="0" style="margin-left:auto; margin-right:auto"> <tr><th>Rebase<th>Merge<th>What You See <tr><td>diff(C2,C5\')<td>diff(C2,C7)<td>Commingled branch and mainline changes <tr><td>diff(C6,C5\')<td>diff(C6,C7)<td>Branch changes only </table> Remember: C7 and C5\' are bit-for-bit identical, so the output of the diff is not determined by whether you select C7 or C5\' as the target of the diff, but rather by your choice of the diff source, C2 or C6. So, to help with the problem of viewing changes associated with a feature branch, perhaps what is needed is not rebase but rather better tools to |
︙ | ︙ | |||
256 257 258 259 260 261 262 | branch to the parent repo? Will the many eyeballs even see those errors when they’re intermingled with code implementing some compelling new feature? ## <a id="timestamps"></a>4.0 Rebase causes timestamp confusion Consider the earlier example of rebasing a feature branch: | | | 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 | branch to the parent repo? Will the many eyeballs even see those errors when they’re intermingled with code implementing some compelling new feature? ## <a id="timestamps"></a>4.0 Rebase causes timestamp confusion Consider the earlier example of rebasing a feature branch: ~~~ pikchr toggle # Copy of second diagram in section 2.2 above scale = 0.8 circle "C0" fit fill white arrow right 50% circle same "C1" arrow same circle same "C2" |
︙ | ︙ |
Changes to www/reviews.wiki.
1 2 3 4 5 6 7 8 9 10 11 12 | <title>Reviews</title> <b>External links:</b> * [https://www.nixtu.info/2010/03/fossil-dvcs-on-go-first-impressions.html | Fossil DVCS on the Go - First Impressions] <b>See Also:</b> * [./quotes.wiki | Short Quotes on Fossil, Git, And DVCSes] <b>Daniel writes on 2009-01-06:</b> | | | | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 | <title>Reviews</title> <b>External links:</b> * [https://www.nixtu.info/2010/03/fossil-dvcs-on-go-first-impressions.html | Fossil DVCS on the Go - First Impressions] <b>See Also:</b> * [./quotes.wiki | Short Quotes on Fossil, Git, And DVCSes] <b>Daniel writes on 2009-01-06:</b> <blockquote> The reasons I use fossil are that it's the only version control I have found that I can get working through the VERY annoying MS firewalls at work.. (albeit through an ntlm proxy) and I just love single .exe applications! </blockquote> <b>Joshua Paine on 2010-10-22:</b> <blockquote> With one of my several hats on, I'm in a small team using git. Another team member just checked some stuff into trunk that should have been on a branch. Nothing else had happened since, so in fossil I would have just edited that commit and put it on a new branch. In git that can't actually be done without danger once other people have pulled, so I had to create a new commit rolling back the changes, then branch and cherry pick the earlier changes, then figure out how to make my new branch shared instead of private. Just want to say thanks for fossil making my life easier on most of my projects, and being able to move commits to another branch after the fact and shared-by-default branches are good features. Also not having a misanthropic command line interface. </blockquote> <b>Stephan Beal writes on 2009-01-11:</b> <blockquote> Sometime in late 2007 I came across a link to fossil on <a href="http://www.sqlite.org/">sqlite.org</a>. 
It was a good thing I bookmarked it, because I was never able to find the link again (it might have been in a bug report or something). The reasons I first took a close look at it were (A) it stemmed from the sqlite project, which I've held in high regards for years (e.g. I wrote JavaScript bindings for it: |
︙ | ︙ | |||
133 134 135 136 137 138 139 | I remember my first reaction to fossil being, "this will be an excellent solution for small projects (like the dozens we've all got sitting on our hard drives but which don't justify the hassle of version control)." A year of daily use in over 15 source trees has confirmed that, and I continue to heartily recommend fossil to other developers I know who also have their own collection of "unhosted" pet projects. | | | 133 134 135 136 137 138 139 140 | I remember my first reaction to fossil being, "this will be an excellent solution for small projects (like the dozens we've all got sitting on our hard drives but which don't justify the hassle of version control)." A year of daily use in over 15 source trees has confirmed that, and I continue to heartily recommend fossil to other developers I know who also have their own collection of "unhosted" pet projects. </blockquote> |
Changes to www/scgi.wiki.
1 2 3 4 5 6 | <title>Fossil SCGI</title> To run Fossil using SCGI, start the [/help/server|fossil server] command with the --scgi command-line option. You will probably also want to specify an alternative TCP/IP port using --port. For example: | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 | <title>Fossil SCGI</title> To run Fossil using SCGI, start the [/help/server|fossil server] command with the --scgi command-line option. You will probably also want to specify an alternative TCP/IP port using --port. For example: <blockquote><pre> fossil server $REPOSITORY --port 9000 --scgi </pre></blockquote> Then configure your SCGI-aware web-server to send SCGI requests to port 9000 on the machine where Fossil is running. A typical configuration for this in Nginx is: <blockquote><pre> location ~ ^/demo_project/ { include scgi_params; scgi_pass localhost:9000; scgi_param SCRIPT_NAME "/demo_project"; scgi_param HTTPS "on"; } </pre></blockquote> Note that Nginx does not normally send either the PATH_INFO or SCRIPT_NAME variables via SCGI, but Fossil needs one or the other. So the configuration above needs to add SCRIPT_NAME. If you do not do this, Fossil returns an error. |
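To keep the SCGI backend started above running across reboots, one option is a process supervisor. The following systemd unit is an illustrative sketch, not part of the Fossil documentation; the unit name, paths, port, and user account are all assumptions to adapt to your installation:

``` ini
# /etc/systemd/system/fossil-scgi.service  (hypothetical path)
[Unit]
Description=Fossil SCGI backend
After=network.target

[Service]
User=fossil
# --localhost keeps the SCGI port off external interfaces;
# the web server reaches it via scgi_pass localhost:9000
ExecStart=/usr/local/bin/fossil server /home/fossil/repo.fossil --scgi --localhost --port 9000
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With the file in place, `systemctl enable --now fossil-scgi` starts the service and arranges for it to start at boot.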
Changes to www/selfcheck.wiki.
1 2 3 4 5 6 7 8 | <title>Fossil Repository Integrity Self-Checks</title> Fossil is designed with features to give it a high level of integrity so that users can have confidence that content will never be mangled or lost by Fossil. This note describes the defensive measures that Fossil uses to help prevent information loss due to bugs. | > > | 1 2 3 4 5 6 7 8 9 10 | <title>Fossil Repository Integrity Self-Checks</title> <h1 align="center">Fossil Repository Integrity Self-Checks</h1> Fossil is designed with features to give it a high level of integrity so that users can have confidence that content will never be mangled or lost by Fossil. This note describes the defensive measures that Fossil uses to help prevent information loss due to bugs. |
︙ | ︙ |
Changes to www/selfhost.wiki.
︙ | ︙ | |||
28 29 30 31 32 33 34 | dozen other smaller projects. This demonstrates that Fossil can run on a low-power host processor. Multiple fossil-based projects can easily be hosted on the same machine, even if that machine is itself one of several dozen virtual machines on a single physical box. The CGI script that runs the canonical Fossil self-hosting repository is as follows: | | | | | | 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 | dozen other smaller projects. This demonstrates that Fossil can run on a low-power host processor. Multiple fossil-based projects can easily be hosted on the same machine, even if that machine is itself one of several dozen virtual machines on a single physical box. The CGI script that runs the canonical Fossil self-hosting repository is as follows: <blockquote><pre> #!/usr/bin/fossil repository: /fossil/fossil.fossil </pre></blockquote> Server (3) ran for 10 years as a CGI script on a shared hosting account at <a href="http://www.he.net/">Hurricane Electric</a> in Fremont, CA. This server demonstrated the ability of Fossil to run on an economical shared-host web account with no privileges beyond port 80 HTTP access and CGI. It is not necessary to have a dedicated computer with administrator privileges to run Fossil. As far as we are aware, Fossil is the only full-featured configuration management system that can run in such a restricted environment. The CGI script that ran on the Hurricane Electric server was the same as the CGI script shown above, except that the pathnames are modified to suit the environment: <blockquote><pre> #!/home/hwaci/bin/fossil repository: /home/hwaci/fossil/fossil.fossil </pre></blockquote> In recent years, virtual private servers have become a more flexible and less expensive hosting option compared to shared hosting accounts. 
So on 2017-07-25, server (3) was moved onto a $5/month "droplet" [https://en.wikipedia.org/wiki/Virtual_private_server|VPS] from [https://www.digitalocean.com|Digital Ocean] located in San Francisco. Server (3) is synchronized with the canonical server (1) by running a command similar to the following via cron: <blockquote><pre> /usr/local/bin/fossil all sync -u </pre></blockquote> Server (2) is a <a href="http://www.linode.com/">Linode 4096</a> located in Newark, NJ and set up just like the canonical server (1) with the addition of a cron job for synchronization. The same cron job also runs the [/help?cmd=git|fossil git export] command after each sync in order to [./mirrortogithub.md#ex1|mirror all changes to GitHub]. |
Changes to www/server/any/cgi.md.
1 2 3 4 5 6 7 8 9 10 11 12 | # Serving via CGI A Fossil server can be run from most ordinary web servers as a CGI program. This feature allows Fossil to seamlessly integrate into a larger website. The [self-hosting Fossil repository web site](../../selfhost.wiki) is implemented using CGI. See the [How CGI Works](../../aboutcgi.wiki) page for background information on the CGI protocol. To run Fossil as CGI, create a CGI script (here called "repo") in the CGI directory of your web server with content like this: | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 | # Serving via CGI A Fossil server can be run from most ordinary web servers as a CGI program. This feature allows Fossil to seamlessly integrate into a larger website. The [self-hosting Fossil repository web site](../../selfhost.wiki) is implemented using CGI. See the [How CGI Works](../../aboutcgi.wiki) page for background information on the CGI protocol. To run Fossil as CGI, create a CGI script (here called "repo") in the CGI directory of your web server with content like this: #!/usr/bin/fossil repository: /home/fossil/repo.fossil Adjust the paths appropriately. It may be necessary to set certain permissions on this file or to modify an `.htaccess` file or make other server-specific changes. Consult the documentation for your particular web server. The following permissions are *normally* required, but, again, may be different for a particular configuration: |
︙ | ︙ | |||
55 56 57 58 59 60 61 | for scripts like our “`repo`” example. To serve multiple repositories from a directory using CGI, use the "directory:" tag in the CGI script rather than "repository:". You might also want to add a "notfound:" tag to tell where to redirect if the particular repository requested by the URL is not found: | | | | | | < < < < < < < < < < < < < < < < < < < < < < < | 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 | for scripts like our “`repo`” example. To serve multiple repositories from a directory using CGI, use the "directory:" tag in the CGI script rather than "repository:". You might also want to add a "notfound:" tag to tell where to redirect if the particular repository requested by the URL is not found: #!/usr/bin/fossil directory: /home/fossil/repos notfound: http://url-to-go-to-if-repo-not-found/ Once deployed, a URL like: <b>http://mydomain.org/cgi-bin/repo/XYZ</b> will serve up the repository `/home/fossil/repos/XYZ.fossil` if it exists. Additional options available to the CGI script are [documented separately](../../cgi.wiki). #### CGI with Apache behind an Nginx proxy For the case where the Fossil repositories live on a computer, itself behind an Internet-facing machine that employs Nginx to reverse proxy HTTP(S) requests and take care of the TLS part of the connections in a transparent manner for the downstream web servers, the CGI parameter `HTTPS=on` might not be set. However, Fossil in CGI mode needs it in order to generate the correct links. Apache can be instructed to pass this parameter further to the CGI scripts for TLS connections with a stanza like SetEnvIf X-Forwarded-Proto "https" HTTPS=on in its config file section for CGI, provided that proxy_set_header X-Forwarded-Proto $scheme; has been added in the relevant proxying section of the Nginx config file. *[Return to the top-level Fossil server article.](../)*
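The permission requirements mentioned earlier in this page can usually be satisfied with a few commands. The paths and the `www-data` account below are assumptions for a Debian-style Apache layout, not part of the original instructions, so adjust them for your server:

``` shell
# the CGI launcher script must be executable by the web server
$ sudo chmod 755 /usr/lib/cgi-bin/repo

# the repository file *and* its containing directory must be
# writable by the web server's user, because SQLite creates
# journal files next to the repository
$ sudo chown www-data:www-data /home/fossil /home/fossil/repo.fossil
$ sudo chmod 664 /home/fossil/repo.fossil
```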
Changes to www/server/any/http-over-ssh.md.
︙ | ︙ | |||
13 14 15 16 17 18 19 | ## 1. Force remote Fossil access through a wrapper script <a id="sshd"></a> Put something like the following into the `sshd_config` file on the Fossil repository server: ``` ssh-config | | | | | | | | | 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 | ## 1. Force remote Fossil access through a wrapper script <a id="sshd"></a> Put something like the following into the `sshd_config` file on the Fossil repository server: ``` ssh-config Match Group fossil X11Forwarding no AllowTcpForwarding no AllowAgentForwarding no ForceCommand /home/fossil/bin/wrapper ``` This file is usually found in `/etc/ssh`, but some OSes put it elsewhere. The first line presumes that we will put all users who need to use our Fossil repositories into the `fossil` group, as we will do [below](#perms). You could instead say something like: ``` ssh-config Match User alice,bob,carol,dave ``` You have to list the users allowed to use Fossil in this case because your system likely has a system administrator that uses SSH for remote shell access, so you want to *exclude* that user from the list. For the same reason, you don’t want to put the `ForceCommand` directive outside a `Match` block of some sort. You could instead list the exceptions: ``` ssh-config Match User !evi ``` This would permit only [Evi the System Administrator][evi] to bypass this mechanism. [evi]: https://en.wikipedia.org/wiki/Evi_Nemeth |
︙

instance with certain parameters in order to set up the HTTP-based sync
protocol over that SSH tunnel. We need to preserve some of this command
and rewrite other parts to make this work.

Here is a simpler variant of Andy’s original wrapper script:

``` sh
#!/bin/bash
set -- $SSH_ORIGINAL_COMMAND
while [ $# -gt 1 ] ; do shift ; done
export REMOTE_USER="$USER"
ROOT=/home/fossil
exec "$ROOT/bin/fossil" http "$ROOT/museum/$(/bin/basename "$1")"
```

The substantive changes are:

1.  Move the command rewriting bits to the start.

2.  Be explicit about executable paths. You might extend this idea by
︙

is not the case everywhere. If the script fails to run on your system,
try changing this line to point at `bash`, `dash`, `ksh`, or `zsh`.
Also check the absolute paths for local correctness: is `/bin/basename`
installed on your system, for example?

Under this scheme, you clone with a command like:

    $ fossil clone ssh://HOST/repo.fossil

This will clone the remote `/home/fossil/museum/repo.fossil` repository
to your local machine under the same name and open it into a “`repo/`”
subdirectory. Notice that we didn’t have to give the `museum/` part of
the path: it’s implicit per point #3 above.

This presumes your local user name matches the remote user name. Unlike
︙

the wrapper script from where you placed it and execute it, and that
they have read/write access on the directory where the Fossil
repositories are stored. You can achieve all of this on a Linux box
with:

``` shell
sudo adduser fossil
for u in alice bob carol dave ; do
    sudo adduser $u
    sudo gpasswd -a $u fossil
done
sudo -i -u fossil
chmod 710 .
mkdir -m 750 bin
mkdir -m 770 museum
ln -s /usr/local/bin/fossil bin
```

(Note the `gpasswd` argument order: it takes the user first, then the
group, so this adds each user to the `fossil` group.)

You then need to copy the Fossil repositories into `~fossil/museum` and
make them readable and writable by group `fossil`. These repositories
presumably already have Fossil users configured, with the necessary
[user capabilities](../../caps/), the point of this article being to
show you how to make Fossil-over-SSH pay attention to those caps.

You must also permit use of `REMOTE_USER` on each shared repository.
Fossil only pays attention to this environment variable in certain
contexts, of which “`fossil http`” is not one. Run this command against
each repo to allow that:

``` shell
echo "INSERT OR REPLACE INTO config VALUES
  ('remote_user_ok',1,strftime('%s','now'));" |
  fossil sql -R museum/repo.fossil
```

Now you can configure SSH authentication for each user. Since Fossil’s
password-saving feature doesn’t work in this case, I suggest setting up
SSH keys via `~USER/.ssh/authorized_keys` since the SSH authentication
occurs on each sync, which Fossil’s default-enabled autosync setting
makes frequent.
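The argument handling in the wrapper script shown earlier can be exercised in isolation. Below is a self-contained sketch; the `SSH_ORIGINAL_COMMAND` value is a hypothetical example of what a Fossil `ssh://` sync might request, not output captured from a real `sshd`:

``` shell
# Simulate the wrapper's input: sshd places the client's requested
# command line into SSH_ORIGINAL_COMMAND before running ForceCommand.
SSH_ORIGINAL_COMMAND='fossil test-http museum/repo.fossil'

# Word-split the requested command into positional parameters...
set -- $SSH_ORIGINAL_COMMAND

# ...then discard all but the last word: the repository path.
while [ $# -gt 1 ] ; do shift ; done

REPO="$1"
BASE=$(basename "$REPO")
echo "repo argument: $REPO"
echo "file served:   $BASE"
```

Whatever command the client asked for, only its final argument survives, which is why the wrapper can safely ignore the rest and serve only files under its own `museum/` directory.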
︙
Changes to www/server/any/inetd.md.
# Serving via inetd

A Fossil server can be launched on-demand by `inetd` by using the
[`fossil http`](/help/http) command. To do so, add a line like the
following to its configuration file, typically `/etc/inetd.conf`:

    80 stream tcp nowait.1000 root /usr/bin/fossil /usr/bin/fossil http /home/fossil/repo.fossil

In this example, you are telling `inetd` that when an incoming
connection appears on TCP port 80, it should launch the program
`/usr/bin/fossil` with the arguments shown. Obviously you will need to
modify the pathnames for your particular setup. The final argument is
either the name of the fossil repository to be served or a directory
containing multiple repositories.

If you use a non-standard TCP port on systems where the port
specification must be a symbolic name and cannot be numeric, add the
desired name and port to `/etc/services`. For example, if you want your
Fossil server running on TCP port 12345 instead of 80, you will need to
add:

    fossil 12345/tcp # fossil server

and use the symbolic name “`fossil`” instead of the numeric TCP port
number (“12345” in the above example) in `inetd.conf`.

Notice that we configured `inetd` to launch Fossil as root. See the
top-level section on “[The Fossil Chroot Jail](../../chroot.md)” for
the consequences of this and

︙
Changes to www/server/any/none.md.
︙

* “`ui`” launches a local web browser pointed at this URL.

You can omit the _REPOSITORY_ argument if you run one of the above
commands from within a Fossil checkout directory to serve that
repository:

    $ fossil ui          # or...
    $ fossil server

You can abbreviate Fossil sub-commands as long as they are unambiguous.
“`server`” can currently be as short as “`ser`”.

You can serve a directory containing multiple `*.fossil` files like so:

    $ fossil server --port 9000 --repolist /path/to/repo/dir

There is an [example script](/file/tools/fslsrv) in the Fossil
distribution that wraps `fossil server` to produce more complicated
effects. Feel free to take it, study it, and modify it to suit your
local needs.

See the [online documentation](/help/server) for more information on the

︙
Changes to www/server/any/scgi.md.
# Serving via SCGI

There is an alternative to running Fossil as a
[standalone HTTP server](./none.md), which is to run it in SimpleCGI
(a.k.a. SCGI) mode, using the same [`fossil server`](/help/server)
command as for HTTP service. Simply add the `--scgi` command-line
option, and the stand-alone server will speak the SCGI protocol rather
than raw HTTP. This can be used with a web server such as
[nginx](http://nginx.org), which does not support
[Fossil’s CGI mode](./cgi.md).

A basic nginx configuration to support SCGI with Fossil looks like
this:

    location /code/ {
        include scgi_params;
        scgi_param SCRIPT_NAME "/code";
        scgi_pass localhost:9000;
    }

The `scgi_params` file comes with nginx, and it simply translates nginx
internal variables to `scgi_param` directives to create SCGI
environment variables for the proxied program, in this case Fossil.
Our explicit `scgi_param` call to define `SCRIPT_NAME` adds one more
variable to this set, which is necessary for this configuration to work
properly, because our repo isn’t at the root of the URL hierarchy.
Without it, when Fossil generates absolute URLs, they’ll be missing the
`/code` part at the start, which will typically cause
[404 errors][404].

The final directive simply tells nginx to proxy all calls to URLs under
`/code` down to an SCGI program on TCP port 9000. We can temporarily
set Fossil up as a server on that port like so:

    $ fossil server /path/to/repo.fossil --scgi --localhost --port 9000 &

The `--scgi` option switches Fossil into SCGI mode from its default,
which is [stand-alone HTTP server mode](./none.md). All of the other
options discussed in that linked document — such as the ability to
serve a directory full of Fossil repositories rather than just a single
repository — work the same way in SCGI mode.
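For the curious, the wire format behind `scgi_pass` is simple: each request is a “netstring” — `<length>:<headers>,` — followed by the request body, where the headers are NUL-separated name/value pairs. The sketch below is illustrative only (nginx does all of this for you); it merely computes the framing length for a tiny two-header payload:

``` shell
# Shell variables cannot hold NUL bytes, so build the header block in a
# temporary file and measure it with wc. Per the SCGI spec,
# CONTENT_LENGTH must come first and "SCGI 1" must be present.
hdrs=$(mktemp)
printf 'CONTENT_LENGTH\0000\000SCGI\0001\000' > "$hdrs"
LEN=$(wc -c < "$hdrs" | tr -d ' ')
echo "the framed request would begin: ${LEN}:"
rm -f "$hdrs"
```

Seeing the framing spelled out makes it clearer why `scgi_params` exists: every CGI-style variable Fossil receives is just one more name/value pair in that block.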
︙
Changes to www/server/any/xinetd.md.
# Serving via xinetd

Some operating systems have replaced the old Unix `inetd` daemon with
`xinetd`, which has a similar mission but a very different
configuration file format. The typical configuration file is either
`/etc/xinetd.conf` or a subfile of the `/etc/xinetd.d` directory. You
need a configuration something like this for Fossil:

    service http
    {
        port        = 80
        socket_type = stream
        wait        = no
        user        = root
        server      = /usr/bin/fossil
        server_args = http /home/fossil/repos/
    }

This example configures Fossil to serve multiple repositories under the
`/home/fossil/repos/` directory.

Beyond this, see the general commentary in our article on
[the `inetd` method](./inetd.md), as it also applies to service via
`xinetd`.

︙
Changes to www/server/debian/nginx.md.
︙

## <a id="deps"></a>Installing the Dependencies

The first step is to install some non-default packages we’ll need. SSH
into your server, then say:

    $ sudo apt install fossil nginx

You can leave “`fossil`” out of that if you’re building Fossil from
source to get a more up-to-date version than is shipped with the host
OS.

## <a id="scgi"></a>Running Fossil in SCGI Mode
︙

## <a id="config"></a>Configuration

On Debian and Ubuntu systems the primary user-level configuration file
for nginx is `/etc/nginx/sites-enabled/default`. I recommend that this
file contain only a list of include statements, one for each site that
the server hosts:

    include local/example.com;
    include local/foo.net;

Each of those files then defines one domain’s configuration. Here,
`/etc/nginx/local/example.com` contains the configuration for
`*.example.com` and its alias `*.example.net`, and `local/foo.net`
contains the configuration for `*.foo.net`.

The configuration for our `example.com` web site, stored in
︙

As you can see, this is a pure extension of [the basic nginx service
configuration for SCGI][scgii], showing off a few ideas you might want
to try on your own site, such as static asset proxying.

You also need a `local/code` file containing:

    include scgi_params;
    scgi_pass 127.0.0.1:12345;
    scgi_param SCRIPT_NAME "/code";

We separate that out because nginx refuses to inherit certain settings
between nested location blocks, so rather than repeat them, we extract
them to this separate file and include it from both locations where
it’s needed.

You see this above where we set far-future expiration dates on files
served by Fossil via URLs that contain hashes that change when the
content changes. It tells your browser that the content of these URLs
can never change without the URL itself changing, which makes your
Fossil-based site considerably faster.

Similarly, the `local/generic` file referenced above helps us reduce
unnecessary repetition among the multiple sites this configuration
hosts:

    root /var/www/$host;

    listen 80;
    listen [::]:80;

    charset utf-8;

There are some configuration directives that nginx refuses to
substitute variables into, citing performance considerations, so there
is a limit to how much repetition you can squeeze out this way.
Examples include the `access_log` and `error_log` directives, which
follow an obvious pattern from one host to the next. Sadly, you must
tolerate some repetition across `server { }` blocks when setting up
multiple domains
︙

encryption for Fossil](#tls), proxying HTTP instead of SCGI provides no
benefit. However, it is still worth showing the proper method of
proxying Fossil’s HTTP server through nginx, if only to make reading
nginx documentation on other sites easier:

    location /code {
        rewrite ^/code(/.*) $1 break;
        proxy_pass http://127.0.0.1:12345;
    }

The most common thing people get wrong when hand-rolling a
configuration like this is to get the slashes wrong. Fossil is
sensitive to this. For instance, Fossil will not collapse double
slashes down to a single slash, as some other HTTP servers will.

## <a id="large-uv"></a> Allowing Large Unversioned Files

By default, nginx only accepts HTTP messages
[up to a meg](http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size)
in size. Fossil chunks its sync protocol such that this is not normally
a problem, but when sending [unversioned content][uv], it uses a single
message for the entire file. Therefore, if you will be storing files
larger than this limit as unversioned content, you need to raise the
limit. Within the `location` block:

    # Allow large unversioned file uploads, such as PDFs
    client_max_body_size 20M;

[uv]: ../../unvers.wiki

## <a id="fail2ban"></a> Integrating `fail2ban`

One of the nice things that falls out of proxying Fossil behind nginx
is that it makes it easier to configure `fail2ban` to recognize attacks
on Fossil and automatically block them. Fossil logs the sorts of errors
we want to detect, but it does so in places like the repository’s admin
log, a SQL table, which `fail2ban` doesn’t know how to query. By
putting Fossil behind an nginx proxy, we convert these failures to log
file form, which `fail2ban` is designed to handle.

First, install `fail2ban`, if you haven’t already:

    sudo apt install fail2ban

We’d like `fail2ban` to react to Fossil `/login` failures. The stock
configuration of `fail2ban` only detects a few common sorts of SSH
attacks by default, and its included (but disabled) nginx attack
detectors don’t include one that knows how to detect an attack on
Fossil. We have to teach it by putting the following into
`/etc/fail2ban/filter.d/nginx-fossil-login.conf`:

    [Definition]
    failregex = ^<HOST> - .*POST .*/login HTTP/..." 401

That teaches `fail2ban` how to recognize the errors logged by Fossil
[as of 2.14](/info/39d7eb0e22). (Earlier versions of Fossil returned
HTTP status code 200 for this, so you couldn’t distinguish a successful
login from a failure.)

Then in `/etc/fail2ban/jail.local`, add this section:

    [nginx-fossil-login]
    enabled = true
    logpath = /var/log/nginx/*-https-access.log

The last line is the key: it tells `fail2ban` where we’ve put all of
our per-repo access logs in the nginx config above.

There’s a [lot more you can do][dof2b], but that gets us out of scope
of this guide.
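You can check a filter pattern like the one above before deploying it. The sketch below approximates what `fail2ban` does — it substitutes `<HOST>` with an address-matching pattern and scans a log line — using a simplified IPv4-only substitution (the real `fail2ban` `<HOST>` pattern also matches hostnames and IPv6 addresses) and a hypothetical log line, not one captured from a live server:

``` shell
# The filter's failregex, with fail2ban's <HOST> placeholder intact:
failregex='^<HOST> - .*POST .*/login HTTP/..." 401'

# Substitute a crude IPv4 pattern for <HOST>, then test with grep -E:
regex=$(printf '%s' "$failregex" | sed 's/<HOST>/[0-9.]+/')

# A hypothetical nginx access-log line for a failed Fossil login:
line='203.0.113.7 - - [01/Nov/2023:12:00:00 +0000] "POST /code/login HTTP/1.1" 401 573'

if printf '%s\n' "$line" | grep -Eq "$regex"; then
    MATCHED=yes
else
    MATCHED=no
fi
echo "line matched: $MATCHED"
```

A successful login (status 200 or 302) would not match, which is exactly the distinction the Fossil 2.14 change makes possible.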
︙

has gotten smarter or our nginx configurations have gotten simpler, so
we have removed the manual instructions we used to have here.

You may wish to include something like this from each `server { }`
block in your configuration to enable TLS in a common, secure way:

```
# Tell nginx to accept TLS-encrypted HTTPS on the standard TCP port.
listen 443 ssl;
listen [::]:443 ssl;

# Reference the TLS cert files produced by Certbot.
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

# Load the Let's Encrypt Diffie-Hellman parameters generated for
# this server. Without this, the server is vulnerable to Logjam.
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

# Tighten things down further, per Qualys’ and Certbot’s advice.
ssl_session_cache shared:le_nginx_SSL:1m;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_session_timeout 1440m;

# Offer OCSP certificate stapling.
ssl_stapling on;
ssl_stapling_verify on;

# Enable HSTS.
include local/enable-hsts;
```

The [HSTS] step is optional and should be applied only after due
consideration, since it has the potential to lock users out of your
site if you later change your mind on the TLS configuration.

The `local/enable-hsts` file it references is simply:

```
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
```

It’s a separate file because nginx requires that headers like this be
applied separately for each `location { }` block. We’ve therefore
factored this out so you can `include` it everywhere you need it.

The [OCSP] step is optional, but recommended.
︙
Changes to www/server/debian/service.md.
︙

create a listener socket on a high-numbered (≥ 1024) TCP port, suitable
for sharing a Fossil repo to a workgroup on a private LAN.

To do this, write the following in
`~/.local/share/systemd/user/fossil.service`:

```dosini
[Unit]
Description=Fossil user server
After=network-online.target

[Service]
WorkingDirectory=/home/fossil/museum
ExecStart=/home/fossil/bin/fossil server --port 9000 repo.fossil
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target
```

Unlike with `inetd` and `xinetd`, we don’t need to tell `systemd` which
user and group to run this service as, because we’ve installed it under
the account we’re logged into, which `systemd` will use as the
service’s owner.
︙

follows that it doesn’t need to run as a system service. A user service
works perfectly well for this.

Because we’ve set this up as a user service, the commands you give to
manipulate the service vary somewhat from the sort you’re more likely
to find online:

    $ systemctl --user daemon-reload
    $ systemctl --user enable fossil
    $ systemctl --user start fossil
    $ systemctl --user status fossil -l
    $ systemctl --user stop fossil

That is, we don’t need to talk to `systemd` with `sudo` privileges, but
we do need to tell it to look at the user configuration rather than the
system-level configuration.

This scheme isolates the permissions needed by the Fossil server, which
reduces the amount of damage it can do if there is ever a
remotely-triggerable security flaw found in Fossil.

On some `systemd` based OSes, user services only run while that user is
logged in interactively. This is common on systems aiming to provide
desktop environments, where this is the behavior you often want. To
allow background services to continue to run after logout, say:

    $ sudo loginctl enable-linger $USER

You can paste the command just like that into your terminal, since
`$USER` will expand to your login name.

[scgi]: ../any/scgi.md
︙

roughly equivalent to [the ancient `inetd` method](../any/inetd.md).
It’s more complicated, but it has some nice properties.

We first need to define the privileged socket listener by writing
`/etc/systemd/system/fossil.socket`:

```dosini
[Unit]
Description=Fossil socket

[Socket]
Accept=yes
ListenStream=80
NoDelay=true

[Install]
WantedBy=sockets.target
```

Note the change of configuration directory from the `~/.local`
directory to the system level. We need to start this socket listener at
the root level because of the low-numbered TCP port restriction we
brought up above. This configuration says more or less the same thing
as the socket part of an `inetd` entry
[exemplified elsewhere in this documentation](../any/inetd.md).

Next, create the service definition file in that same directory as
`fossil@.service`:

```dosini
[Unit]
Description=Fossil socket server
After=network-online.target

[Service]
WorkingDirectory=/home/fossil/museum
ExecStart=/home/fossil/bin/fossil http repo.fossil
StandardInput=socket

[Install]
WantedBy=multi-user.target
```

Notice that we haven’t told `systemd` which user and group to run
Fossil under. Since this is a system-level service definition, it will
run as root, which then causes Fossil to
[automatically drop into a `chroot(2)` jail](../../chroot.md) rooted at
the `WorkingDirectory` we’ve configured above, shortly after each
`fossil http` call starts.

The `Restart*` directives we had in the user service configuration
above are unnecessary for this method, since Fossil isn’t supposed to
remain running under it. Each HTTP hit starts one Fossil instance,
which handles that single client’s request and then immediately shuts
down.

Next, you need to tell `systemd` to reload its system-level
configuration files and enable the listening socket:

    $ sudo systemctl daemon-reload
    $ sudo systemctl enable fossil.socket

And now you can manipulate the socket listener:

    $ sudo systemctl start fossil.socket
    $ sudo systemctl status -l fossil.socket
    $ sudo systemctl stop fossil.socket

Notice that we’re working with the *socket*, not the *service*. The
fact that we’ve given them the same base name and marked the service as
an instantiated service with the “`@`” notation allows `systemd` to
automatically start an instance of the service each time a hit comes in
on the socket that `systemd` is monitoring on Fossil’s behalf.

To see this service instantiation at work, visit a long-running Fossil
page (e.g. `/tarball`) and then give a command like this:

    $ sudo systemctl --full | grep fossil

This will show information about the `fossil` socket and service
instances, which should show your `/tarball` hit handler, if it’s still
running:

    fossil@20-127.0.0.1:80-127.0.0.1:38304.service

You can feed that service instance description to a `systemctl kill`
command to stop that single instance without restarting the whole
`fossil` service, for example.

In all of this, realize that we’re able to manipulate a single socket
listener or single service instance at a time, rather than reload the
︙
Changes to www/server/index.html.
<div class='fossil-doc' data-title="How To Configure A Fossil Server">

<style type="text/css">
  p {
    margin-left: 4em;
    margin-right: 3em;
  }
  li p {
    margin-left: 0;
  }
  h2 {
    margin-left: 1em;
  }
  h3 {
    margin-left: 3em;
  }
  ol, ul {
    margin-left: 3em;
  }
  a#all {
    font-size: 90%;
    margin-left: 1em;
  }
  div#tutpick.show {
    max-height: 99em;
    transition: max-height 1000ms ease-in;
  }
  div#tutpick {
    max-height: 0;
    overflow: hidden;
  }
  th.fep {
    background-color: #e8e8e8;
    font-family: "Helvetica Neue", "Arial Narrow", "Myriad Pro", "Avenir Next Condensed";
    font-stretch: condensed;
    min-width: 3em;
    padding: 0.4em;
    white-space: nowrap;
  }
  th.host {
    background-color: #e8e8e8;
    font-family: "Helvetica Neue", "Arial Narrow", "Myriad Pro", "Avenir Next Condensed";
    font-stretch: condensed;
    padding: 0.4em;
    text-align: right;
  }
  td.doc {
    text-align: center;
  }
</style>

<h2>No Server Required</h2>
︙

<h2 id="matrix">Activation Tutorials</h2>

<p>We've broken the configuration for each method out into a series of
sub-articles. Some of these are generic, while others depend on
particular operating systems or front-end software:</p>

<div id="tutpick" class="show"></div>

<table style="margin-left: 6em;">
<tr>
  <th class="host">⇩ OS / Method ⇨</th>
  <th class="fep">direct</th>
  <th class="fep">inetd</th>
  <th class="fep">stunnel</th>
  <th class="fep">CGI</th>
  <th class="fep">SCGI</th>
︙

  <td class="doc"><a href="windows/cgi.md">✅</a></td>
  <td class="doc">❌</td>
  <td class="doc">❌</td>
  <td class="doc">❌</td>
  <td class="doc"><a href="windows/iis.md">✅</a></td>
  <td class="doc"><a href="windows/service.md">✅</a></td>
</tr>
</table>

<p>Where there is a check mark in the "<b>Any</b>" row, the method for
that is generic enough that it works across OSes that Fossil is known
to work on. The check marks below that usually just link to this
generic documentation.</p>

<p>The method in the "<b>proxy</b>" column is for the platform's default
︙
Changes to www/server/macos/service.md.
︙

However, we will still give two different configurations, just as in
the `systemd` article: one for a standalone HTTP server, and one using
socket activation.

For more information on `launchd`, the single best resource we’ve found
is [launchd.info](https://launchd.info). The next best is:

    $ man launchd.plist

[la]: http://www.grivet-tools.com/blog/2014/launchdaemons-vs-launchagents/
[ldhome]: https://developer.apple.com/library/archive/documentation/MacOSX/Conceptual/BPSystemStartup/Chapters/CreatingLaunchdJobs.html
[wpa]: https://en.wikipedia.org/wiki/Launchd

## Standalone HTTP Server

To configure `launchd` to start Fossil as a standalone HTTP server,
write the following as `com.example.dev.FossilHTTP.plist`:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.dev.FossilHTTP</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/fossil</string>
        <string>server</string>
        <string>--port</string>
        <string>9000</string>
        <string>repo.fossil</string>
    </array>
    <key>WorkingDirectory</key>
    <string>/Users/you/museum</string>
    <key>KeepAlive</key>
    <true/>
    <key>RunAtLoad</key>
    <true/>
    <key>StandardErrorPath</key>
    <string>/tmp/fossil-error.log</string>
    <key>StandardOutPath</key>
    <string>/tmp/fossil-info.log</string>
    <key>UserName</key>
    <string>you</string>
    <key>GroupName</key>
    <string>staff</string>
    <key>InitGroups</key>
    <true/>
</dict>
</plist>
```

In this example, we’re assuming your development organization uses the
domain name “`dev.example.org`”, that your short macOS login name is
“`you`”, and that you store your Fossils in “`~/museum`”. Adjust these
elements of the plist file to suit your local situation.

You might be wondering about the use of `UserName`: isn’t Fossil
supposed to drop privileges and enter
[a `chroot(2)` jail](../../chroot.md) when it’s started as root like
this? Why do we need to give it a user name? Won’t Fossil use the owner
of the repository file to set that? All I can tell you is that in
testing here, if you leave the user and group configuration at the tail
end of that plist file out, Fossil will remain running as root!

Install that file and set it to start with:

    $ sudo install -o root -g wheel -m 644 com.example.dev.FossilHTTP.plist \
      /Library/LaunchDaemons/
    $ sudo launchctl load -w /Library/LaunchDaemons/com.example.dev.FossilHTTP.plist

Because we set the `RunAtLoad` key, this will also launch the daemon.
Stop the daemon with:

    $ sudo launchctl unload -w /Library/LaunchDaemons/com.example.dev.FossilHTTP.plist

## Socket Listener

Another useful way to serve a Fossil repo via `launchd` is to set up a
socket listener:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.dev.FossilSocket</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/fossil</string>
︙ | ︙ |
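Once a job is loaded, you can confirm that `launchd` is tracking it and see its last exit status; the label below is the one used in the example plists, so adjust it to match yours:

    $ sudo launchctl list | grep com.example.dev

If the job is running, the first column of the output is its process ID.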
Changes to www/server/openbsd/fastcgi.md.
︙ | ︙ | |||
16 17 18 19 20 21 22 | ## <a id="fslinstall"></a>Install Fossil Use the OpenBSD package manager `pkg_add` to install Fossil, making sure to select the statically linked binary. ```console | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 | ## <a id="fslinstall"></a>Install Fossil Use the OpenBSD package manager `pkg_add` to install Fossil, making sure to select the statically linked binary. ```console $ doas pkg_add fossil quirks-3.325 signed on 2020-06-12T06:24:53Z Ambiguous: choose package for fossil 0: <None> 1: fossil-2.10v0 2: fossil-2.10v0-static Your choice: 2 fossil-2.10v0-static: ok ``` This installs Fossil into the chroot. To facilitate local use, create a symbolic link of the fossil executable into `/usr/local/bin`. ```console $ doas ln -s /var/www/bin/fossil /usr/local/bin/fossil ``` As a privileged user, create the file `/var/www/cgi-bin/scm` with the following contents to make the CGI script that `httpd` will execute in response to `fsl.domain.tld` requests; all paths are relative to the `/var/www` chroot. 
```sh
#!/bin/fossil
directory: /htdocs/fsl.domain.tld
notfound: https://domain.tld
repolist
errorlog: /logs/fossil.log
```

The `directory` directive instructs Fossil to serve all repositories found in `/var/www/htdocs/fsl.domain.tld`, while `errorlog` causes logging to be saved to `/var/www/logs/fossil.log`. Create the repository directory and the log file, make the latter owned by the `www` user, and make the script executable.

```console
$ doas mkdir /var/www/htdocs/fsl.domain.tld
$ doas touch /var/www/logs/fossil.log
$ doas chown www /var/www/logs/fossil.log
$ doas chmod 660 /var/www/logs/fossil.log
$ doas chmod 755 /var/www/cgi-bin/scm
```

## <a id="chroot"></a>Setup chroot

Fossil needs both `/dev/random` and `/dev/null`, which aren't accessible from within the chroot, so they must be created there; `/var`, however, is mounted with the `nodev` option. Rather than removing this default setting, create a small memory filesystem and then mount it onto `/var/www/dev` with [`mount_mfs(8)`][mfs] so that the `random` and `null` device files can be created. To avoid needing a startup script to recreate the device files at boot, create a template of the needed `/dev` tree to automatically populate the memory filesystem.
```console
$ doas mkdir /var/www/dev
$ doas install -d -g daemon /template/dev
$ cd /template/dev
$ doas /dev/MAKEDEV urandom
$ doas mknod -m 666 null c 2 2
$ doas mount_mfs -s 1M -P /template/dev /dev/sd0b /var/www/dev
$ ls -l
total 0
crw-rw-rw-  1 root  daemon    2,   2 Jun 20 08:56 null
lrwxr-xr-x  1 root  daemon         7 Jun 18 06:30 random@ -> urandom
crw-r--r--  1 root  wheel    45,   0 Jun 18 06:30 urandom
```

[mfs]: https://man.openbsd.org/mount_mfs.8

To make the mountable memory filesystem permanent, open `/etc/fstab` as a privileged user and add the following line to automate creation of the filesystem at startup:

```console
swap /var/www/dev mfs rw,-s=1048576,-P=/template/dev 0 0
```

The same user that executes the fossil binary must have writable access to the repository directory that resides within the chroot; on OpenBSD this is `www`. In addition, grant repository directory ownership to the user who will push to, pull from, and create repositories.

```console
$ doas chown -R user:www /var/www/htdocs/fsl.domain.tld
$ doas chmod 770 /var/www/htdocs/fsl.domain.tld
```

## <a id="httpdconfig"></a>Configure httpd

On OpenBSD, [httpd.conf(5)][httpd] is the configuration file for `httpd`. To set up the server to serve all Fossil repositories within the directory specified in the CGI script, and automatically redirect standard HTTP requests to HTTPS—apart from [Let's Encrypt][LE] challenges issued in response to [acme-client(1)][acme] certificate requests—create `/etc/httpd.conf` as a privileged user with the following contents.
[LE]: https://letsencrypt.org [acme]: https://man.openbsd.org/acme-client.1 [httpd.conf(5)]: https://man.openbsd.org/httpd.conf.5 ```apache server "fsl.domain.tld" { listen on * port http root "/htdocs/fsl.domain.tld" location "/.well-known/acme-challenge/*" { root "/acme" request strip 2 } location * { block return 301 "https://$HTTP_HOST$REQUEST_URI" } location "/*" { fastcgi { param SCRIPT_FILENAME "/cgi-bin/scm" } } } server "fsl.domain.tld" { listen on * tls port https root "/htdocs/fsl.domain.tld" tls { certificate "/etc/ssl/domain.tld.fullchain.pem" key "/etc/ssl/private/domain.tld.key" } hsts { max-age 15768000 preload subdomains } connection max request body 104857600 location "/*" { fastcgi { param SCRIPT_FILENAME "/cgi-bin/scm" } } location "/.well-known/acme-challenge/*" { root "/acme" request strip 2 } } ``` [The default limit][dlim] for HTTP messages in OpenBSD’s `httpd` server is 1 MiB. Fossil chunks its sync protocol such that this is not normally a problem, but when sending [unversioned content][uv], it uses a single message for the entire file. Therefore, if you will be storing files larger than this limit as unversioned content, you need to raise |
︙ | ︙ | |||
185 186 187 188 189 190 191 | In order for `httpd` to serve HTTPS, secure a free certificate from Let's Encrypt using `acme-client`. Before issuing the request, however, ensure you have a zone record for the subdomain with your registrar or nameserver. Then open `/etc/acme-client.conf` as a privileged user to configure the request. ```dosini | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 | In order for `httpd` to serve HTTPS, secure a free certificate from Let's Encrypt using `acme-client`. Before issuing the request, however, ensure you have a zone record for the subdomain with your registrar or nameserver. Then open `/etc/acme-client.conf` as a privileged user to configure the request. ```dosini authority letsencrypt { api url "https://acme-v02.api.letsencrypt.org/directory" account key "/etc/acme/letsencrypt-privkey.pem" } authority letsencrypt-staging { api url "https://acme-staging.api.letsencrypt.org/directory" account key "/etc/acme/letsencrypt-staging-privkey.pem" } domain domain.tld { alternative names { www.domain.tld fsl.domain.tld } domain key "/etc/ssl/private/domain.tld.key" domain certificate "/etc/ssl/domain.tld.crt" domain full chain certificate "/etc/ssl/domain.tld.fullchain.pem" sign with letsencrypt } ``` Start `httpd` with the new configuration file, and issue the certificate request. 
```console $ doas rcctl start httpd $ doas acme-client -vv domain.tld acme-client: /etc/acme/letsencrypt-privkey.pem: account key exists (not creating) acme-client: /etc/acme/letsencrypt-privkey.pem: loaded RSA account key acme-client: /etc/ssl/private/domain.tld.key: generated RSA domain key acme-client: https://acme-v01.api.letsencrypt.org/directory: directories acme-client: acme-v01.api.letsencrypt.org: DNS: 172.65.32.248 ... N(Q????Z???j?j?>W#????b???? H????eb??T??*? DNosz(???n{L}???D???4[?B] (1174 bytes) acme-client: /etc/ssl/domain.tld.crt: created acme-client: /etc/ssl/domain.tld.fullchain.pem: created ``` A successful result will output the public certificate, full chain of trust, and private key into the `/etc/ssl` directory as specified in `acme-client.conf`. ```console $ doas ls -lR /etc/ssl -r--r--r-- 1 root wheel 2.3K Mar 2 01:31:03 2018 domain.tld.crt -r--r--r-- 1 root wheel 3.9K Mar 2 01:31:03 2018 domain.tld.fullchain.pem /etc/ssl/private: -r-------- 1 root wheel 3.2K Mar 2 01:31:03 2018 domain.tld.key ``` Make sure to reopen `/etc/httpd.conf` to uncomment the second server block responsible for serving HTTPS requests before proceeding. ## <a id="starthttpd"></a>Start `httpd` With `httpd` configured to serve Fossil repositories out of `/var/www/htdocs/fsl.domain.tld`, and the certificates and key in place, enable and start `slowcgi`—OpenBSD's FastCGI wrapper server that will execute the above Fossil CGI script—before checking that the syntax of the `httpd.conf` configuration file is correct, and (re)starting the server (if still running from requesting a Let's Encrypt certificate). 
```console $ doas rcctl enable slowcgi $ doas rcctl start slowcgi slowcgi(ok) $ doas httpd -vnf /etc/httpd.conf configuration OK $ doas rcctl start httpd httpd(ok) ``` ## <a id="clientconfig"></a>Configure Client To facilitate creating new repositories and pushing them to the server, add the following function to your `~/.cshrc` or `~/.zprofile` or the config file for whichever shell you are using on your development box. ```sh finit() { fossil init $1.fossil && \ chmod 664 $1.fossil && \ fossil open $1.fossil && \ fossil user password $USER $PASSWD && \ fossil remote-url https://$USER:$PASSWD@fsl.domain.tld/$1 && \ rsync --perms $1.fossil $USER@fsl.domain.tld:/var/www/htdocs/fsl.domain.tld/ >/dev/null && \ chmod 644 $1.fossil && \ fossil ui } ``` This enables a new repository to be made with `finit repo`, which will create the fossil repository file `repo.fossil` in the current working directory; by default, the repository user is set to the environment variable `$USER`. It then opens the repository and sets the user password to the `$PASSWD` environment variable (which you can either set |
︙ | ︙ |
Changes to www/server/openbsd/service.wiki.
1 2 3 4 5 6 | <title>Serving via rc on OpenBSD</title> OpenBSD provides [https://man.openbsd.org/rc.subr.8|rc.subr(8)], a framework for writing [https://man.openbsd.org/rc.8|rc(8)] scripts. <h2>Creating the daemon</h2> | < | | < | | | | | | | | < | | < < | | < | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 | <title>Serving via rc on OpenBSD</title> OpenBSD provides [https://man.openbsd.org/rc.subr.8|rc.subr(8)], a framework for writing [https://man.openbsd.org/rc.8|rc(8)] scripts. <h2>Creating the daemon</h2> Create the file /etc/rc.d/fossil with contents like the following. <blockquote><pre> #!/bin/ksh daemon="/usr/local/bin/fossil" # fossil executable daemon_user="_fossil" # user to run fossil as daemon_flags="server /home/_fossil/example --repolist --port 8888" # fossil command . /etc/rc.d/rc.subr # pexp="$daemon server .*" # See below. rc_reload=NO # Unsupported by Fossil; 'rcctl reload fossil' kills the process. rc_bg=YES # Run in the background, since fossil serve does not daemonize itself rc_cmd $1 </pre></blockquote> <h3>pexp</h3> You may need to uncomment the "pexp=". rc.subr typically finds the daemon process based by matching the process name and argument list. Without the "pexp=" line, rc.subr would look for this exact command: <blockquote><pre> /usr/local/bin/fossil server /home/_fossil/example --repolist --port 8888 </pre></blockquote> Depending on the arguments and their order, fossil may rewrite the arguments for display in the process listing ([https://man.openbsd.org/ps.1|ps(1)]), so rc.subr may fail to find the process through the default match. The example above does not get rewritten, but the same commands in a different order can be rewritten. 
For example, when I switch the order of the arguments in "daemon_flags", <blockquote><pre> /usr/local/bin/fossil server --repolist --port 8888 /home/_fossil/example </pre></blockquote> the process command is changed to this. <blockquote><pre> /usr/local/bin/fossil server /home/_fossil/example /home/_fossil/example 8888 /home/_fossil/example </pre></blockquote> The commented "pexp=" line instructs rc.subr to choose the process whose command and arguments text starts with this: <blockquote><pre> /usr/local/bin/fossil server </pre></blockquote> <h2>Enabling the daemon</h2> Once you have created /etc/rc.d/fossil, run these commands. <blockquote><pre> rcctl enable fossil # add fossil to pkg_scripts in /etc/rc.conf.local rcctl start fossil # start the daemon now </pre></blockquote> The daemon should now be running and set to start at boot. <h2>Multiple daemons</h2> You may want to serve multiple fossil instances with different options. For example, * If different users own different repositories, you may want different users to serve different repositories. * You may want to serve different repositories on different ports so you can control them differently with, for example, HTTP reverse proxies or [https://man.openbsd.org/pf.4|pf(4)]. To run multiple fossil daemons, create multiple files in /etc/rc.d, and enable each of them. Here are two approaches for creating the files in /etc/rc.d: Symbolic links and copies. <h3>Symbolic links</h3> Suppose you want to run one fossil daemon as user "user1" on port 8881 and another as user "user2" on port 8882. Create the files with [https://man.openbsd.org/ln.1|ln(1)], and configure them to run different fossil commands. 
<blockquote><pre> cd /etc/rc.d ln -s fossil fossil1 ln -s fossil fossil2 rcctl enable fossil1 fossil2 rcctl set fossil1 user user1 rcctl set fossil2 user user2 rcctl set fossil1 flags 'server /home/user1/repo1.fossil --port 8881' rcctl set fossil2 flags 'server /home/user2/repo2.fossil --port 8882' rcctl start fossil1 fossil2 </pre></blockquote> <h3>Copies</h3> You may want to run fossil daemons that are too different to configure just with [https://man.openbsd.org/rcctl.8|rcctl(8)]. In particular, you can't change the "pexp" with rcctl. If you want to run fossil commands that are more different, you may prefer to create separate files in /etc/rc.d. Replace "ln -s" above with "cp" to accomplish this. <blockquote><pre> cp /etc/rc.d/fossil /etc/rc.d/fossil-user1 cp /etc/rc.d/fossil /etc/rc.d/fossil-user2 </pre></blockquote> You can still use commands like "rcctl set fossil-user1 flags", but you can also edit the "/etc/rc.d/fossil-user1" file. |
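If you go the copy route, each file can hard-code its own settings, including a "pexp". The following sketch is illustrative only; the user, port, and repository path are placeholders:

<blockquote><pre>
#!/bin/ksh
daemon="/usr/local/bin/fossil"
daemon_user="user1"
daemon_flags="server /home/user1/repo1.fossil --port 8881"

. /etc/rc.d/rc.subr

pexp="$daemon server .*"  # match on the command prefix; see above
rc_reload=NO              # 'rcctl reload' would kill the process
rc_bg=YES                 # fossil server does not daemonize itself

rc_cmd $1
</pre></blockquote>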
Changes to www/server/windows/iis.md.
︙ | ︙ | |||
30 31 32 33 34 35 36 | ## Background Fossil Service Setup You will need to have the Fossil HTTP server running in the background, serving some local repository, bound to localhost on a fixed high-numbered TCP port. For the purposes of testing, simply start it by hand in your command shell of choice: | | | 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 | ## Background Fossil Service Setup You will need to have the Fossil HTTP server running in the background, serving some local repository, bound to localhost on a fixed high-numbered TCP port. For the purposes of testing, simply start it by hand in your command shell of choice: fossil serve --port 9000 --localhost repo.fossil That command assumes you’ve got `fossil.exe` in your `%PATH%` and you’re in a directory holding `repo.fossil`. See [the platform-independent instructions](../any/none.md) for further details. For a more robust setup, we recommend that you [install Fossil as a Windows service](./service.md), which will allow Fossil to start at |
︙ | ︙ |
Changes to www/server/windows/service.md.
1 2 3 4 5 6 7 8 9 10 11 12 13 | # Fossil as a Windows Service If you need Fossil to start automatically on Windows, it is suggested to install Fossil as a Windows Service. ## Assumptions 1. You have Administrative access to a Windows 2012r2 or above server. 2. You have PowerShell 5.1 or above installed. ## Place Fossil on Server However you obtained your copy of Fossil, it is recommended that you follow | | | | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 | # Fossil as a Windows Service If you need Fossil to start automatically on Windows, it is suggested to install Fossil as a Windows Service. ## Assumptions 1. You have Administrative access to a Windows 2012r2 or above server. 2. You have PowerShell 5.1 or above installed. ## Place Fossil on Server However you obtained your copy of Fossil, it is recommended that you follow Windows conventions and place it within `\Program Files\FossilSCM`. Since Fossil 2.10 is a 64bit binary, this is the proper location for the executable. This way Fossil is at an expected location and you will have minimal issues with Windows interfering in your ability to run Fossil as a service. You will need Administrative rights to place fossil at the recommended location. If you will only be running Fossil as a service, you do not need to add this location to the path, though you may do so if you wish. ## Installing Fossil as a Service |
︙ | ︙ |
Changes to www/server/windows/stunnel.md.
︙ | ︙ | |||
8 9 10 11 12 13 14 15 16 17 18 19 20 21 | extra step of configuring stunnel to provide a proper HTTPS setup. ## Assumptions 1. You have Administrative access to a Windows 2012r2 or above server. 2. You have PowerShell 5.1 or above installed. 3. You have acquired a certificate either from a Public CA or an Internal CA. ## Configure Fossil Service for https Due to the need for the `--https` option for successfully using Fossil with stunnel, we will use [Advanced service installation using PowerShell](./service.md#PowerShell). We will need to change the command to install the Fossil Service to configure it properly for use with stunnel as an https proxy. Run the following: | > > > > > > | 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 | extra step of configuring stunnel to provide a proper HTTPS setup. ## Assumptions 1. You have Administrative access to a Windows 2012r2 or above server. 2. You have PowerShell 5.1 or above installed. 3. You have acquired a certificate either from a Public CA or an Internal CA. These instructions were tested with Fossil 2.10 and stunnel 5.55. Other versions may not function in a similar manner. There is a bug in Fossil 2.9 and earlier that prevents these versions of Fossil from properly constructing https URLs when used with stunnel as a proxy. Please make sure you are using Fossil 2.10 or later on Windows. ## Configure Fossil Service for https Due to the need for the `--https` option for successfully using Fossil with stunnel, we will use [Advanced service installation using PowerShell](./service.md#PowerShell). We will need to change the command to install the Fossil Service to configure it properly for use with stunnel as an https proxy. Run the following: |
︙ | ︙ |
Changes to www/serverext.wiki.
︙ | ︙ | |||
29 30 31 32 33 34 35 | An administrator activates the CGI extension mechanism by specifying an "Extension Root Directory" or "extroot" as part of the [./server/index.html|server setup]. If the Fossil server is itself run as [./server/any/cgi.md|CGI], then add a line to the [./cgi.wiki#extroot|CGI script file] that says: | | | | | | | | 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 | An administrator activates the CGI extension mechanism by specifying an "Extension Root Directory" or "extroot" as part of the [./server/index.html|server setup]. If the Fossil server is itself run as [./server/any/cgi.md|CGI], then add a line to the [./cgi.wiki#extroot|CGI script file] that says: <blockquote><pre> extroot: <i>DIRECTORY</i> </pre></blockquote> Or, if the Fossil server is being run using the "[./server/any/none.md|fossil server]" or "[./server/any/none.md|fossil ui]" or "[./server/any/inetd.md|fossil http]" commands, then add an extra "--extroot <i>DIRECTORY</i>" option to that command. The <i>DIRECTORY</i> is the DOCUMENT_ROOT for the CGI. Files in the DOCUMENT_ROOT are accessed via URLs like this: <blockquote> https://example-project.org/ext/<i>FILENAME</i> </blockquote> In other words, access files in DOCUMENT_ROOT by appending the filename relative to DOCUMENT_ROOT to the [/help?cmd=/ext|/ext] page of the Fossil server. Files that are readable but not executable are returned as static content. Files that are executable are run as CGI. <h3>2.1 Example #1</h3> The source code repository for SQLite is a Fossil server that is run as CGI. The URL for the source code repository is [https://sqlite.org/src]. 
The CGI script looks like this: <blockquote><verbatim> #!/usr/bin/fossil repository: /fossil/sqlite.fossil errorlog: /logs/errors.txt extroot: /sqlite-src-ext </verbatim></blockquote> The "extroot: /sqlite-src-ext" line tells Fossil that it should look for extension CGIs in the /sqlite-src-ext directory. (All of this is happening inside of a chroot jail, so putting the document root in a top-level directory is a reasonable thing to do.) When a URL like "https://sqlite.org/src/ext/checklist" is received by the |
︙ | ︙ | |||
99 100 101 102 103 104 105 | main web server which in turn relays the result back to the original client. <h3>2.2 Example #2</h3> The [https://fossil-scm.org/home|Fossil self-hosting repository] is also a CGI that looks like this: | | | | 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 | main web server which in turn relays the result back to the original client. <h3>2.2 Example #2</h3> The [https://fossil-scm.org/home|Fossil self-hosting repository] is also a CGI that looks like this: <blockquote><verbatim> #!/usr/bin/fossil repository: /fossil/fossil.fossil errorlog: /logs/errors.txt extroot: /fossil-extroot </verbatim></blockquote> The extroot for this Fossil server is /fossil-extroot and in that directory is an executable file named "fileup1" - another [https://wapp.tcl.tk|Wapp] script. (The extension mechanism is not required to use Wapp. You can use any kind of program you like. But the creator of SQLite and Fossil is fond of [https://www.tcl.tk|Tcl/Tk] and so he tends to gravitate toward Tcl-based technologies like Wapp.) The fileup1 script is a demo program that lets |
︙ | ︙ | |||
199 200 201 202 203 204 205 | header and footer, then the inserted header will include a Content Security Policy (CSP) restriction on the use of javascript within the webpage. Any <script>...</script> elements within the CGI output must include a nonce or else they will be suppressed by the web browser. The FOSSIL_NONCE variable contains the value of that nonce. So, in other words, to get javascript to work, it must be enclosed in: | | | | | | | | 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 | header and footer, then the inserted header will include a Content Security Policy (CSP) restriction on the use of javascript within the webpage. Any <script>...</script> elements within the CGI output must include a nonce or else they will be suppressed by the web browser. The FOSSIL_NONCE variable contains the value of that nonce. So, in other words, to get javascript to work, it must be enclosed in: <blockquote><verbatim> <script nonce='$FOSSIL_NONCE'>...</script> </verbatim></blockquote> Except, of course, the $FOSSIL_NONCE is replaced by the value of the FOSSIL_NONCE environment variable. <h3>3.1 Input Content</h3> If the HTTP request includes content (for example if this is a POST request) then the CONTENT_LENGTH value will be positive and the data for the content will be readable on standard input. <h2>4.0 CGI Outputs</h2> CGI programs construct a reply by writing to standard output. The first few lines of output are parameters intended for the web server that invoked the CGI. These are followed by a blank line and then the content. Typical parameter output looks like this: <blockquote><verbatim> Status: 200 OK Content-Type: text/html </verbatim></blockquote> CGI programs can return any content type they want - they are not restricted to text replies. 
It is OK for a CGI program to return (for example) image/png.
The fields of the CGI response header can be any valid HTTP header
fields. Those that Fossil does not understand are simply relayed
back up the line to the requester.

Fossil takes special action with some content types. If the
Content-Type is "text/x-fossil-wiki" or "text/x-markdown" then Fossil
converts the content from [/wiki_rules|Fossil-Wiki] or
[/md_rules|Markdown] into HTML, adding its own header and footer text
according to the repository skin. Content of type "text/html" is
normally passed straight through unchanged. However, if the text/html
content is of the form:

<blockquote><verbatim>
<div class='fossil-doc' data-title='DOCUMENT TITLE'>
... HTML content there ...
</div>
</verbatim></blockquote>

In other words, if the outer-most markup of the HTML is a <div>
element with a single class of "fossil-doc", then Fossil will add
its own header and footer to the HTML. The page title contained in
the added header will be extracted from the "data-title" attribute. |
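As a concrete illustration of the rules above, here is a small hypothetical shell script that could be installed as an executable file under the extroot; the name and content are invented for this sketch and are not one of the real SQLite or Fossil extensions:

<blockquote><verbatim>
#!/bin/sh
# Hypothetical /ext CGI: emit a CGI response header, then a fossil-doc div
printf 'Status: 200 OK\r\n'
printf 'Content-Type: text/html\r\n'
printf '\r\n'
cat <<'EOF'
<div class='fossil-doc' data-title='Hello Extension'>
<p>Hello from a CGI extension.</p>
</div>
EOF
</verbatim></blockquote>

Because the body is a "fossil-doc" <div>, Fossil wraps the reply in the repository skin and uses "Hello Extension" as the page title.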
︙ | ︙ |
Changes to www/ssl-server.md.
︙ | ︙ | |||
28 29 30 31 32 33 34 | obtaining a CA-signed certificate. ## Usage To put any of the Fossil server commands into SSL/TLS mode, simply add the "--cert" command-line option. | > | > | 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 | obtaining a CA-signed certificate. ## Usage To put any of the Fossil server commands into SSL/TLS mode, simply add the "--cert" command-line option. > ~~~ fossil ui --cert unsafe-builtin ~~~ The --cert option is what tells Fossil to use TLS encryption. Normally, the argument to --cert is the name of a file containing the certificate (the "fullchain.pem" file) for the website. In this example, the magic name "unsafe-builtin" is used, which causes Fossil to use a self-signed cert rather than a real cert obtained from a [Certificate Authority](https://en.wikipedia.org/wiki/Certificate_authority) |
︙ | ︙ | |||
84 85 86 87 88 89 90 | key and cert. Fossil wants to read certs and public keys in the [PEM format](https://en.wikipedia.org/wiki/Privacy-Enhanced_Mail). PEM is a pure ASCII text format. The private key consists of text like this: | > | | | > | | | > | > > | > | 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 | key and cert. Fossil wants to read certs and public keys in the [PEM format](https://en.wikipedia.org/wiki/Privacy-Enhanced_Mail). PEM is a pure ASCII text format. The private key consists of text like this: > `-----BEGIN PRIVATE KEY-----` *base-64 encoding of the private key* `-----END PRIVATE KEY-----` Similarly, a PEM-encoded cert will look like this: > `-----BEGIN CERTIFICATE-----` *base-64 encoding of the certificate* `-----END CERTIFICATE-----` In both formats, text outside of the delimiters is ignored. That means that if you have a PEM-formatted private key and a separate PEM-formatted certificate, you can concatenate the two into a single file and the individual components will still be easily accessible. If you have a single file that holds both your private key and your cert, you can hand it off to the "[fossil server](/help?cmd=server)" command using the --cert option. Like this: > ~~~ fossil server --port 443 --cert mycert.pem /home/www/myproject.fossil ~~~ The command above is sufficient to run a fully-encrypted web site for the "myproject.fossil" Fossil repository. This command must be run as root, since it wants to listen on TCP port 443, and only root processes are allowed to do that. This is safe, however, since before reading any information off of the wire, Fossil will put itself inside a chroot jail at /home/www and drop all root privileges. 
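The concatenation can be done with plain `cat`. This is a sketch with stand-in file contents; substitute your real key and cert files:

```shell
# Stand-in PEM files, for illustration only; use your real files here.
printf -- '-----BEGIN PRIVATE KEY-----\nMIIE...\n-----END PRIVATE KEY-----\n' > privkey.pem
printf -- '-----BEGIN CERTIFICATE-----\nMIID...\n-----END CERTIFICATE-----\n' > fullchain.pem

# Combine both PEM sections into the single file that --cert expects.
cat privkey.pem fullchain.pem > mycert.pem

# The combined file contains a private key, so restrict its permissions.
chmod 600 mycert.pem

# Both sections are present: this prints 2.
grep -c -- '-----BEGIN' mycert.pem
```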
### Keeping The Cert And Private Key In Separate Files If you do not want to combine your cert and private key into a single big PEM file, you can keep them separate using the --pkey option to Fossil. > ~~~ fossil server --port 443 --cert fullchain.pem --pkey privkey.pem /home/www/myproject.fossil ~~~ ## The ACME Protocol The [ACME Protocol][2] is used to prove to a CA that you control a website. CAs require proof that you control a domain before they will issue a cert for that domain. The usual means of dealing with ACME is to run the separate [certbot](https://certbot.eff.org) tool. |
︙ | ︙ | |||
163 164 165 166 167 168 169 | the repository file. If the "server" or "http" command are run against a directory full of Fossil repositories, then the ".well-known" sub-directory should be in that top-level directory. Thus, to set up a project website, you should first run Fossil in ordinary unencrypted HTTP mode like this: | > | > | 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 | the repository file. If the "server" or "http" command are run against a directory full of Fossil repositories, then the ".well-known" sub-directory should be in that top-level directory. Thus, to set up a project website, you should first run Fossil in ordinary unencrypted HTTP mode like this: > ~~~ fossil server --port 80 --acme /home/www/myproject.fossil ~~~ Then you create your public/private key pair and run certbot, giving it a --webroot of /home/www. Certbot will create the sub-directory named "/home/www/.well-known" and put token files there, which the CA will verify. Then certbot will store your new cert in a particular file. Once certbot has obtained your cert, then you can concatenate that cert with your private key and run Fossil in SSL/TLS mode as shown above. [2]: https://en.wikipedia.org/wiki/Automated_Certificate_Management_Environment |
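With the server listening on port 80 in --acme mode as shown, a typical certbot request looks like the following sketch; the domain is this page's example name and the webroot matches the directory above:

> ~~~
certbot certonly --webroot -w /home/www -d example-project.org
~~~

Certbot then leaves `fullchain.pem` and `privkey.pem` under `/etc/letsencrypt/live/example-project.org/`, ready to be concatenated as described above.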
Changes to www/ssl.wiki.
︙ | ︙ | |||
80 81 82 83 84 85 86 | passing the <tt>--with-openssl</tt> option to the <tt>configure</tt> script. Type <tt>./configure --help</tt> for details. Another option is to download the source code to OpenSSL and build Fossil against that private version of OpenSSL: <pre> | | | | | | | | | | | 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 | passing the <tt>--with-openssl</tt> option to the <tt>configure</tt> script. Type <tt>./configure --help</tt> for details. Another option is to download the source code to OpenSSL and build Fossil against that private version of OpenSSL: <pre> cd compat # relative to the Fossil source tree root tar xf /path/to/openssl-*.tar.gz ln -fs openssl-x.y.z openssl cd openssl ./config # or, e.g. ./Configure darwin64-x86_64-cc make -j11 cd ../.. ./configure --with-openssl=tree make -j11 </pre> That will get you a Fossil binary statically linked to this in-tree version of OpenSSL. Beware, taking this path typically opens you up to new problems, which are conveniently covered in the next section! |
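To verify that the resulting binary is linked against the intended OpenSSL, inspect its compile-time options; the exact output format varies across Fossil versions:

<pre>
./fossil version --verbose | grep -i ssl
</pre>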
︙ | ︙ | |||
119 120 121 122 123 124 125 | to accept the certificate the first time you communicate with the server. Verify the certificate fingerprint is correct, then answer "always" if you want Fossil to remember your decision. If you are cloning from or syncing to Fossil servers that use a certificate signed by a well-known CA or one of its delegates, Fossil still has to know which CA roots to trust. When this fails, you get an | | | | | | | | | | 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 | to accept the certificate the first time you communicate with the server. Verify the certificate fingerprint is correct, then answer "always" if you want Fossil to remember your decision. If you are cloning from or syncing to Fossil servers that use a certificate signed by a well-known CA or one of its delegates, Fossil still has to know which CA roots to trust. When this fails, you get an error message that looks like this in Fossil 2.11 and newer: <pre> Unable to verify SSL cert from fossil-scm.org subject: CN = sqlite.org issuer: C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3 sha256: bf26092dd97df6e4f7bf1926072e7e8d200129e1ffb8ef5276c1e5dd9bc95d52 accept this cert and continue (y/N)? </pre> In older versions, the message was much longer and began with this line: <pre> SSL verification failed: unable to get local issuer certificate </pre> Fossil relies on the OpenSSL library to have some way to check a trusted list of CA signing keys. There are two common ways this fails: # The OpenSSL library Fossil is linked to doesn't have a CA signing key set at all, so that it initially trusts no certificates at all. # The OpenSSL library does have a CA cert set, but your Fossil server's TLS certificate was signed by a CA that isn't in that set. 
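To check a locally-signed server against a CA file without involving Fossil at all, the OpenSSL command-line client is handy; the host name and path below are placeholders:

<pre>
openssl s_client -connect fossil.example.com:443 -CAfile /path/to/local-ca.pem < /dev/null
</pre>

A "Verify return code: 0 (ok)" near the end of the output means the CA file is sufficient to validate the server's certificate chain.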
A common reason to fall into the second trap is that you're using certificates signed by a local private CA, as often happens in large enterprises. You can solve this sort of problem by getting your local CA's signing certificate in PEM format and pointing OpenSSL at it: <pre> fossil set --global ssl-ca-location /path/to/local-ca.pem </pre> The use of <tt>--global</tt> with this option is common, since you may have multiple repositories served under certificates signed by that same CA. However, if you have a mix of publicly-signed and locally-signed certificates, you might want to drop the <tt>--global</tt> flag and set this option on a per-repository basis instead. |
︙
may find it acceptable to use the same Mozilla NSS cert set. I do not
know of a way to easily get this from Mozilla themselves, but I did
find a [https://curl.se/docs/caextract.html | third party source] for
the <tt>cacert.pem</tt> file. I suggest placing the file into your
Windows user home directory so that you can then point Fossil at it
like so:

<pre>
fossil set --global ssl-ca-location %userprofile%\cacert.pem
</pre>

This can also happen if you've linked Fossil to a version of OpenSSL
[#openssl-src|built from source]. That same <tt>cacert.pem</tt> fix
can work in that case, too.

When you build Fossil on Linux platforms against the binary OpenSSL
︙
If you attempt to connect to a server which requests a client
certificate, but don't provide one, fossil will show an error message
which explains what to do to authenticate with the server.

<h2 id="server">Server-Side Configuration</h2>

Fossil's built-in HTTP server got [./ssl-server.md | TLS support] in
December 2021, released as version 2.18 in early 2022. Prior to that,
system administrators that wanted to add TLS support to a Fossil
server had to put it behind a reverse proxy that would do the
translation. Since advantages remain for delegating TLS to another
layer in the stack, instructions for doing so continue to be included
in our documentation, such as:

  *  <a id="stunnel" href="./server/any/stunnel.md">Serving via stunnel</a>
  *  <a id="althttpd" href="./server/any/althttpd.md">Serving via stunnel + althttpd</a>
︙
Changes to www/stats.wiki.
<title>Fossil Performance</title>
<h1 align="center">Performance Statistics</h1>

The questions will inevitably arise: How does Fossil perform?
Does it use a lot of disk space or bandwidth? Is it scalable?

In an attempt to answer these questions, this report looks at several
projects that use fossil for configuration management and examines how
well they are working. The following table is a summary of the
results. (Last updated on 2018-06-04.) Explanation and analysis
follows the table.

<table border=1>
<tr>
<th>Project</th>
<th>Number Of Artifacts</th>
<th>Number Of Check-ins</th>
<th>Project Duration<br>(as of 2018-06-04)</th>
<th>Uncompressed Size</th>
<th>Repository Size</th>
︙
Changes to www/sync.wiki.
︙
peer-to-peer communication and without any kind of central authority.
If you are already familiar with CRDTs and were wondering if Fossil
used them, the answer is "yes". We just don't call them by that name.

<h2>2.0 Transport</h2>

All communication between client and server is via HTTP requests.
The server is listening for incoming HTTP requests. The client
issues one or more HTTP requests and receives replies for each
request. The server might be running as an independent server
︙
to represent the listener and initiator of the interaction,
respectively. Nothing in this protocol requires that the server
actually be a back-room processor housed in a datacenter, nor does the
client need to be a desktop or handheld device. For the purposes of
this article "client" simply means the repository that initiates the
conversation and "server" is the repository that responds. Nothing
more.

<h4>2.0.1 HTTPS Transport</h4>

HTTPS differs from HTTP only in that the HTTPS protocol is encrypted
as it travels over the wire. The underlying protocol is the same.
This document describes only the underlying, unencrypted messages that
go client to server and back to client. Whether or not those messages
are encrypted does not come into play in this document.
Fossil includes built-in [./ssl-server.md|support for HTTPS encryption]
in both client and server.

<h4>2.0.2 SSH Transport</h4>

When doing a sync using an "ssh:..." URL, the same HTTP transport
protocol is used. Fossil simply uses
[https://en.wikipedia.org/wiki/Secure_Shell|ssh] to start an instance
of the [/help?cmd=test-http|fossil test-http] command running on the
remote machine. It then sends HTTP requests and gets back HTTP replies
over the SSH connection, rather than sending and receiving over an
internet socket. To see the specific "ssh" command that the Fossil
client runs in order to set up a connection, add either of the
"--httptrace" or "--sshtrace" options to the "fossil sync" command
line.

<h4>2.0.3 FILE Transport</h4>

When doing a sync using a "file:..." URL, the same HTTP protocol is
still used. But instead of sending each HTTP request over a socket or
via SSH, the HTTP request is written into a temporary file. The client
then invokes the [/help?cmd=http|fossil http] command in a subprocess
to process the request and generate a reply. The client then reads the
HTTP reply out of a temporary file on disk, and deletes the two
temporary files. To see the specific "fossil http" command that is run
in order to implement the "file:" transport, add the "--httptrace"
option to the "fossil sync" command.

<h3>2.1 Server Identification</h3>

The server is identified by a URL argument that accompanies the push,
pull, or sync command on the client. (As a convenience to users, the
URL can be omitted on the client command and the same URL from the
most recent push, pull, or sync will be reused. This saves typing in
the common case where the client does multiple syncs to the same
server.) The client modifies the URL by appending the method name
"<b>/xfer</b>" to the end.
For example, if the URL specified on the client command line is

<blockquote>
https://fossil-scm.org/fossil
</blockquote>

Then the URL that is really used to do the synchronization will be:

<blockquote>
https://fossil-scm.org/fossil/xfer
</blockquote>

<h3>2.2 HTTP Request Format</h3>

The client always sends a POST request to the server. The general
format of the POST request is as follows:

<blockquote><pre>
POST /fossil/xfer HTTP/1.0
Host: fossil-scm.hwaci.com:80
Content-Type: application/x-fossil
Content-Length: 4216
<i>content...</i>
</pre></blockquote>

In the example above, the pathname given after the POST keyword on the
first line is a copy of the URL pathname. The Host: parameter is also
taken from the URL. The content type is always either
"application/x-fossil" or "application/x-fossil-debug". The "x-fossil"
content type is the default. The only difference is that "x-fossil"
content is compressed using zlib whereas "x-fossil-debug" is sent
uncompressed.

A typical reply from the server might look something like this:

<blockquote><pre>
HTTP/1.0 200 OK
Date: Mon, 10 Sep 2007 12:21:01 GMT
Connection: close
Cache-control: private
Content-Type: application/x-fossil; charset=US-ASCII
Content-Length: 265
<i>content...</i>
</pre></blockquote>

The content type of the reply is always the same as the content type
of the request.

<h2>3.0 Fossil Synchronization Content</h2>

A synchronization request between a client and server consists of one
or more HTTP requests as described in the previous section. This
section details the "x-fossil" content type.

<h3>3.1 Line-oriented Format</h3>

The x-fossil content type consists of zero or more "cards". Cards are
separated by the newline character ("\n"). Leading and trailing
whitespace on a card is ignored. Blank cards are ignored. Each card is
divided into zero or more space separated tokens. The first token on
each card is the operator. Subsequent tokens are arguments.
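The packaging rules above (one card per line, whole body zlib-compressed for "x-fossil" or left plain for "x-fossil-debug") can be sketched in a few lines of Python. This is an illustration only, not Fossil's actual C implementation, and the card strings used below are placeholders.

```python
import zlib

def make_xfer_body(cards, debug=False):
    """Pack a list of card strings (no trailing newlines) into a request
    body, returning (content-type, body-bytes)."""
    text = "".join(card + "\n" for card in cards).encode("utf-8")
    if debug:
        # "x-fossil-debug" carries the card text uncompressed.
        return "application/x-fossil-debug", text
    # "x-fossil" carries the same card text, zlib-compressed.
    return "application/x-fossil", zlib.compress(text)

ctype, body = make_xfer_body(["pull SERVERCODE PROJECTCODE"])
assert zlib.decompress(body) == b"pull SERVERCODE PROJECTCODE\n"
```

The body would then be sent as the payload of the POST request shown above, with `Content-Length` set to `len(body)`.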
The set of operators understood by servers is slightly different from
the operators understood by clients, though the two are very similar.

<h3>3.2 Login Cards</h3>

Every message from client to server begins with one or more login
cards. Each login card has the following format:

<blockquote>
<b>login</b> <i>userid nonce signature</i>
</blockquote>

The userid is the name of the user that is requesting service from the
server. The nonce is the SHA1 hash of the remainder of the message -
all text that follows the newline character that terminates the login
card. The signature is the SHA1 hash of the concatenation of the nonce
and the user's password.

For each login card, the server looks up the user and verifies that
the nonce matches the SHA1 hash of the remainder of the message. It
then checks the signature hash to make sure the signature matches. If
everything checks out, then the client is granted all privileges of
the specified user.

Privileges are cumulative. There can be multiple successful login
cards. The session privilege is the union of all privileges from all
login cards.

<h3>3.3 File Cards</h3>

Artifacts are transferred using either "file" cards, or "cfile" or
"uvfile" cards. The name "file" card comes from the fact that most
artifacts correspond to files that are under version control. The
"cfile" name is an abbreviation for "compressed file". The "uvfile"
name is an abbreviation for "unversioned file".

<h4>3.3.1 Ordinary File Cards</h4>

For sync protocols, artifacts are transferred using "file" cards.
File cards come in two different formats depending on whether the
artifact is sent directly or as a [./delta_format.wiki|delta] from
some other artifact.

<blockquote>
<b>file</b> <i>artifact-id size</i> <b>\n</b> <i>content</i><br>
<b>file</b> <i>artifact-id delta-artifact-id size</i> <b>\n</b> <i>content</i>
</blockquote>

File cards are followed by in-line "payload" data.
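The nonce and signature computation described for login cards can be sketched as follows. This is an illustrative Python fragment, not Fossil's implementation; the userid and password values are placeholders.

```python
import hashlib

def login_card(userid, password, remainder):
    """Build a login card for a message whose text after the login card's
    terminating newline is `remainder`."""
    # nonce = SHA1 of everything that follows the login card
    nonce = hashlib.sha1(remainder.encode("utf-8")).hexdigest()
    # signature = SHA1 of nonce concatenated with the user's password
    signature = hashlib.sha1((nonce + password).encode("utf-8")).hexdigest()
    return f"login {userid} {nonce} {signature}"
```

The server-side check runs the same two hashes and compares the results against the card's second and third tokens.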
The content of the artifact or the artifact delta is the first <i>size</i> bytes of the x-fossil content that immediately follow the newline that terminates the file card. |
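Because the payload bytes follow the card's newline directly, a receiver cannot simply split the whole body on newlines; it must consume each file card's payload by byte count before resuming card parsing. A hypothetical receiver sketch in Python, assuming the x-fossil body has already been decompressed (delta application is omitted):

```python
def parse_file_cards(data: bytes):
    """Collect 'file' card payloads from decompressed x-fossil text.
    Returns {artifact-id: (delta-source-id or None, payload bytes)}."""
    artifacts = {}
    pos = 0
    while pos < len(data):
        eol = data.index(b"\n", pos)
        tokens = data[pos:eol].split()
        pos = eol + 1
        if tokens and tokens[0] == b"file":
            if len(tokens) == 3:          # file <id> <size>
                aid, src, size = tokens[1], None, int(tokens[2])
            else:                         # file <id> <delta-src> <size>
                aid, src, size = tokens[1], tokens[2], int(tokens[3])
            # The next `size` bytes are payload, not card text: skip them.
            artifacts[aid.decode()] = (src and src.decode(), data[pos:pos + size])
            pos += size
    return artifacts
```

Blank cards produced by the newline that follows a payload are ignored, matching the "blank cards are ignored" rule above.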
︙
the ID of another artifact that is the source of the delta.

File cards are sent in both directions: client to server and server to
client. A delta might be sent before the source of the delta, so both
client and server should remember deltas and be able to apply them
when their source arrives.

<h4>3.3.2 Compressed File Cards</h4>

A client that sends a clone protocol version "3" or greater will
receive artifacts as "cfile" cards while cloning. This card was
introduced to improve the speed of the transfer of content by sending
the compressed artifact directly from the server database to the
client.

Compressed File cards are similar to File cards, sharing the same
in-line "payload" data characteristics and also the same treatment of
direct content or delta content. Cfile cards come in two different
formats depending on whether the artifact is sent directly or as a
delta from some other artifact.

<blockquote>
<b>cfile</b> <i>artifact-id usize csize</i> <b>\n</b> <i>content</i><br>
<b>cfile</b> <i>artifact-id delta-artifact-id usize csize</i> <b>\n</b> <i>content</i><br>
</blockquote>

The first argument of the cfile card is the ID of the artifact that is
being transferred. The artifact ID is the lower-case hexadecimal
representation of the name hash for the artifact.
The second argument of the cfile card is the original size in bytes of
the artifact. The last argument of the cfile card is the number of
compressed bytes of payload that immediately follow the cfile card. If
the cfile card has only three arguments, that means the payload is the
complete content of the artifact. If the cfile card has four
arguments, then the payload is a delta and the second argument is the
ID of another artifact that is the source of the delta and the third
argument is the original size of the delta artifact.

Unlike file cards, cfile cards are only sent in one direction during a
clone from server to client for clone protocol version "3" or greater.

<h4>3.3.3 Private artifacts</h4>

"Private" content consists of artifacts that are not normally synced.
However, private content will be synced when the
[/help?cmd=sync|fossil sync] command includes the "--private" option.

Private content is marked by a "private" card:

<blockquote>
<b>private</b>
</blockquote>

The private card has no arguments and must directly precede a file
card that contains the private content.

<h4>3.3.4 Unversioned File Cards</h4>

Unversioned content is sent in both directions (client to server and
server to client) using "uvfile" cards in the following format:

<blockquote>
<b>uvfile</b> <i>name mtime hash size flags</i> <b>\n</b> <i>content</i>
</blockquote>

The <i>name</i> field is the name of the unversioned file. The
<i>mtime</i> is the last modification time of the file in seconds
since 1970. The <i>hash</i> field is the hash of the content for the
unversioned file, or "<b>-</b>" for deleted content. The <i>size</i>
field is the (uncompressed) size of the content in bytes. The
<i>flags</i> field is an integer which is interpreted
︙
A server will only accept uvfile cards if the login user has the "y"
write-unversioned permission.

Servers send uvfile cards in response to uvgimme cards received from
the client. Clients send uvfile cards when they determine that the
server needs the content based on uvigot cards previously received
from the server.

<h3>3.4 Push and Pull Cards</h3>

Among the first cards in a client-to-server message are the push and
pull cards. The push card tells the server that the client is pushing
content. The pull card tells the server that the client wants to pull
content. In the event of a sync, both cards are sent. The format is as
follows:

<blockquote>
<b>push</b> <i>servercode projectcode</i><br>
<b>pull</b> <i>servercode projectcode</i>
</blockquote>

The <i>servercode</i> argument is the repository ID for the client.
The <i>projectcode</i> is the identifier of the software project that
the client repository contains. The projectcode for the client and
server must match in order for the transaction to proceed.

The server will also send a push card back to the client during a
clone. This is how the client determines what project code to put in
the new repository it is constructing.

The <i>servercode</i> argument is currently unused.
<h3>3.5 Clone Cards</h3>

A clone card works like a pull card in that it is sent from client to
server in order to tell the server that the client wants to pull
content. The clone card comes in two formats. Older clients use the
no-argument format and newer clients use the two-argument format.

<blockquote>
<b>clone</b><br>
<b>clone</b> <i>protocol-version sequence-number</i>
</blockquote>

<h4>3.5.1 Protocol 3</h4>

The latest clients send a two-argument clone message with a protocol
version of "3". (Future versions of Fossil might use larger protocol
version numbers.) Version "3" of the protocol enhanced version "2" by
introducing the "cfile" card which is intended to speed up clone
operations. Instead of sending "file" cards, the server will send
"cfile" cards.

<h4>3.5.2 Protocol 2</h4>

The sequence-number sent is the number of artifacts received so far.
For the first clone message, the sequence number is 0. The server will
respond by sending file cards for some number of artifacts up to the
maximum message size. The server will also send a single "clone_seqno"
card to the client so that the client can know where the server left
off.

<blockquote>
<b>clone_seqno</b> <i>sequence-number</i>
</blockquote>

The clone message in subsequent HTTP requests for the same clone
operation will use the sequence-number from the clone_seqno of the
previous reply.

In response to an initial clone message, the server also sends the
client a push message so that the client can discover the projectcode for
︙
The legacy protocol works well for smaller repositories (50MB with
50,000 artifacts) but is too slow and unwieldy for larger
repositories. The version 2 protocol is an effort to improve
performance. Further performance improvements with higher-numbered
clone protocols are possible in future versions of Fossil.

<h3>3.6 Igot Cards</h3>

An igot card can be sent from either client to server or from server
to client in order to indicate that the sender holds a copy of a
particular artifact. The format is:

<blockquote>
<b>igot</b> <i>artifact-id</i> ?<i>flag</i>?
</blockquote>

The first argument of the igot card is the ID of the artifact that the
sender possesses. The receiver of an igot card will typically check to
see if it also holds the same artifact and if not it will request the
artifact using a gimme card in either the reply or in the next
message.

If the second argument exists and is "1", then the artifact identified
by the first argument is private on the sender and should be ignored
unless a "--private" [/help?cmd=sync|sync] is occurring.

The name "igot" comes from the English slang expression "I got"
meaning "I have".

<h4>3.6.1 Unversioned Igot Cards</h4>

Zero or more "uvigot" cards are sent from server to client when
synchronizing unversioned content. The format of a uvigot card is as
follows:

<blockquote>
<b>uvigot</b> <i>name mtime hash size</i>
</blockquote>

The <i>name</i> argument is the name of an unversioned file.
The <i>mtime</i> is the last modification time of the unversioned file in seconds since 1970. The <i>hash</i> is the SHA1 or SHA3-256 hash of the unversioned file content, or "<b>-</b>" if the file has been deleted. The <i>size</i> is the uncompressed size of the file in bytes. |
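A client receiving uvigot cards must decide, per file, whether to send the file, fetch it, or do nothing. The following Python helper is a hypothetical sketch of that decision; the newest-copy-wins tiebreak is an assumption made for illustration, not a statement of Fossil's exact algorithm.

```python
def uvigot_action(local, name, mtime, hash_):
    """Decide what to do about a uvigot card.
    local: {name: (mtime, hash)} for the client's unversioned files.
    Returns "uvgimme" (fetch), "uvfile" (send), or "nothing"."""
    if name not in local:
        return "uvgimme"                 # server has a file we lack: fetch it
    lmtime, lhash = local[name]
    if lhash == hash_:
        return "nothing"                 # content already identical
    # Content differs: assume the side with the newer mtime wins.
    return "uvfile" if lmtime > mtime else "uvgimme"
```

A real client would batch the resulting "uvfile" transfers across subsequent HTTP requests, as described below.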
︙
When a client receives a "uvigot" card, it checks to see if the file
needs to be transferred from client to server or from server to
client. If a client-to-server transmission is needed, the client
schedules that transfer to occur on a subsequent HTTP request. If a
server-to-client transfer is needed, then the client sends a "uvgimme"
card back to the server to request the file content.

<h3>3.7 Gimme Cards</h3>

A gimme card is sent from either client to server or from server to
client. The gimme card asks the receiver to send a particular artifact
back to the sender. The format of a gimme card is this:

<blockquote>
<b>gimme</b> <i>artifact-id</i>
</blockquote>

The argument to the gimme card is the ID of the artifact that the
sender wants. The receiver will typically respond to a gimme card by
sending a file card in its reply or in the next message.

The "gimme" name means "give me". The imperative "give me" is
pronounced as if it were a single word "gimme" in some dialects of
English (including the dialect spoken by the original author of
Fossil).

<h4>3.7.1 Unversioned Gimme Cards</h4>

When synchronizing unversioned content, the client may send "uvgimme"
cards to the server.
A uvgimme card requests that the server send unversioned content to
the client. The format of a uvgimme card is as follows:

<blockquote>
<b>uvgimme</b> <i>name</i>
</blockquote>

The <i>name</i> is the name of the unversioned file found on the
server that the client would like to have. When a server sees a
uvgimme card, it normally responds with a uvfile card, though it might
also send another uvigot card if the HTTP reply is already oversized.

<h3>3.8 Cookie Cards</h3>

A cookie card can be used by a server to record a small amount of
state information on a client. The server sends a cookie to the
client. The client sends the same cookie back to the server on its
next request. The cookie card has a single argument which is its
payload.

<blockquote>
<b>cookie</b> <i>payload</i>
</blockquote>

The client is not required to return the cookie to the server on its
next request. Or the client might send a cookie from a different
server on the next request. So the server must not depend on the
cookie and the server must structure the cookie payload in such a way
that it can tell if the cookie it sees is its own cookie or a cookie
from another server. (Typically the server will embed its servercode
as part of the cookie.)

<h3>3.9 Request-Configuration Cards</h3>

A request-configuration or "reqconfig" card is sent from client to
server in order to request that the server send back "configuration"
data. "Configuration" data is information about users or website
appearance or other administrative details which are not part of the
persistent and versioned state of the project. For example, the "name"
of the project, the default Cascading Style Sheet (CSS) for the
web-interface, and the project logo displayed on the web-interface are
all configuration data elements.

The reqconfig card is normally sent in response to the "fossil
configuration pull" command.
The format is as follows: <blockquote> <b>reqconfig</b> <i>configuration-name</i> </blockquote> As of 2018-06-04, the configuration-name must be one of the following values: <table border=0 align="center"> <tr><td valign="top"> <ul> |
︙
<li> ignore-glob
<li> keep-glob
<li> crlf-glob
</ul></td><td valign="top"><ul>
<li> crnl-glob
<li> encoding-glob
<li> empty-dirs
<li> <s>allow-symlinks</s> (removed 2020-08, version 2.12.1)
<li> dotfiles
<li> parent-project-code
<li> parent-project-name
<li> hash-policy
<li> mv-rm-files
<li> ticket-table
<li> ticket-common
︙
values instead of a single value. The content of these configuration
items is returned in a "config" card that contains pure SQL text that
is intended to be evaluated by the client. The @user and @concealed
configuration items contain sensitive information and are ignored for
clients without sufficient privilege.

<h3>3.10 Configuration Cards</h3>

A "config" card is used to send configuration information from client
to server (in response to a "fossil configuration push" command) or
from server to client (in response to a "fossil configuration pull" or
"fossil clone" command). The format is as follows:

<blockquote>
<b>config</b> <i>configuration-name size</i> <b>\n</b> <i>content</i>
</blockquote>

The server will only accept a config card if the user has "Admin"
privilege. A client will only accept a config card if it had sent a
corresponding reqconfig card in its request.

The content of the configuration item is used to overwrite the
corresponding configuration data in the receiver.

<h3>3.11 Pragma Cards</h3>

The client may try to influence the behavior of the server by issuing
a pragma card:

<blockquote>
<b>pragma</b> <i>name value...</i>
</blockquote>

The "pragma" card has at least one argument which is the pragma name.
The pragma name defines what the pragma does. A pragma might have zero
or more "value" arguments depending on the pragma name.

New pragma names may be added to the protocol from time to time
︙
a successful commit. This instructs the server to release any lock on
any check-in previously held by that client. The ci-unlock pragma
helps to avoid false-positive lock warnings that might arise if a
check-in is aborted and then restarted on a branch.
</ol>

<h3>3.12 Comment Cards</h3>

Any card that begins with "#" (ASCII 0x23) is a comment card and is
silently ignored.

<h3>3.13 Message and Error Cards</h3>

If the server discovers anything wrong with a request, it generates an
error card in its reply. When the client sees the error card, it
displays an error message to the user and aborts the sync operation.
An error card looks like this:

<blockquote>
<b>error</b> <i>error-message</i>
</blockquote>

The error message is English text that is encoded in order to be a
single token. A space (ASCII 0x20) is represented as "\s" (ASCII 0x5C,
0x73). A newline (ASCII 0x0a) is "\n" (ASCII 0x5C, 0x6E). A backslash
(ASCII 0x5C) is represented as two backslashes "\\". Apart from space
and newline, no other whitespace characters nor any unprintable
characters are allowed in the error message.

The server can also send a message card that also prints a message on
the client console, but which is not an error:

<blockquote>
<b>message</b> <i>message-text</i>
</blockquote>

The message-text uses the same format as an error message.
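The single-token escaping rules for error and message text can be expressed as a pair of small helpers. This is an illustrative Python sketch of the encoding described above, not code from Fossil itself.

```python
def encode_msg(text: str) -> str:
    """Escape message text into a single whitespace-free token.
    Backslashes must be doubled first so later escapes are unambiguous."""
    return text.replace("\\", "\\\\").replace(" ", "\\s").replace("\n", "\\n")

def decode_msg(token: str) -> str:
    """Reverse encode_msg by scanning escapes left to right."""
    out, i = [], 0
    while i < len(token):
        if token[i] == "\\" and i + 1 < len(token):
            out.append({"s": " ", "n": "\n", "\\": "\\"}.get(token[i + 1], token[i + 1]))
            i += 2
        else:
            out.append(token[i])
            i += 1
    return "".join(out)

assert decode_msg(encode_msg("not authorized\nto write")) == "not authorized\nto write"
```

Doing the backslash substitution first matters: otherwise the backslashes introduced by "\s" and "\n" would themselves be re-escaped.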
<h3>3.14 Unknown Cards</h3> If either the client or the server sees a card that is not described above, then it generates an error and aborts. <h2>4.0 Phantoms And Clusters</h2> When a repository knows that an artifact exists and knows the ID of that artifact, but it does not know the artifact content, then it stores that artifact as a "phantom". A repository will typically create a phantom when it receives an igot card for an artifact that it does not hold or when it receives a file card that references a delta source that it does not hold. When a server is generating its reply or when a client is |
︙
Any artifact that does not match the specifications of a cluster
exactly is not a cluster. There must be no extra whitespace in the
artifact. There must be one or more M cards. There must be a single Z
card with a correct MD5 checksum. And all cards must be in strict
lexicographical order.

<h3>4.1 The Unclustered Table</h3>

Every repository maintains a table named "<b>unclustered</b>" which
records the identity of every artifact and phantom it holds that is
not mentioned in a cluster. The entries in the unclustered table can
be thought of as leaves on a tree of artifacts. Some of the
unclustered artifacts will be other clusters. Those clusters may
contain other clusters, which might contain still more clusters, and
so forth. Beginning with the artifacts in the unclustered table, one
can follow the chain of clusters to find every artifact in the
repository.

<h2>5.0 Synchronization Strategies</h2>

<h3>5.1 Pull</h3>

A typical pull operation proceeds as shown below. Details of the
actual implementation may vary slightly but the gist of a pull is
captured in the following steps:

<ol>
<li>The client sends login and pull cards.
︙
amount of overlap between clusters in the common configuration where
there is a single server and many clients.  The same synchronization
protocol will continue to work even if there are multiple servers or
if servers and clients sometimes change roles.  The only negative
effect of these unusual arrangements is that more than the minimum
number of clusters might be generated.

<h3>5.2 Push</h3>

A typical push operation proceeds roughly as shown below.  As with a
pull, the actual implementation may vary slightly.

<ol>
<li>The client sends login and push cards.
<li>The client sends file cards for any artifacts that it holds that have
︙
As with a pull, the steps of a push operation repeat until the server
knows all artifacts that exist on the client.  Also, as with pull,
the client attempts to keep the size of the request from growing too
large by suppressing file cards once the size of the request reaches
1MB.

<h3 id="sync">5.3 Sync</h3>

A sync is just a pull and a push that happen at the same time.  The
first three steps of a pull are combined with the first five steps of
a push.  Steps (4) through (7) of a pull are combined with steps (5)
through (8) of a push.  And steps (8) through (10) of a pull are
combined with step (9) of a push.

<h3>5.4 Unversioned File Sync</h3>

"Unversioned files" are files held in the repository where only the
most recent version of the file is kept rather than the entire change
history.  Unversioned files are intended to be used to store
ephemeral content, such as compiled binaries of the most recent
release.
︙
cards and answers "uvgimme" cards with "uvfile" cards in its reply.
</ol>

The last two steps might be repeated multiple times if there is more
unversioned content to be transferred than will fit comfortably in a
single HTTP request.

<h2>6.0 Summary</h2>

Here are the key points of the synchronization protocol:

<ol>
<li>The client sends one or more PUSH HTTP requests to the server.
The request and reply content type is "application/x-fossil".
<li>HTTP request content is compressed using zlib.
︙
<li>Clusters are created automatically on the server during a pull.
<li>Repositories keep track of all artifacts that are not named in
any cluster and send igot messages for those artifacts.
<li>Repositories keep track of all the phantoms they hold and send
gimme messages for those artifacts.
</ol>

<h2>7.0 Troubleshooting And Debugging Hints</h2>

If you run the [/help?cmd=sync|fossil sync] command (or
[/help?cmd=pull|pull] or [/help?cmd=push|push] or
[/help?cmd=clone|clone]) with the --httptrace option, Fossil will
keep a copy of each HTTP request and reply in files named:

<ul>
︙
Changes to www/tech_overview.wiki.
<title>Technical Overview</title>
<h2 align="center">
A Technical Overview<br>Of The Design And Implementation<br>Of Fossil
</h2>

<h2>1.0 Introduction</h2>

At its lowest level, a Fossil repository consists of an unordered set
of immutable "artifacts".  You might think of these artifacts as
"files", since in many cases the artifacts are exactly that.  But
other "structural artifacts" are also included in the mix.
︙
file that people are normally referring to when they say "a Fossil
repository".  The checkout database is found in the working checkout
for a project and contains state information that is unique to that
working checkout.

Fossil does not always use all three database files.  The web
interface, for example, typically only uses the repository database.
And the [/help/settings | fossil settings] command only opens the
configuration database when the --global option is used.  But other
commands use all three databases at once.  For example, the
[/help/status | fossil status] command will first locate the checkout
database, then use the checkout database to find the repository
database, then open the configuration database.  Whenever multiple
databases are used at the same time, they are all opened on the same
SQLite database connection using SQLite's
[http://www.sqlite.org/lang_attach.html | ATTACH] command.

The chart below provides a quick summary of how each of these
database files is used by Fossil, with detailed discussion following.
<table border="1" width="80%" cellpadding="0" align="center">
<tr>
<td width="33%" valign="top">
<h3 align="center">Configuration Database<br>"~/.fossil" or<br>
"~/.config/fossil.db"</h3>
<ul>
<li>Global [/help/settings |settings]
<li>List of active repositories used by the [/help/all | all] command
</ul>
</td>
<td width="34%" valign="top">
<h3 align="center">Repository Database<br>"<i>project</i>.fossil"</h3>
<ul>
<li>[./fileformat.wiki | Global state of the project]
    encoded using delta-compression
<li>Local [/help/settings|settings]
<li>Web interface display preferences
<li>User credentials and permissions
<li>Metadata about the global state to facilitate rapid queries
</ul>
</td>
<td width="33%" valign="top">
<h3 align="center">Checkout Database<br>"_FOSSIL_" or ".fslckout"</h3>
<ul>
<li>The repository database used by this checkout
<li>The version currently checked out
<li>Other versions [/help/merge | merged] in but not yet
    [/help/commit | committed]
<li>Changes from the [/help/add | add], [/help/delete | delete],
    and [/help/rename | rename] commands that have not yet been
    committed
<li>"mtime" values and other information used to efficiently detect
    local edits
<li>The "[/help/stash | stash]"
<li>Information needed to "[/help/undo|undo]" or "[/help/redo|redo]"
</ul>
</td>
</tr>
</table>

<h3 id="configdb">2.1 The Configuration Database</h3>

The configuration database holds cross-repository preferences and a
list of all repositories for a single user.
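To make the multi-database ATTACH arrangement concrete, here is a hedged Python sketch of the same pattern using in-memory databases. This is an illustration, not Fossil code; the minimal one-table schemas below are stand-ins that merely echo the names of Fossil's actual tables (vvar, config, global_config):

```python
import sqlite3

# One SQLite connection, three logical databases -- the same pattern
# Fossil uses: the checkout database is "main", and the repository and
# configuration databases are ATTACHed to the same connection.
db = sqlite3.connect(":memory:")               # "main" = checkout database
db.execute("ATTACH ':memory:' AS repository")  # <project>.fossil
db.execute("ATTACH ':memory:' AS configdb")    # ~/.config/fossil.db

# Minimal stand-in tables, one per database, for illustration only.
db.execute("CREATE TABLE main.vvar(name TEXT, value TEXT)")
db.execute("CREATE TABLE repository.config(name TEXT, value TEXT)")
db.execute("CREATE TABLE configdb.global_config(name TEXT, value TEXT)")

# Once attached, a single query can name tables in any of the three
# databases using the database-name prefix.
db.execute("INSERT INTO repository.config VALUES('project-name','demo')")
name = db.execute("SELECT value FROM repository.config"
                  " WHERE name='project-name'").fetchone()[0]
print(name)  # -> demo
```

The benefit of this design is that cross-database queries need no application-level plumbing; SQLite resolves the prefixed table names itself.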
︙
operations such as "sync" or "rebuild" on all repositories managed by
a user.

<h4 id="configloc">2.1.1 Location Of The Configuration Database</h4>

On Unix systems, the configuration database is named by the following
algorithm:

<blockquote><table border="0">
<tr><td>1. if environment variable FOSSIL_HOME exists
    <td> → <td>$FOSSIL_HOME/.fossil
<tr><td>2. if file ~/.fossil exists
    <td> → <td>~/.fossil
<tr><td>3. if environment variable XDG_CONFIG_HOME exists
    <td> → <td>$XDG_CONFIG_HOME/fossil.db
<tr><td>4. if the directory ~/.config exists
    <td> → <td>~/.config/fossil.db
<tr><td>5. Otherwise
    <td> → <td>~/.fossil
</table></blockquote>

Another way of thinking of this algorithm is the following:

  *  Use "$FOSSIL_HOME/.fossil" if the FOSSIL_HOME variable is defined
  *  Use the XDG-compatible name (usually ~/.config/fossil.db) on XDG
     systems if the ~/.fossil file does not already exist
  *  Otherwise, use the traditional unix name of "~/.fossil"
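The Unix lookup order above can be written out as a small Python function. This is a toy model, not Fossil's code: the environment and the set of existing paths are passed in as plain values so the rules can be shown without touching a real filesystem.

```python
def config_db_path(env, exists):
    """Return the Unix configuration-database path per the rules above.

    `env` stands in for the environment (a dict) and `exists` is a set
    of paths that are present on disk.
    """
    home = env.get("HOME", "/home/user")
    if "FOSSIL_HOME" in env:                       # rule 1
        return env["FOSSIL_HOME"] + "/.fossil"
    if home + "/.fossil" in exists:                # rule 2
        return home + "/.fossil"
    if "XDG_CONFIG_HOME" in env:                   # rule 3
        return env["XDG_CONFIG_HOME"] + "/fossil.db"
    if home + "/.config" in exists:                # rule 4
        return home + "/.config/fossil.db"
    return home + "/.fossil"                       # rule 5

print(config_db_path({"HOME": "/home/u"}, {"/home/u/.config"}))
# -> /home/u/.config/fossil.db
```

Note how an existing legacy ~/.fossil file (rule 2) wins over the XDG rules, which is what keeps old installations working unchanged.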
︙
  *  %FOSSIL_HOME%/_fossil
  *  %LOCALAPPDATA%/_fossil
  *  %APPDATA%/_fossil
  *  %USERPROFILES%/_fossil
  *  %HOMEDRIVE%%HOMEPATH%/_fossil

The second case is the one that usually determines the name.

Note that the FOSSIL_HOME environment variable can always be set to
determine the location of the configuration database.  Note also that
the configuration database file itself is called ".fossil" or
"fossil.db" on unix but "_fossil" on windows.

The [/help?cmd=info|fossil info] command will show the location of
the configuration database on a line that starts with "config-db:".
︙
Changes to www/th1.md.
︙
that a TH1 script is really just a list of text commands, not a
context-free language with a grammar like C/C++. This can be
confusing to long-time C/C++ programmers because TH1 does look a lot
like C/C++, but the semantics of TH1 are closer to FORTH or Lisp than
they are to C.

Consider the `if` command in TH1.

        if {$current eq "dev"} {
          puts "hello"
        } else {
          puts "world"
        }

The example above is a single command. The first token, and the name
of the command, is `if`. The second token is `$current eq "dev"` - an
expression. (The outer {...} are removed from each token by the
command parser.) The third token is the `puts "hello"`, with its
whitespace and newlines. The fourth token is `else` and the fifth and
last token is `puts "world"`.
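The tokenization rules just described can be modeled in a few lines of Python. This is a toy illustration, not Fossil's parser; real Tcl/TH1 also handles `"..."` quoting, `[...]` command substitution, and backslash escapes.

```python
def tokenize(cmd):
    """Split one Tcl/TH1-style command into tokens.

    Toy model of the rules above: tokens are separated by unquoted
    whitespace, and an outer {...} pair quotes a token (the braces
    themselves are stripped, nested braces are kept).
    """
    tokens, i, n = [], 0, len(cmd)
    while i < n:
        while i < n and cmd[i].isspace():   # skip separating whitespace
            i += 1
        if i >= n:
            break
        if cmd[i] == "{":                   # brace-quoted token
            depth, i = 1, i + 1
            start = i
            while i < n and depth:
                if cmd[i] == "{":
                    depth += 1
                elif cmd[i] == "}":
                    depth -= 1
                i += 1
            tokens.append(cmd[start:i - 1])  # strip the outer braces
        else:                                # bare word
            start = i
            while i < n and not cmd[i].isspace():
                i += 1
            tokens.append(cmd[start:i])
    return tokens

cmd = 'if {$current eq "dev"} {\n  puts "hello"\n} else {\n  puts "world"\n}'
print(len(tokenize(cmd)))  # -> 5  (if, expr, then-script, else, else-script)
```

Running the sketch on the example command yields exactly the five tokens described above, including the embedded newlines inside the two scriptlet tokens.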
︙
All of this also explains the emphasis on *unescaped* characters
above: the curly braces `{ }` are string quoting characters in
Tcl/TH1, not block delimiters as in C. This is how we can have a
command that extends over multiple lines. It is also why the `else`
keyword must be cuddled up with the closing brace for the `if`
clause's scriptlet. The following is invalid Tcl/TH1:

        if {$current eq "dev"} {
          puts "hello"
        }
        else {
          puts "world"
        }

If you try to run this under either Tcl or TH1, the interpreter will
tell you that there is no `else` command, because with the newline on
the third line, you terminated the `if` command.

Occasionally in Tcl/TH1 scripts, you may need to use a backslash at
the end of a line to allow a command to extend over multiple lines
without being considered two separate commands. Here's an example
from one of Fossil's test scripts:

        return [lindex [regexp -line -inline -nocase -- \
          {^uuid:\s+([0-9A-F]{40}) } [eval [getFossilCommand \
          $repository "" info trunk]]] end]

Those backslashes allow the command to wrap nicely within a standard
terminal width while telling the interpreter to consider those three
lines as a single command.

Summary of Core TH1 Commands
︙
This command causes that javascript file to be appended to the
delivered document.

<a id="capexpr"></a>TH1 capexpr Command
-----------------------------------------------------

Added in Fossil 2.15.

  *  capexpr CAPABILITY-EXPR

The capability expression is a list. Each term of the list is a
cluster of [capability letters](./caps/ref.html). The overall
expression is true if any one term is true. A single term is true if
all letters within that term are true. Or, if the term begins with
"!", then the term is true if none of the letters in that term are
true. Or, if the term begins with "@" then the term is true if all
of the capability letters in that term are available to the
"anonymous" user. Or, if the term is "*" then it is always true.
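The rules above can be modeled with a short Python sketch. This is a simplified illustration of the documented behavior, not Fossil's C implementation (which remains the authority on edge cases); the capability sets are passed in as plain strings of letters.

```python
def capexpr(expr, user_caps, anon_caps=""):
    """Evaluate a capability expression per the rules described above.

    `expr` is a list of terms; `user_caps` and `anon_caps` are strings
    of capability letters held by the current and anonymous users.
    """
    for term in expr:
        if term == "*":                        # "*" is always true
            return True
        if term.startswith("!"):               # true if none of the letters
            if not any(c in user_caps for c in term[1:]):
                return True
        elif term.startswith("@"):             # checked against "anonymous"
            if term[1:] and all(c in anon_caps for c in term[1:]):
                return True
        elif term and all(c in user_caps for c in term):
            return True                        # all letters available
    return False

print(capexpr(["j", "o", "r"], "o"))  # True: user has o
print(capexpr(["oh"], "o"))           # False: needs both o and h
print(capexpr(["!L"], ""))            # True: user is not logged in
```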
Examples:

```
capexpr {j o r}        True if any one of j, o, or r are available
capexpr {oh}           True if both o and h are available
capexpr {@2 @3 4 5 6}  2 or 3 available for anonymous or one of
                       4, 5 or 6 is available for the user
capexpr L              True if the user is logged in
capexpr !L             True if the user is not logged in
```

The `L` pseudo-capability is intended only to be used on its own or
with the `!` prefix for implementing login/logout menus via the
`mainmenu` site configuration option:

```
︙
 1.  **w** -- _Wiki_

To be clear, only one of the document classes identified by each
STRING needs to be searchable in order for that argument to be true.
But all arguments must be true for this routine to return true.
Hence, to see if ALL document classes are searchable:

        if {[searchable c d t w]} {...}

But to see if ANY document class is searchable:

        if {[searchable cdtw]} {...}

This command is useful for enabling or disabling a "Search" entry on
the menu bar.

<a id="setParameter"></a>TH1 setParameter Command
---------------------------------------------------
︙
Changes to www/theory1.wiki.
<title>Thoughts On The Design Of The Fossil DVCS</title>
<h1 align="center">Thoughts On The Design Of The Fossil DVCS</h1>

Two questions (or criticisms) that arise frequently regarding Fossil
can be summarized as follows:

  1.  Why is Fossil based on SQLite instead of a distributed NoSQL
      database?

  2.  Why is Fossil written in C instead of a modern high-level
      language?
︙
Changes to www/tickets.wiki.
︙
<h3>2.1 Ticket Table Schema</h3>

The two ticket tables are called TICKET and TICKETCHNG.  The default
schema (as of this writing) for these two tables is shown below:

<blockquote><verbatim>
CREATE TABLE ticket(
  -- Do not change any column that begins with tkt_
  tkt_id INTEGER PRIMARY KEY,
  tkt_uuid TEXT UNIQUE,
  tkt_mtime DATE,
  tkt_ctime DATE,
  -- Add as many fields as required below this line
︙
  -- Add as many fields as required below this line
  login TEXT,
  username TEXT,
  mimetype TEXT,
  icomment TEXT
);
CREATE INDEX ticketchng_idx1 ON ticketchng(tkt_id, tkt_mtime);
</verbatim></blockquote>

Generally speaking, there is one row in the TICKETCHNG table for each
change to each ticket.  In other words, there is one row in the
TICKETCHNG table for each low-level ticket change artifact.  The
TICKET table, on the other hand, contains a summary of the current
status of each ticket.
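A small SQLite session (here driven from Python) may help make the TICKET/TICKETCHNG relationship concrete. This is a trimmed-down sketch, not Fossil's ticket code: the "status" and "title" columns stand in for the customizable fields, and the inserted values are invented for illustration.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Trimmed-down versions of the two default tables shown above.
db.executescript("""
CREATE TABLE ticket(
  tkt_id INTEGER PRIMARY KEY,
  tkt_uuid TEXT UNIQUE,
  status TEXT,
  title TEXT
);
CREATE TABLE ticketchng(
  tkt_id INTEGER REFERENCES ticket,
  tkt_mtime DATE,
  login TEXT,
  icomment TEXT
);
""")

# One ticket: TICKET holds the current summary; TICKETCHNG holds one
# row per change (one row per low-level ticket change artifact).
db.execute("INSERT INTO ticket VALUES(1,'deadbeef','Closed','Crash on start')")
db.execute("INSERT INTO ticketchng VALUES(1, 2459000.5, 'alice', 'Opened')")
db.execute("INSERT INTO ticketchng VALUES(1, 2459001.5, 'bob', 'Fixed; closing')")

# Current status comes from TICKET; the full history from TICKETCHNG.
rows = db.execute("""
  SELECT t.status, c.login, c.icomment
    FROM ticket t JOIN ticketchng c USING(tkt_id)
   ORDER BY c.tkt_mtime
""").fetchall()
print(rows)
```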
︙
Changes to www/unvers.wiki.
<title>Unversioned Content</title>
<h1 align="center">Unversioned Content</h1>

"Unversioned content" or "unversioned files" are files stored in a
Fossil repository without history, meaning it retains the newest
version of each such file, and that alone.

Though it omits history, Fossil does sync unversioned content between
repositories.  In the event of a conflict during a sync, it retains
the most recent version of each unversioned file, discarding older
versions.

Unversioned files are useful for storing ephemeral content such as
builds or frequently changing web pages.  We store the
[https://fossil-scm.org/home/uv/download.html|download] page of the
self-hosting Fossil repository as unversioned content, for example.
︙
the [/help?cmd=/uvlist|/uvlist] URL. ([/uvlist|example]).

<h2>Syncing Unversioned Files</h2>

Unversioned content does not sync between repositories by default.
One must request it via commands such as:

<blockquote><pre>
fossil sync <b>-u</b>
fossil clone <b>-u</b> <i>URL local-repo-name</i>
fossil unversioned sync
</pre></blockquote>

The [/help?cmd=sync|fossil sync] and [/help?cmd=clone|fossil clone]
commands will synchronize unversioned content if and only if they're
given the "-u" (or "--unversioned") command-line option.  The
[/help?cmd=unversioned|fossil unversioned sync] command synchronizes
the unversioned content without synchronizing anything else.
︙
<i>(This section outlines the current implementation of unversioned
files.  This is not an interface spec and hence subject to
change.)</i>

Unversioned content is stored in the repository in the "unversioned"
table:

<blockquote><pre>
CREATE TABLE unversioned(
  uvid INTEGER PRIMARY KEY AUTOINCREMENT,  -- unique ID for this file
  name TEXT UNIQUE,       -- Name of the file
  rcvid INTEGER,          -- From whence this file was received
  mtime DATETIME,         -- Last change (seconds since 1970)
  hash TEXT,              -- SHA1 hash of uncompressed content
  sz INTEGER,             -- Size of uncompressed content
  encoding INT,           -- 0: plaintext  1: zlib compressed
  content BLOB            -- File content
);
</pre></blockquote>

Fossil does not create the table ahead of need.  If there are no
unversioned files in the repository, the "unversioned" table will not
exist.  Consequently, one simple way to purge all unversioned content
from a repository is to run:

<blockquote><pre>
fossil sql "DROP TABLE unversioned; VACUUM;"
</pre></blockquote>

Lacking history for unversioned files, Fossil does not attempt delta
compression on them.  Fossil servers exchange unversioned content
whole; they do not attempt to "diff" your local version against the
remote and send only the changes.  We point this out because one
use-case for unversioned content is to send large,
frequently-changing files.  Appreciate the consequences before making
each change.

There are two bandwidth-saving measures in "<tt>fossil uv sync</tt>".
The first is the regular HTTP payload compression step, done on all
syncs.
The second is that Fossil exchanges SHA1 hashes to determine when it
can avoid sending duplicate content over the wire unnecessarily.  See
the [./sync.wiki|synchronization protocol documentation] for further
information.
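As a sketch of how the schema above might be used, the following Python snippet stores a file in an "unversioned"-style table, keeping zlib-compressed content (encoding=1) when that is smaller and plaintext (encoding=0) otherwise. This is an illustration of the schema's intent, not Fossil's implementation, and the file name and timestamp are made up.

```python
import sqlite3, zlib, hashlib

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE unversioned(
  uvid INTEGER PRIMARY KEY AUTOINCREMENT,
  name TEXT UNIQUE, rcvid INTEGER, mtime DATETIME,
  hash TEXT, sz INTEGER, encoding INT, content BLOB)""")

def uv_store(db, name, data, mtime):
    """Store one unversioned file: zlib-compressed when that shrinks
    the content, plaintext otherwise.  The hash and sz columns always
    describe the UNCOMPRESSED content, per the schema comments."""
    comp = zlib.compress(data)
    enc, blob = (1, comp) if len(comp) < len(data) else (0, data)
    db.execute("""INSERT OR REPLACE INTO unversioned
                  (name, rcvid, mtime, hash, sz, encoding, content)
                  VALUES(?, 1, ?, ?, ?, ?, ?)""",
               (name, mtime, hashlib.sha1(data).hexdigest(),
                len(data), enc, blob))

uv_store(db, "download.html", b"<html>" + b"x" * 1000 + b"</html>", 1700000000)
name, sz, enc = db.execute(
    "SELECT name, sz, encoding FROM unversioned").fetchone()
print(name, sz, enc)  # highly compressible content ends up with encoding=1
```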
Changes to www/webui.wiki.
︙
As an example of how useful this web interface can be, the entire
[./index.wiki | Fossil website], including the document you are now
reading, is rendered using the Fossil web interface, with no
enhancements, and little customization.

<blockquote>
<b>Key point:</b> <i>The Fossil website is just a running instance of
Fossil!</i>
</blockquote>

Note also that because Fossil is a distributed system, you can run
the web interface on your local machine while off network (for
example, while on an airplane), including making changes to wiki
pages and/or trouble tickets, then synchronize with your co-workers
after you reconnect.  When you clone a Fossil repository, you don't
just get the project source code, you get the entire project
management website.

<h2>Very Simple Startup</h2>

To start using the built-in Fossil web interface on an existing
Fossil repository, simply type this:

<b>fossil ui existing-repository.fossil</b>

Substitute the name of your repository, of course.  The "ui" command
will start a web server running (it figures out an available TCP
port to use on its own) and then automatically launches your web
browser to point at that server.  If you run the "ui" command from
within an open check-out, you can omit the repository name:

<b>fossil ui</b>

The latter case is a very useful short-cut when you are working on a
Fossil project and you want to quickly do some work with the web
interface.  Notice that Fossil automatically finds an unused TCP port
to run the server on and automatically points your web browser to the
correct URL.
So there is never any fumbling around trying to find an open port or to type arcane strings into your browser URL entry box. |
︙
available to a distributed team by simply copying the single
repository file up to a web server that supports CGI or SCGI.  To run
Fossil as CGI, just put the <b>sample-project.fossil</b> file in a
directory where CGI scripts have both read and write permission on
the file and the directory that contains the file, then add a CGI
script that looks something like this:

<verbatim>
#!/usr/local/bin/fossil
repository: /home/www/sample-project.fossil
</verbatim>

Adjust the script above so that the paths are correct for your
system, of course, and also make sure the Fossil binary is installed
on the server.  But that is <u>all</u> you have to do.  You now have
everything you need to host a distributed software development
project in less than five minutes using a two-line CGI script.

Instructions for setting up an SCGI server are
[./scgi.wiki | available separately].

You don't have a CGI- or SCGI-capable web server running on your
server machine?  Not a problem.  The Fossil interface can also be
launched via inetd or xinetd.  An inetd configuration line sufficient
to launch the Fossil web interface looks like this:

<verbatim>
80 stream tcp nowait.1000 root /usr/local/bin/fossil \
/usr/local/bin/fossil http /home/www/sample-project.fossil
</verbatim>

As always, you'll want to adjust the pathnames to whatever is
appropriate for your system.  The xinetd setup uses a different
syntax but follows the same idea.
Changes to www/whyusefossil.wiki.
<title>Why Use Fossil</title>
<h1 align='center'>Why You Should Use Fossil</h1>

<p align='center'><b>Or, if not Fossil, at least some kind of modern
version control<br>such as Git, Mercurial, or Subversion.</b></p>

<p align='center'>(Presented in outline form, for people in a
hurry)</p>

<b>I. Benefits of Version Control</b>
<ol type='A'>
<li><p><b>Immutable file and version identification</b>
<ol type='i'>
<li>Simplified and unambiguous communication between developers
<li>Detect accidental or surreptitious changes
<li>Locate the origin of discovered files
</ol>
︙
<li>Everyone always has the latest code
<li>Failed disk-drives cause no loss of work
<li>Avoid wasting time doing manual file copying
<li>Avoid human errors during manual backups
</ol>
</ol>

<p id="definitions"><b>II. Definitions</b></p>

Moved to [./glossary.md | a separate document].

<p><b>III. Basic Fossil commands</b>
<ul>
<li><p><b>clone</b> → Make a copy of a repository.  The original
repository is usually (but not always) on a remote machine and the
copy is on the local machine.  The copy remembers the network
location from which it was copied and (by default) tries to keep
itself synchronized
︙
<li><p><b>rm/mv</b> → Short for 'remove' and 'move', these commands
are like "add" in that they specify pending changes to the structure
of the check-out.  As with "add", no changes are made to the
repository until the next "commit".
</ul>

<b>IV. The history of a project is a Directed Acyclic Graph (DAG)</b>
<ul>
<li><p>Fossil (and other distributed VCSes like Git and Mercurial,
but not Subversion) represent the history of a project as a directed
acyclic graph (DAG).
<ul>
<li><p>Each check-in is a node in the graph
︙
humans, so best practice is to give each branch a unique name.
<li><p>The name of a branch can be changed by adding special tags to
the first check-in of a branch.  The name assigned by this special
tag automatically propagates to all direct children.
</ul>
</ul>

<b>V. Why version control is important (reprise)</b>
<ol>
<li><p>Every check-in and every individual file has a unique name -
its SHA1 or SHA3-256 hash.  Team members can unambiguously identify
any specific version of the overall project or any specific version
of an individual file.
︙