Fossil

Check-in Differences

Difference From trunk To release

2024-05-08
12:27
Add the d2=, p2=, and dp2= query parameters to /timeline. ... (Leaf check-in: a080751e user: drh tags: trunk)
11:59
Import SQLite 3.46.0-beta-1 for testing. ... (check-in: c09fea32 user: drh tags: trunk)
2024-04-24
21:25
Fix or disable brittle test cases that were broken by changes in 2.23. ... (check-in: 5ad70808 user: drh tags: branch-2.24)
2024-04-23
13:43
Update the homepage for the 2.24 release. ... (check-in: dee02ab6 user: drh tags: trunk)
13:25
Version 2.24 ... (check-in: 8be0372c user: drh tags: trunk, release, version-2.24)
2024-04-22
16:29
cgi.md: be less specific about the Apache version in which the Content-Length change happened because a new forum post reports that it happens at least as far back as 2.4.41. ... (check-in: 9af5f386 user: stephan tags: trunk)

Changes to VERSION.

(trunk line 1 / release line 1)
- 2.25
+ 2.24

Changes to extsrc/shell.c.

(trunk lines 6000-6212 / release lines 6000-6160)
** once prior to any call to seriesColumn() or seriesRowid() or
** seriesEof().
**
** The query plan selected by seriesBestIndex is passed in the idxNum
** parameter.  (idxStr is not used in this implementation.)  idxNum
** is a bitmask showing which constraints are available:
**
**   0x01:    start=VALUE
**   0x02:    stop=VALUE
**   0x04:    step=VALUE
**   0x08:    descending order
**   0x10:    ascending order
**    1:    start=VALUE
**    2:    stop=VALUE
**    4:    step=VALUE
**
** Also, if bit 8 is set, that means that the series should be output
** in descending order rather than in ascending order.  If bit 16 is
** set, then output must appear in ascending order.
**   0x20:    LIMIT  VALUE
**   0x40:    OFFSET  VALUE
**
** This routine should initialize the cursor and position it so that it
** is pointing at the first row, or pointing off the end of the table
** (so that seriesEof() will return true) if the table is empty.
*/
static int seriesFilter(
  sqlite3_vtab_cursor *pVtabCursor,
  int idxNum, const char *idxStrUnused,
  int argc, sqlite3_value **argv
){
  series_cursor *pCur = (series_cursor *)pVtabCursor;
  int i = 0;
  (void)idxStrUnused;
  if( idxNum & 0x01 ){
  if( idxNum & 1 ){
    pCur->ss.iBase = sqlite3_value_int64(argv[i++]);
  }else{
    pCur->ss.iBase = 0;
  }
  if( idxNum & 0x02 ){
  if( idxNum & 2 ){
    pCur->ss.iTerm = sqlite3_value_int64(argv[i++]);
  }else{
    pCur->ss.iTerm = 0xffffffff;
  }
  if( idxNum & 0x04 ){
  if( idxNum & 4 ){
    pCur->ss.iStep = sqlite3_value_int64(argv[i++]);
    if( pCur->ss.iStep==0 ){
      pCur->ss.iStep = 1;
    }else if( pCur->ss.iStep<0 ){
      if( (idxNum & 0x10)==0 ) idxNum |= 0x08;
      if( (idxNum & 16)==0 ) idxNum |= 8;
    }
  }else{
    pCur->ss.iStep = 1;
  }
  if( idxNum & 0x20 ){
    sqlite3_int64 iLimit = sqlite3_value_int64(argv[i++]);
    sqlite3_int64 iTerm;
    if( idxNum & 0x40 ){
      sqlite3_int64 iOffset = sqlite3_value_int64(argv[i++]);
      if( iOffset>0 ){
        pCur->ss.iBase += pCur->ss.iStep*iOffset;
      }
    }
    if( iLimit>=0 ){
      iTerm = pCur->ss.iBase + (iLimit - 1)*pCur->ss.iStep;
      if( pCur->ss.iStep<0 ){
        if( iTerm>pCur->ss.iTerm ) pCur->ss.iTerm = iTerm;
      }else{
        if( iTerm<pCur->ss.iTerm ) pCur->ss.iTerm = iTerm;
      }
    }
  }
  for(i=0; i<argc; i++){
    if( sqlite3_value_type(argv[i])==SQLITE_NULL ){
      /* If any of the constraints have a NULL value, then return no rows.
      ** See ticket https://www.sqlite.org/src/info/fac496b61722daf2 */
      pCur->ss.iBase = 1;
      pCur->ss.iTerm = 0;
      pCur->ss.iStep = 1;
      break;
    }
  }
  if( idxNum & 0x08 ){
  if( idxNum & 8 ){
    pCur->ss.isReversing = pCur->ss.iStep > 0;
  }else{
    pCur->ss.isReversing = pCur->ss.iStep < 0;
  }
  setupSequence( &pCur->ss );
  return SQLITE_OK;
}
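
A minimal stand-alone sketch (not part of this check-in; the helper name is invented) of how the plan bitmask described in the comment above can be decoded:

#include <stdio.h>

/* Hypothetical helper: decode a generate_series query-plan bitmask using the
** bit assignments documented above (0x01 start=, 0x02 stop=, 0x04 step=,
** 0x08 descending, 0x10 ascending, 0x20 LIMIT, 0x40 OFFSET). */
static void printSeriesPlan(int idxNum){
  if( idxNum & 0x01 ) printf("  start= constraint supplied\n");
  if( idxNum & 0x02 ) printf("  stop= constraint supplied\n");
  if( idxNum & 0x04 ) printf("  step= constraint supplied\n");
  if( idxNum & 0x08 ) printf("  output in descending order\n");
  if( idxNum & 0x10 ) printf("  output in ascending order\n");
  if( idxNum & 0x20 ) printf("  LIMIT pushed into the virtual table\n");
  if( idxNum & 0x40 ) printf("  OFFSET pushed into the virtual table\n");
}

int main(void){
  /* A query such as "SELECT value FROM generate_series(1,100) LIMIT 5 OFFSET 2"
  ** would plausibly be planned with the start=, stop=, LIMIT and OFFSET bits. */
  printSeriesPlan(0x01|0x02|0x20|0x40);
  return 0;
}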

/*
** SQLite will invoke this method one or more times while planning a query
** that uses the generate_series virtual table.  This routine needs to create
** a query plan for each invocation and compute an estimated cost for that
** plan.
**
** In this implementation idxNum is used to represent the
** query plan.  idxStr is unused.
**
** The query plan is represented by bits in idxNum:
**
**   0x01  start = $value  -- constraint exists
**   0x02  stop = $value   -- constraint exists
**   0x04  step = $value   -- constraint exists
**   0x08  output is in descending order
**  (1)  start = $value  -- constraint exists
**  (2)  stop = $value   -- constraint exists
**  (4)  step = $value   -- constraint exists
**  (8)  output in descending order
**   0x10  output is in ascending order
**   0x20  LIMIT $value    -- constraint exists
**   0x40  OFFSET $value   -- constraint exists
*/
static int seriesBestIndex(
  sqlite3_vtab *pVTab,
  sqlite3_index_info *pIdxInfo
){
  int i, j;              /* Loop over constraints */
  int idxNum = 0;        /* The query plan bitmask */
#ifndef ZERO_ARGUMENT_GENERATE_SERIES
  int bStartSeen = 0;    /* EQ constraint seen on the START column */
#endif
  int unusableMask = 0;  /* Mask of unusable constraints */
  int nArg = 0;          /* Number of arguments that seriesFilter() expects */
  int aIdx[5];           /* Constraints on start, stop, step, LIMIT, OFFSET */
  int aIdx[3];           /* Constraints on start, stop, and step */
  const struct sqlite3_index_constraint *pConstraint;

  /* This implementation assumes that the start, stop, and step columns
  ** are the last three columns in the virtual table. */
  assert( SERIES_COLUMN_STOP == SERIES_COLUMN_START+1 );
  assert( SERIES_COLUMN_STEP == SERIES_COLUMN_START+2 );

  aIdx[0] = aIdx[1] = aIdx[2] = aIdx[3] = aIdx[4] = -1;
  aIdx[0] = aIdx[1] = aIdx[2] = -1;
  pConstraint = pIdxInfo->aConstraint;
  for(i=0; i<pIdxInfo->nConstraint; i++, pConstraint++){
    int iCol;    /* 0 for start, 1 for stop, 2 for step */
    int iMask;   /* bitmask for those column */
    int op = pConstraint->op;
    if( op>=SQLITE_INDEX_CONSTRAINT_LIMIT
     && op<=SQLITE_INDEX_CONSTRAINT_OFFSET
    ){
      if( pConstraint->usable==0 ){
        /* do nothing */
      }else if( op==SQLITE_INDEX_CONSTRAINT_LIMIT ){
        aIdx[3] = i;
        idxNum |= 0x20;
      }else{
        assert( op==SQLITE_INDEX_CONSTRAINT_OFFSET );
        aIdx[4] = i;
        idxNum |= 0x40;
      }
      continue;
    }
    if( pConstraint->iColumn<SERIES_COLUMN_START ) continue;
    iCol = pConstraint->iColumn - SERIES_COLUMN_START;
    assert( iCol>=0 && iCol<=2 );
    iMask = 1 << iCol;
#ifndef ZERO_ARGUMENT_GENERATE_SERIES
    if( iCol==0 && op==SQLITE_INDEX_CONSTRAINT_EQ ){
      bStartSeen = 1;
    if( iCol==0 ) bStartSeen = 1;
    }
#endif
    if( pConstraint->usable==0 ){
      unusableMask |=  iMask;
      continue;
    }else if( op==SQLITE_INDEX_CONSTRAINT_EQ ){
    }else if( pConstraint->op==SQLITE_INDEX_CONSTRAINT_EQ ){
      idxNum |= iMask;
      aIdx[iCol] = i;
    }
  }
  if( aIdx[3]==0 ){
    /* Ignore OFFSET if LIMIT is omitted */
    idxNum &= ~0x60;
    aIdx[4] = 0;
  }
  for(i=0; i<5; i++){
  for(i=0; i<3; i++){
    if( (j = aIdx[i])>=0 ){
      pIdxInfo->aConstraintUsage[j].argvIndex = ++nArg;
      pIdxInfo->aConstraintUsage[j].omit =
      pIdxInfo->aConstraintUsage[j].omit = !SQLITE_SERIES_CONSTRAINT_VERIFY;
         !SQLITE_SERIES_CONSTRAINT_VERIFY || i>=3;
    }
  }
  /* The current generate_column() implementation requires at least one
  ** argument (the START value).  Legacy versions assumed START=0 if the
  ** first argument was omitted.  Compile with -DZERO_ARGUMENT_GENERATE_SERIES
  ** to obtain the legacy behavior */
#ifndef ZERO_ARGUMENT_GENERATE_SERIES
  if( !bStartSeen ){
    sqlite3_free(pVTab->zErrMsg);
    pVTab->zErrMsg = sqlite3_mprintf(
        "first argument to \"generate_series()\" missing or unusable");
    return SQLITE_ERROR;
  }
#endif
  if( (unusableMask & ~idxNum)!=0 ){
    /* The start, stop, and step columns are inputs.  Therefore if there
    ** are unusable constraints on any of start, stop, or step then
    ** this plan is unusable */
    return SQLITE_CONSTRAINT;
  }
  if( (idxNum & 0x03)==0x03 ){
  if( (idxNum & 3)==3 ){
    /* Both start= and stop= boundaries are available.  This is the 
    ** the preferred case */
    pIdxInfo->estimatedCost = (double)(2 - ((idxNum&4)!=0));
    pIdxInfo->estimatedRows = 1000;
    if( pIdxInfo->nOrderBy>=1 && pIdxInfo->aOrderBy[0].iColumn==0 ){
      if( pIdxInfo->aOrderBy[0].desc ){
        idxNum |= 0x08;
        idxNum |= 8;
      }else{
        idxNum |= 0x10;
        idxNum |= 16;
      }
      pIdxInfo->orderByConsumed = 1;
    }
  }else if( (idxNum & 0x21)==0x21 ){
    /* We have start= and LIMIT */
    pIdxInfo->estimatedRows = 2500;
  }else{
    /* If either boundary is missing, we have to generate a huge span
    ** of numbers.  Make this case very expensive so that the query
    ** planner will work hard to avoid it. */
    pIdxInfo->estimatedRows = 2147483647;
  }
  pIdxInfo->idxNum = idxNum;
(trunk lines 7525-7541 / release lines 7473-7487)
  mode_t mode,                    /* MODE parameter passed to writefile() */
  sqlite3_int64 mtime             /* MTIME parameter (or -1 to not set time) */
){
  if( zFile==0 ) return 1;
#if !defined(_WIN32) && !defined(WIN32)
  if( S_ISLNK(mode) ){
    const char *zTo = (const char*)sqlite3_value_text(pData);
    if( zTo==0 ) return 1;
    unlink(zFile);
    if( symlink(zTo, zFile)<0 ) return 1;
    if( zTo==0 || symlink(zTo, zFile)<0 ) return 1;
  }else
#endif
  {
    if( S_ISDIR(mode) ){
      if( mkdir(zFile, mode) ){
        /* The mkdir() call to create the directory failed. This might not
        ** be an error though - if there is already a directory at the same
(trunk lines 7613-7639 / release lines 7559-7579)
    times[0].tv_nsec = times[1].tv_nsec = 0;
    times[0].tv_sec = time(0);
    times[1].tv_sec = mtime;
    if( utimensat(AT_FDCWD, zFile, times, AT_SYMLINK_NOFOLLOW) ){
      return 1;
    }
#else
    /* Legacy unix. 
    /* Legacy unix */
    **
    ** Do not use utimes() on a symbolic link - it sees through the link and
    ** modifies the timestamps on the target. Or fails if the target does 
    ** not exist.  */
    if( 0==S_ISLNK(mode) ){
      struct timeval times[2];
      times[0].tv_usec = times[1].tv_usec = 0;
      times[0].tv_sec = time(0);
      times[1].tv_sec = mtime;
      if( utimes(zFile, times) ){
        return 1;
    struct timeval times[2];
    times[0].tv_usec = times[1].tv_usec = 0;
    times[0].tv_sec = time(0);
    times[1].tv_sec = mtime;
    if( utimes(zFile, times) ){
      return 1;
      }
    }
#endif
  }

  return 0;
}
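
For reference, a minimal POSIX-2008 sketch (the helper name is invented; assumes utimensat() is available) of the symlink-safe timestamp update performed in the block above:

#define _POSIX_C_SOURCE 200809L
#include <fcntl.h>
#include <sys/stat.h>
#include <time.h>

/* Invented helper: set the modification time of zFile without following a
** symbolic link, mirroring the utimensat() branch above. Returns 0 on success. */
static int setMtimeNoFollow(const char *zFile, long long mtime){
  struct timespec times[2];
  times[0].tv_nsec = times[1].tv_nsec = 0;
  times[0].tv_sec = time(0);        /* access time = now */
  times[1].tv_sec = (time_t)mtime;  /* modification time = requested value */
  return utimensat(AT_FDCWD, zFile, times, AT_SYMLINK_NOFOLLOW) ? 1 : 0;
}

int main(int argc, char **argv){
  /* Example: backdate the named file's mtime by one hour. */
  return argc>1 ? setMtimeNoFollow(argv[1], (long long)time(0)-3600) : 0;
}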

(trunk lines 11697-11727 / release lines 11637-11666)
*/
static void sqlarUncompressFunc(
  sqlite3_context *context,
  int argc,
  sqlite3_value **argv
){
  uLong nData;
  sqlite3_int64 sz;
  uLongf sz;

  assert( argc==2 );
  sz = sqlite3_value_int(argv[1]);

  if( sz<=0 || sz==(nData = sqlite3_value_bytes(argv[0])) ){
    sqlite3_result_value(context, argv[0]);
  }else{
    uLongf szf = sz;
    const Bytef *pData= sqlite3_value_blob(argv[0]);
    Bytef *pOut = sqlite3_malloc(sz);
    if( pOut==0 ){
      sqlite3_result_error_nomem(context);
    }else if( Z_OK!=uncompress(pOut, &szf, pData, nData) ){
    }else if( Z_OK!=uncompress(pOut, &sz, pData, nData) ){
      sqlite3_result_error(context, "error in uncompress()", -1);
    }else{
      sqlite3_result_blob(context, pOut, szf, SQLITE_TRANSIENT);
      sqlite3_result_blob(context, pOut, sz, SQLITE_TRANSIENT);
    }
    sqlite3_free(pOut);
  }
}
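
A small self-contained illustration (not from the check-in) of the zlib calling convention involved here: uncompress() reports the output size through a uLongf, which is why the revised function keeps the declared size in a separate szf variable:

#include <stdio.h>
#include <string.h>
#include <zlib.h>

/* Compress a short string, then expand it again, using the same
** compress()/uncompress() pattern as sqlar_compress()/sqlar_uncompress(). */
int main(void){
  const char *zIn = "hello, sqlar";
  Bytef aCmp[128];
  Bytef aOut[128];
  uLongf nCmp = sizeof(aCmp);   /* in: capacity, out: compressed size */
  uLongf nOut = sizeof(aOut);   /* in: capacity, out: uncompressed size */
  if( compress(aCmp, &nCmp, (const Bytef*)zIn, (uLong)strlen(zIn)+1)!=Z_OK ) return 1;
  if( uncompress(aOut, &nOut, aCmp, nCmp)!=Z_OK ) return 1;
  printf("%s (%lu bytes)\n", (char*)aOut, (unsigned long)nOut);
  return 0;
}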

#ifdef _WIN32

(trunk lines 15535-15574 / release lines 15474-15504)
#ifndef SQLITE_OMIT_VIRTUALTABLE

#define DBDATA_PADDING_BYTES 100 

typedef struct DbdataTable DbdataTable;
typedef struct DbdataCursor DbdataCursor;
typedef struct DbdataBuffer DbdataBuffer;

/*
** Buffer type.
*/
struct DbdataBuffer {
  u8 *aBuf;
  sqlite3_int64 nBuf;
};

/* Cursor object */
struct DbdataCursor {
  sqlite3_vtab_cursor base;       /* Base class.  Must be first */
  sqlite3_stmt *pStmt;            /* For fetching database pages */

  int iPgno;                      /* Current page number */
  u8 *aPage;                      /* Buffer containing page */
  int nPage;                      /* Size of aPage[] in bytes */
  int nCell;                      /* Number of cells on aPage[] */
  int iCell;                      /* Current cell number */
  int bOnePage;                   /* True to stop after one page */
  int szDb;
  sqlite3_int64 iRowid;

  /* Only for the sqlite_dbdata table */
  DbdataBuffer rec;
  u8 *pRec;                       /* Buffer containing current record */
  sqlite3_int64 nRec;             /* Size of pRec[] in bytes */
  sqlite3_int64 nHdr;             /* Size of header in bytes */
  int iField;                     /* Current field number */
  u8 *pHdrPtr;
  u8 *pPtr;
  u32 enc;                        /* Text encoding */
  
(trunk lines 15605-15643 / release lines 15535-15548)
#define DBPTR_SCHEMA              \
      "CREATE TABLE x("           \
      "  pgno INTEGER,"           \
      "  child INTEGER,"          \
      "  schema TEXT HIDDEN"      \
      ")"

/*
** Ensure the buffer passed as the first argument is at least nMin bytes
** in size. If an error occurs while attempting to resize the buffer,
** SQLITE_NOMEM is returned. Otherwise, SQLITE_OK.
*/
static int dbdataBufferSize(DbdataBuffer *pBuf, sqlite3_int64 nMin){
  if( nMin>pBuf->nBuf ){
    sqlite3_int64 nNew = nMin+16384;
    u8 *aNew = (u8*)sqlite3_realloc64(pBuf->aBuf, nNew);

    if( aNew==0 ) return SQLITE_NOMEM;
    pBuf->aBuf = aNew;
    pBuf->nBuf = nNew;
  }
  return SQLITE_OK;
}

/*
** Release the allocation managed by buffer pBuf.
*/
static void dbdataBufferFree(DbdataBuffer *pBuf){
  sqlite3_free(pBuf->aBuf);
  memset(pBuf, 0, sizeof(*pBuf));
}
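
A hedged usage sketch (the wrapper name, aSrc, and nPayload are invented here) showing how the two helpers above are meant to be paired, in the same way dbdataNext() uses them further down:

/* Hypothetical wrapper: copy nPayload bytes from aSrc into pBuf, growing the
** buffer first and zeroing the padding region, as dbdataNext() does below. */
static int dbdataBufferCopy(DbdataBuffer *pBuf, const u8 *aSrc, sqlite3_int64 nPayload){
  int rc = dbdataBufferSize(pBuf, nPayload + DBDATA_PADDING_BYTES);
  if( rc==SQLITE_OK ){
    memcpy(pBuf->aBuf, aSrc, (size_t)nPayload);
    memset(&pBuf->aBuf[nPayload], 0, DBDATA_PADDING_BYTES);
  }
  return rc;
}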

/*
** Connect to an sqlite_dbdata (pAux==0) or sqlite_dbptr (pAux!=0) virtual 
** table.
*/
static int dbdataConnect(
  sqlite3 *db,
  void *pAux,
(trunk lines 15770-15784 / release lines 15675-15690)
  }
  pCsr->pStmt = 0;
  pCsr->iPgno = 1;
  pCsr->iCell = 0;
  pCsr->iField = 0;
  pCsr->bOnePage = 0;
  sqlite3_free(pCsr->aPage);
  sqlite3_free(pCsr->pRec);
  dbdataBufferFree(&pCsr->rec);
  pCsr->pRec = 0;
  pCsr->aPage = 0;
}

/*
** Close an sqlite_dbdata or sqlite_dbptr cursor.
*/
static int dbdataClose(sqlite3_vtab_cursor *pCursor){
(trunk lines 16031-16045 / release lines 15937-15951)
        if( pCsr->bOnePage ) return SQLITE_OK;
        pCsr->iPgno++;
      }else{
        return SQLITE_OK;
      }
    }else{
      /* If there is no record loaded, load it now. */
      if( pCsr->nRec==0 ){
      if( pCsr->pRec==0 ){
        int bHasRowid = 0;
        int nPointer = 0;
        sqlite3_int64 nPayload = 0;
        sqlite3_int64 nHdr = 0;
        int iHdr;
        int U, X;
        int nLocal;
(trunk lines 16075-16089 / release lines 15981-15994)
    
          /* Load the "byte of payload including overflow" field */
          if( bNextPage || iOff>pCsr->nPage || iOff<=iCellPtr ){
            bNextPage = 1;
          }else{
            iOff += dbdataGetVarintU32(&pCsr->aPage[iOff], &nPayload);
            if( nPayload>0x7fffff00 ) nPayload &= 0x3fff;
            if( nPayload==0 ) nPayload = 1;
          }
    
          /* If this is a leaf intkey cell, load the rowid */
          if( bHasRowid && !bNextPage && iOff<pCsr->nPage ){
            iOff += dbdataGetVarint(&pCsr->aPage[iOff], &pCsr->iIntkey);
          }
    
(trunk lines 16110-16200 / release lines 16015-16105)
          if( bNextPage || nLocal+iOff>pCsr->nPage ){
            bNextPage = 1;
          }else{

            /* Allocate space for payload. And a bit more to catch small buffer
            ** overruns caused by attempting to read a varint or similar from 
            ** near the end of a corrupt record.  */
            rc = dbdataBufferSize(&pCsr->rec, nPayload+DBDATA_PADDING_BYTES);
            if( rc!=SQLITE_OK ) return rc;
            assert( nPayload!=0 );
            pCsr->pRec = (u8*)sqlite3_malloc64(nPayload+DBDATA_PADDING_BYTES);
            if( pCsr->pRec==0 ) return SQLITE_NOMEM;
            memset(pCsr->pRec, 0, nPayload+DBDATA_PADDING_BYTES);
            pCsr->nRec = nPayload;

            /* Load the nLocal bytes of payload */
            memcpy(pCsr->rec.aBuf, &pCsr->aPage[iOff], nLocal);
            memcpy(pCsr->pRec, &pCsr->aPage[iOff], nLocal);
            iOff += nLocal;

            /* Load content from overflow pages */
            if( nPayload>nLocal ){
              sqlite3_int64 nRem = nPayload - nLocal;
              u32 pgnoOvfl = get_uint32(&pCsr->aPage[iOff]);
              while( nRem>0 ){
                u8 *aOvfl = 0;
                int nOvfl = 0;
                int nCopy;
                rc = dbdataLoadPage(pCsr, pgnoOvfl, &aOvfl, &nOvfl);
                assert( rc!=SQLITE_OK || aOvfl==0 || nOvfl==pCsr->nPage );
                if( rc!=SQLITE_OK ) return rc;
                if( aOvfl==0 ) break;

                nCopy = U-4;
                if( nCopy>nRem ) nCopy = nRem;
                memcpy(&pCsr->rec.aBuf[nPayload-nRem], &aOvfl[4], nCopy);
                memcpy(&pCsr->pRec[nPayload-nRem], &aOvfl[4], nCopy);
                nRem -= nCopy;

                pgnoOvfl = get_uint32(aOvfl);
                sqlite3_free(aOvfl);
              }
              nPayload -= nRem;
            }
            memset(&pCsr->rec.aBuf[nPayload], 0, DBDATA_PADDING_BYTES);
            pCsr->nRec = nPayload;
    
            iHdr = dbdataGetVarintU32(pCsr->rec.aBuf, &nHdr);
            iHdr = dbdataGetVarintU32(pCsr->pRec, &nHdr);
            if( nHdr>nPayload ) nHdr = 0;
            pCsr->nHdr = nHdr;
            pCsr->pHdrPtr = &pCsr->rec.aBuf[iHdr];
            pCsr->pPtr = &pCsr->rec.aBuf[pCsr->nHdr];
            pCsr->pHdrPtr = &pCsr->pRec[iHdr];
            pCsr->pPtr = &pCsr->pRec[pCsr->nHdr];
            pCsr->iField = (bHasRowid ? -1 : 0);
          }
        }
      }else{
        pCsr->iField++;
        if( pCsr->iField>0 ){
          sqlite3_int64 iType;
          if( pCsr->pHdrPtr>=&pCsr->rec.aBuf[pCsr->nRec] 
          if( pCsr->pHdrPtr>=&pCsr->pRec[pCsr->nRec] 
           || pCsr->iField>=DBDATA_MX_FIELD
          ){
            bNextPage = 1;
          }else{
            int szField = 0;
            pCsr->pHdrPtr += dbdataGetVarintU32(pCsr->pHdrPtr, &iType);
            szField = dbdataValueBytes(iType);
            if( (pCsr->nRec - (pCsr->pPtr - pCsr->rec.aBuf))<szField ){
              pCsr->pPtr = &pCsr->rec.aBuf[pCsr->nRec];
            if( (pCsr->nRec - (pCsr->pPtr - pCsr->pRec))<szField ){
              pCsr->pPtr = &pCsr->pRec[pCsr->nRec];
            }else{
              pCsr->pPtr += szField;
            }
          }
        }
      }

      if( bNextPage ){
        sqlite3_free(pCsr->aPage);
        sqlite3_free(pCsr->pRec);
        pCsr->aPage = 0;
        pCsr->nRec = 0;
        pCsr->pRec = 0;
        if( pCsr->bOnePage ) return SQLITE_OK;
        pCsr->iPgno++;
      }else{
        if( pCsr->iField<0 || pCsr->pHdrPtr<&pCsr->rec.aBuf[pCsr->nHdr] ){
        if( pCsr->iField<0 || pCsr->pHdrPtr<&pCsr->pRec[pCsr->nHdr] ){
          return SQLITE_OK;
        }

        /* Advance to the next cell. The next iteration of the loop will load
        ** the record and so on. */
        sqlite3_free(pCsr->pRec);
        pCsr->nRec = 0;
        pCsr->pRec = 0;
        pCsr->iCell++;
      }
    }
  }

  assert( !"can't get here" );
  return SQLITE_OK;
(trunk lines 16376-16395 / release lines 16281-16300)
        break;
      case DBDATA_COLUMN_FIELD:
        sqlite3_result_int(ctx, pCsr->iField);
        break;
      case DBDATA_COLUMN_VALUE: {
        if( pCsr->iField<0 ){
          sqlite3_result_int64(ctx, pCsr->iIntkey);
        }else if( &pCsr->rec.aBuf[pCsr->nRec] >= pCsr->pPtr ){
        }else if( &pCsr->pRec[pCsr->nRec] >= pCsr->pPtr ){
          sqlite3_int64 iType;
          dbdataGetVarintU32(pCsr->pHdrPtr, &iType);
          dbdataValue(
              ctx, pCsr->enc, iType, pCsr->pPtr, 
              &pCsr->rec.aBuf[pCsr->nRec] - pCsr->pPtr
              &pCsr->pRec[pCsr->nRec] - pCsr->pPtr
          );
        }
        break;
      }
    }
  }
  return SQLITE_OK;

Changes to extsrc/sqlite3.c.

(trunk lines 14-28 / release lines 14-28)
** the text of this file.  Search for "Begin file sqlite3.h" to find the start
** of the embedded sqlite3.h header file.) Additional code files may be needed
** if you want a wrapper to interface SQLite with your choice of programming
** language. The code for the "sqlite3" command-line shell is also in a
** separate file. This file contains only code for the core SQLite library.
**
** The content in this amalgamation comes from Fossil check-in
** 42d67c6fed3a5f21d7b71515aca471ba61d3.
** 8c0f69e0e4ae0a446838cc193bfd4395fd25.
*/
#define SQLITE_CORE 1
#define SQLITE_AMALGAMATION 1
#ifndef SQLITE_PRIVATE
# define SQLITE_PRIVATE static
#endif
/************** Begin file sqliteInt.h ***************************************/
(trunk lines 457-471 / release lines 457-471)
**
** See also: [sqlite3_libversion()],
** [sqlite3_libversion_number()], [sqlite3_sourceid()],
** [sqlite_version()] and [sqlite_source_id()].
*/
#define SQLITE_VERSION        "3.46.0"
#define SQLITE_VERSION_NUMBER 3046000
#define SQLITE_SOURCE_ID      "2024-05-08 11:51:56 42d67c6fed3a5f21d7b71515aca471ba61d387e620022735a2e7929fa3a237cf"
#define SQLITE_SOURCE_ID      "2024-04-18 16:11:01 8c0f69e0e4ae0a446838cc193bfd4395fd251f3c7659b35ac388e5a0a7650a66"

/*
** CAPI3REF: Run-Time Library Version Numbers
** KEYWORDS: sqlite3_version sqlite3_sourceid
**
** These interfaces provide the same information as the [SQLITE_VERSION],
** [SQLITE_VERSION_NUMBER], and [SQLITE_SOURCE_ID] C preprocessor macros
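
As context, the macros and functions named in this comment can be compared at run time; a minimal stand-alone program using only the public SQLite API:

#include <stdio.h>
#include <sqlite3.h>

/* Print the version the program was compiled against (header macros) and the
** version of the library actually linked (runtime functions). */
int main(void){
  printf("header:  %s (%d)\n", SQLITE_VERSION, SQLITE_VERSION_NUMBER);
  printf("library: %s (%d)\n", sqlite3_libversion(), sqlite3_libversion_number());
  printf("source:  %s\n", sqlite3_sourceid());
  return 0;
}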
(trunk lines 12312-12349 / release lines 12312-12325)
** detected, SQLITE_CORRUPT is returned. Or, if an out-of-memory condition
** occurs during processing, this function returns SQLITE_NOMEM.
**
** In all cases, if an error occurs the state of the final contents of the
** changegroup is undefined. If no error occurs, SQLITE_OK is returned.
*/
SQLITE_API int sqlite3changegroup_add(sqlite3_changegroup*, int nData, void *pData);

/*
** CAPI3REF: Add A Single Change To A Changegroup
** METHOD: sqlite3_changegroup
**
** This function adds the single change currently indicated by the iterator
** passed as the second argument to the changegroup object. The rules for
** adding the change are just as described for [sqlite3changegroup_add()].
**
** If the change is successfully added to the changegroup, SQLITE_OK is
** returned. Otherwise, an SQLite error code is returned.
**
** The iterator must point to a valid entry when this function is called.
** If it does not, SQLITE_ERROR is returned and no change is added to the
** changegroup. Additionally, the iterator must not have been opened with
** the SQLITE_CHANGESETAPPLY_INVERT flag. In this case SQLITE_ERROR is also
** returned.
*/
SQLITE_API int sqlite3changegroup_add_change(
  sqlite3_changegroup*,
  sqlite3_changeset_iter*
);
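
A hedged sketch (not code from this check-in; error handling abbreviated) of how the interface documented above might be used together with sqlite3changeset_next() to fold every change visited by an iterator into a changegroup:

/* Add each change visited by pIter to pGrp using sqlite3changegroup_add_change().
** Assumes pIter was opened without SQLITE_CHANGESETAPPLY_INVERT, as required above. */
static int addAllChanges(sqlite3_changegroup *pGrp, sqlite3_changeset_iter *pIter){
  int rc = SQLITE_OK;
  while( sqlite3changeset_next(pIter)==SQLITE_ROW ){
    rc = sqlite3changegroup_add_change(pGrp, pIter);
    if( rc!=SQLITE_OK ) break;
  }
  return rc;
}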



/*
** CAPI3REF: Obtain A Composite Changeset From A Changegroup
** METHOD: sqlite3_changegroup
**
** Obtain a buffer containing a changeset (or patchset) representing the
** current contents of the changegroup. If the inputs to the changegroup
(trunk lines 14634-14649 / release lines 14610-14625)
#define TK_FILTER                         166
#define TK_COLUMN                         167
#define TK_AGG_FUNCTION                   168
#define TK_AGG_COLUMN                     169
#define TK_TRUEFALSE                      170
#define TK_ISNOT                          171
#define TK_FUNCTION                       172
#define TK_UPLUS                          173
#define TK_UMINUS                         174
#define TK_UMINUS                         173
#define TK_UPLUS                          174
#define TK_TRUTH                          175
#define TK_REGISTER                       176
#define TK_VECTOR                         177
#define TK_SELECT_COLUMN                  178
#define TK_IF_NULL_ROW                    179
#define TK_ASTERISK                       180
#define TK_SPAN                           181
(trunk lines 31967-31981 / release lines 31943-31957)
            sqlite3_str_append(pAccum, ".", 1);
          }
          sqlite3_str_appendall(pAccum, pItem->zName);
        }else if( pItem->zAlias ){
          sqlite3_str_appendall(pAccum, pItem->zAlias);
        }else{
          Select *pSel = pItem->pSelect;
          assert( pSel!=0 ); /* Because of tag-20240424-1 */
          assert( pSel!=0 );
          if( pSel->selFlags & SF_NestedFrom ){
            sqlite3_str_appendf(pAccum, "(join-%u)", pSel->selId);
          }else if( pSel->selFlags & SF_MultiValue ){
            assert( !pItem->fg.isTabFunc && !pItem->fg.isIndexedBy );
            sqlite3_str_appendf(pAccum, "%u-ROW VALUES CLAUSE",
                                pItem->u1.nRow);
          }else{
(trunk lines 32791-32812 / release lines 32767-32786)
    if( pItem->pSelect ) n++;
    if( pItem->fg.isTabFunc ) n++;
    if( pItem->fg.isUsing ) n++;
    if( pItem->fg.isUsing ){
      sqlite3TreeViewIdList(pView, pItem->u3.pUsing, (--n)>0, "USING");
    }
    if( pItem->pSelect ){
      sqlite3TreeViewPush(&pView, i+1<pSrc->nSrc);
      if( pItem->pTab ){
        Table *pTab = pItem->pTab;
        sqlite3TreeViewColumnList(pView, pTab->aCol, pTab->nCol, 1);
      }
      assert( (int)pItem->fg.isNestedFrom == IsNestedFrom(pItem->pSelect) );
      sqlite3TreeViewSelect(pView, pItem->pSelect, (--n)>0);
      sqlite3TreeViewPop(&pView);
    }
    if( pItem->fg.isTabFunc ){
      sqlite3TreeViewExprList(pView, pItem->u1.pFuncArg, 0, "func-args:");
    }
    sqlite3TreeViewPop(&pView);
  }
}
(trunk lines 32902-32916 / release lines 32876-32890)
    if( p->pOrderBy ){
      sqlite3TreeViewExprList(pView, p->pOrderBy, (n--)>0, "ORDERBY");
    }
    if( p->pLimit ){
      sqlite3TreeViewItem(pView, "LIMIT", (n--)>0);
      sqlite3TreeViewExpr(pView, p->pLimit->pLeft, p->pLimit->pRight!=0);
      if( p->pLimit->pRight ){
        sqlite3TreeViewItem(pView, "OFFSET", 0);
        sqlite3TreeViewItem(pView, "OFFSET", (n--)>0);
        sqlite3TreeViewExpr(pView, p->pLimit->pRight, 0);
        sqlite3TreeViewPop(&pView);
      }
      sqlite3TreeViewPop(&pView);
    }
    if( p->pPrior ){
      const char *zOp = "UNION";
(trunk lines 84701-84718 / release lines 84675-84692)
  sqlite3 *db,                    /* Database handle */
  const void *pRec,               /* Pointer to buffer containing record */
  int nRec,                       /* Size of buffer pRec in bytes */
  int iCol,                       /* Column to extract */
  sqlite3_value **ppVal           /* OUT: Extracted value */
){
  u32 t = 0;                      /* a column type code */
  u32 nHdr;                       /* Size of the header in the record */
  u32 iHdr;                       /* Next unread header byte */
  i64 iField;                     /* Next unread data byte */
  u32 szField = 0;                /* Size of the current data field */
  int nHdr;                       /* Size of the header in the record */
  int iHdr;                       /* Next unread header byte */
  int iField;                     /* Next unread data byte */
  int szField = 0;                /* Size of the current data field */
  int i;                          /* Column index */
  u8 *a = (u8*)pRec;              /* Typecast byte array */
  Mem *pMem = *ppVal;             /* Write result into this Mem object */

  assert( iCol>0 );
  iHdr = getVarint32(a, nHdr);
  if( nHdr>nRec || iHdr>=nHdr ) return SQLITE_CORRUPT_BKPT;
(trunk lines 97665-97680 / release lines 97639-97653)
**
** A pseudo-table created by this opcode is used to hold a single
** row output from the sorter so that the row can be decomposed into
** individual columns using the OP_Column opcode.  The OP_Column opcode
** is the only cursor opcode that works with a pseudo-table.
**
** P3 is the number of fields in the records that will be stored by
** the pseudo-table.  If P2 is 0 or negative then the pseudo-cursor
** the pseudo-table.
** will return NULL for every column.
*/
case OP_OpenPseudo: {
  VdbeCursor *pCx;

  assert( pOp->p1>=0 );
  assert( pOp->p3>=0 );
  pCx = allocateCursor(p, pOp->p1, pOp->p3, CURTYPE_PSEUDO);
(trunk lines 105824-105841 / release lines 105797-105814)
         sqlite3_result_text(ctx, "(FK)", 4, SQLITE_STATIC);
      }
      break;
    }

#ifdef SQLITE_ENABLE_STMT_SCANSTATUS
    case 9:     /* nexec */
      sqlite3_result_int64(ctx, pOp->nExec);
      sqlite3_result_int(ctx, pOp->nExec);
      break;
    case 10:    /* ncycle */
      sqlite3_result_int64(ctx, pOp->nCycle);
      sqlite3_result_int(ctx, pOp->nCycle);
      break;
#else
    case 9:     /* nexec */
    case 10:    /* ncycle */
      sqlite3_result_int(ctx, 0);
      break;
#endif
(trunk lines 107220-107235 / release lines 107193-107207)
#ifndef SQLITE_OMIT_TRIGGER
      if( pParse->pTriggerTab!=0 ){
        int op = pParse->eTriggerOp;
        assert( op==TK_DELETE || op==TK_UPDATE || op==TK_INSERT );
        if( pParse->bReturning ){
          if( (pNC->ncFlags & NC_UBaseReg)!=0
           && ALWAYS(zTab==0
                     || sqlite3StrICmp(zTab,pParse->pTriggerTab->zName)==0
                     || sqlite3StrICmp(zTab,pParse->pTriggerTab->zName)==0)
                     || isValidSchemaTableName(zTab, pParse->pTriggerTab, 0))
          ){
            pExpr->iTable = op!=TK_DELETE;
            pTab = pParse->pTriggerTab;
          }
        }else if( op!=TK_DELETE && zTab && sqlite3StrICmp("new",zTab) == 0 ){
          pExpr->iTable = 1;
          pTab = pParse->pTriggerTab;
(trunk lines 108582-108596 / release lines 108554-108567)
    }

    /* Recursively resolve names in all subqueries in the FROM clause
    */
    if( pOuterNC ) pOuterNC->nNestedSelect++;
    for(i=0; i<p->pSrc->nSrc; i++){
      SrcItem *pItem = &p->pSrc->a[i];
      assert( pItem->zName!=0 || pItem->pSelect!=0 );/* Test of tag-20240424-1*/
      if( pItem->pSelect && (pItem->pSelect->selFlags & SF_Resolved)==0 ){
        int nRef = pOuterNC ? pOuterNC->nRef : 0;
        const char *zSavedContext = pParse->zAuthContext;

        if( pItem->zName ) pParse->zAuthContext = pItem->zName;
        sqlite3ResolveSelectNames(pParse, pItem->pSelect, pOuterNC);
        pParse->zAuthContext = zSavedContext;
(trunk lines 110342-110398 / release lines 110313-110356)
/*
** Recursively delete an expression tree.
*/
static SQLITE_NOINLINE void sqlite3ExprDeleteNN(sqlite3 *db, Expr *p){
  assert( p!=0 );
  assert( db!=0 );
exprDeleteRestart:
  assert( !ExprUseUValue(p) || p->u.iValue>=0 );
  assert( !ExprUseYWin(p) || !ExprUseYSub(p) );
  assert( !ExprUseYWin(p) || p->y.pWin!=0 || db->mallocFailed );
  assert( p->op!=TK_FUNCTION || !ExprUseYSub(p) );
#ifdef SQLITE_DEBUG
  if( ExprHasProperty(p, EP_Leaf) && !ExprHasProperty(p, EP_TokenOnly) ){
    assert( p->pLeft==0 );
    assert( p->pRight==0 );
    assert( !ExprUseXSelect(p) || p->x.pSelect==0 );
    assert( !ExprUseXList(p) || p->x.pList==0 );
  }
#endif
  if( !ExprHasProperty(p, (EP_TokenOnly|EP_Leaf)) ){
    /* The Expr.x union is never used at the same time as Expr.pRight */
    assert( (ExprUseXList(p) && p->x.pList==0) || p->pRight==0 );
    if( p->pLeft && p->op!=TK_SELECT_COLUMN ) sqlite3ExprDeleteNN(db, p->pLeft);
    if( p->pRight ){
      assert( !ExprHasProperty(p, EP_WinFunc) );
      sqlite3ExprDeleteNN(db, p->pRight);
    }else if( ExprUseXSelect(p) ){
      assert( !ExprHasProperty(p, EP_WinFunc) );
      sqlite3SelectDelete(db, p->x.pSelect);
    }else{
      sqlite3ExprListDelete(db, p->x.pList);
#ifndef SQLITE_OMIT_WINDOWFUNC
      if( ExprHasProperty(p, EP_WinFunc) ){
        sqlite3WindowDelete(db, p->y.pWin);
      }
#endif
    }
    if( p->pLeft && p->op!=TK_SELECT_COLUMN ){
      Expr *pLeft = p->pLeft;
      if( !ExprHasProperty(p, EP_Static)
       && !ExprHasProperty(pLeft, EP_Static)
      ){
        /* Avoid unnecessary recursion on unary operators */
        sqlite3DbNNFreeNN(db, p);
        p = pLeft;
        goto exprDeleteRestart;
      }else{
        sqlite3ExprDeleteNN(db, pLeft);
      }
    }
  }
  if( !ExprHasProperty(p, EP_Static) ){
    sqlite3DbNNFreeNN(db, p);
  }
}
SQLITE_PRIVATE void sqlite3ExprDelete(sqlite3 *db, Expr *p){
  if( p ) sqlite3ExprDeleteNN(db, p);
(trunk lines 111380-111394 / release lines 111338-111352)
   || pDef->xFinalize!=0
   || (pDef->funcFlags & (SQLITE_FUNC_CONSTANT|SQLITE_FUNC_SLOCHNG))==0
   || ExprHasProperty(pExpr, EP_WinFunc)
  ){
    pWalker->eCode = 0;
    return WRC_Abort;
  }
  return WRC_Prune;
  return WRC_Continue;
}


/*
** These routines are Walker callbacks used to check expressions to
** see if they are "constant" for some definition of constant.  The
** Walker.eCode value determines the type of "constant" we are looking
(trunk lines 124259-124272 / release lines 124217-124233)
    /* Test for cycles in generated columns and illegal expressions
    ** in CHECK constraints and in DEFAULT clauses. */
    if( p->tabFlags & TF_HasGenerated ){
      sqlite3VdbeAddOp4(v, OP_SqlExec, 0x0001, 0, 0,
             sqlite3MPrintf(db, "SELECT*FROM\"%w\".\"%w\"",
                   db->aDb[iDb].zDbSName, p->zName), P4_DYNAMIC);
    }
    sqlite3VdbeAddOp4(v, OP_SqlExec, 0x0001, 0, 0,
           sqlite3MPrintf(db, "PRAGMA \"%w\".integrity_check(%Q)",
                 db->aDb[iDb].zDbSName, p->zName), P4_DYNAMIC);
  }

  /* Add the table to the in-memory representation of the database.
  */
  if( db->init.busy ){
    Table *pOld;
    Schema *pSchema = p->pSchema;
(trunk lines 140388-140402 / release lines 140349-140363)
    /* Initialize the VDBE program */
    pParse->nMem = 6;

    /* Set the maximum error count */
    mxErr = SQLITE_INTEGRITY_CHECK_ERROR_MAX;
    if( zRight ){
      if( sqlite3GetInt32(pValue->z, &mxErr) ){
      if( sqlite3GetInt32(zRight, &mxErr) ){
        if( mxErr<=0 ){
          mxErr = SQLITE_INTEGRITY_CHECK_ERROR_MAX;
        }
      }else{
        pObjTab = sqlite3LocateTable(pParse, 0, zRight,
                      iDb>=0 ? db->aDb[iDb].zDbSName : 0);
      }
(trunk lines 141596-141610 / release lines 141557-141570)
}

/* Clear all content from pragma virtual table cursor. */
static void pragmaVtabCursorClear(PragmaVtabCursor *pCsr){
  int i;
  sqlite3_finalize(pCsr->pPragma);
  pCsr->pPragma = 0;
  pCsr->iRowid = 0;
  for(i=0; i<ArraySize(pCsr->azArg); i++){
    sqlite3_free(pCsr->azArg[i]);
    pCsr->azArg[i] = 0;
  }
}

/* Close a pragma virtual table cursor */
(trunk lines 145182-145211 / release lines 145142-145166)
  while( pSelect->pPrior ) pSelect = pSelect->pPrior;
  a = pSelect->pEList->a;
  memset(&sNC, 0, sizeof(sNC));
  sNC.pSrcList = pSelect->pSrc;
  for(i=0, pCol=pTab->aCol; i<pTab->nCol; i++, pCol++){
    const char *zType;
    i64 n;
    int m = 0;
    Select *pS2 = pSelect;
    pTab->tabFlags |= (pCol->colFlags & COLFLAG_NOINSERT);
    p = a[i].pExpr;
    /* pCol->szEst = ... // Column size est for SELECT tables never used */
    pCol->affinity = sqlite3ExprAffinity(p);
    while( pCol->affinity<=SQLITE_AFF_NONE && pS2->pNext!=0 ){
      m |= sqlite3ExprDataType(pS2->pEList->a[i].pExpr);
      pS2 = pS2->pNext;
      pCol->affinity = sqlite3ExprAffinity(pS2->pEList->a[i].pExpr);
    }
    if( pCol->affinity<=SQLITE_AFF_NONE ){
      pCol->affinity = aff;
    }
    if( pCol->affinity>=SQLITE_AFF_TEXT && (pS2->pNext || pS2!=pSelect) ){
      for(pS2=pS2->pNext; pS2; pS2=pS2->pNext){
    if( pCol->affinity>=SQLITE_AFF_TEXT && pSelect->pNext ){
      int m = 0;
      Select *pS2;
      for(m=0, pS2=pSelect->pNext; pS2; pS2=pS2->pNext){
        m |= sqlite3ExprDataType(pS2->pEList->a[i].pExpr);
      }
      if( pCol->affinity==SQLITE_AFF_TEXT && (m&0x01)!=0 ){
        pCol->affinity = SQLITE_AFF_BLOB;
      }else
      if( pCol->affinity>=SQLITE_AFF_NUMERIC && (m&0x02)!=0 ){
        pCol->affinity = SQLITE_AFF_BLOB;
(trunk lines 152620-152699 / release lines 152575-152588)
        pItem->fg.eEName = pList->a[i].fg.eEName;
      }
    }
  }
  return pNew;
}

/* If the Expr node is a subquery or an EXISTS operator or an IN operator that
** uses a subquery, and if the subquery is SF_Correlated, then mark the
** expression as EP_VarSelect.
*/
static int sqlite3ReturningSubqueryVarSelect(Walker *NotUsed, Expr *pExpr){
  UNUSED_PARAMETER(NotUsed);
  if( ExprUseXSelect(pExpr)
   && (pExpr->x.pSelect->selFlags & SF_Correlated)!=0
  ){
    testcase( ExprHasProperty(pExpr, EP_VarSelect) );
    ExprSetProperty(pExpr, EP_VarSelect);
  }
  return WRC_Continue;
}


/*
** If the SELECT references the table pWalker->u.pTab, then do two things:
**
**    (1) Mark the SELECT as as SF_Correlated.
**    (2) Set pWalker->eCode to non-zero so that the caller will know
**        that (1) has happened.
*/
static int sqlite3ReturningSubqueryCorrelated(Walker *pWalker, Select *pSelect){
  int i;
  SrcList *pSrc;
  assert( pSelect!=0 );
  pSrc = pSelect->pSrc;
  assert( pSrc!=0 );
  for(i=0; i<pSrc->nSrc; i++){
    if( pSrc->a[i].pTab==pWalker->u.pTab ){
      testcase( pSelect->selFlags & SF_Correlated );
      pSelect->selFlags |= SF_Correlated;
      pWalker->eCode = 1;
      break;
    }
  }
  return WRC_Continue;
}

/*
** Scan the expression list that is the argument to RETURNING looking
** for subqueries that depend on the table which is being modified in the
** statement that is hosting the RETURNING clause (pTab).  Mark all such
** subqueries as SF_Correlated.  If the subqueries are part of an
** expression, mark the expression as EP_VarSelect.
**
** https://sqlite.org/forum/forumpost/2c83569ce8945d39
*/
static void sqlite3ProcessReturningSubqueries(
  ExprList *pEList,
  Table *pTab
){
  Walker w;
  memset(&w, 0, sizeof(w));
  w.xExprCallback = sqlite3ExprWalkNoop;
  w.xSelectCallback = sqlite3ReturningSubqueryCorrelated;
  w.u.pTab = pTab;
  sqlite3WalkExprList(&w, pEList);
  if( w.eCode ){
    w.xExprCallback = sqlite3ReturningSubqueryVarSelect;
    w.xSelectCallback = sqlite3SelectWalkNoop;
    sqlite3WalkExprList(&w, pEList);
  }
}
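
For illustration only (table and column names are invented), the kind of statement this pass guards against is a RETURNING clause whose subquery reads the table being modified; a minimal program exercising one:

#include <stdio.h>
#include <sqlite3.h>

/* sqlite3_exec() callback: print each RETURNING row as name=value pairs. */
static int show(void *pUnused, int nCol, char **azVal, char **azCol){
  int i;
  (void)pUnused;
  for(i=0; i<nCol; i++) printf("%s=%s  ", azCol[i], azVal[i] ? azVal[i] : "NULL");
  printf("\n");
  return 0;
}

int main(void){
  sqlite3 *db;
  char *zErr = 0;
  if( sqlite3_open(":memory:", &db) ) return 1;
  sqlite3_exec(db,
    "CREATE TABLE t1(a INT, b INT);"
    "INSERT INTO t1 VALUES(1,10),(2,20);"
    "UPDATE t1 SET a=a+100 "
    "RETURNING a, (SELECT sum(b) FROM t1);",
    show, 0, &zErr);
  if( zErr ){ printf("error: %s\n", zErr); sqlite3_free(zErr); }
  sqlite3_close(db);
  return 0;
}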

/*
** Generate code for the RETURNING trigger.  Unlike other triggers
** that invoke a subprogram in the bytecode, the code for RETURNING
** is generated in-line.
*/
static void codeReturningTrigger(
  Parse *pParse,       /* Parse context */
(trunk lines 152722-152736 / release lines 152611-152624)
  }
  memset(&sSelect, 0, sizeof(sSelect));
  memset(&sFrom, 0, sizeof(sFrom));
  sSelect.pEList = sqlite3ExprListDup(db, pReturning->pReturnEL, 0);
  sSelect.pSrc = &sFrom;
  sFrom.nSrc = 1;
  sFrom.a[0].pTab = pTab;
  sFrom.a[0].zName = pTab->zName; /* tag-20240424-1 */
  sFrom.a[0].iCursor = -1;
  sqlite3SelectPrep(pParse, &sSelect, 0);
  if( pParse->nErr==0 ){
    assert( db->mallocFailed==0 );
    sqlite3GenerateColumnNames(pParse, &sSelect);
  }
  sqlite3ExprListDelete(db, sSelect.pEList);
(trunk lines 152749-152763 / release lines 152637-152650)
    pParse->pTriggerTab = pTab;
    if( sqlite3ResolveExprListNames(&sNC, pNew)==SQLITE_OK
     && ALWAYS(!db->mallocFailed)
    ){
      int i;
      int nCol = pNew->nExpr;
      int reg = pParse->nMem+1;
      sqlite3ProcessReturningSubqueries(pNew, pTab);
      pParse->nMem += nCol+2;
      pReturning->iRetReg = reg;
      for(i=0; i<nCol; i++){
        Expr *pCol = pNew->a[i].pExpr;
        assert( pCol!=0 ); /* Due to !db->mallocFailed ~9 lines above */
        sqlite3ExprCodeFactorable(pParse, pCol, reg+i);
        if( sqlite3ExprAffinity(pCol)==SQLITE_AFF_REAL ){
(trunk lines 158702-158736 / release lines 158589-158602)
      VdbeCoverage(pParse->pVdbe);
    }
    pLevel->regFilter = 0;
    pLevel->addrBrk = 0;
  }
}

/*
** Loop pLoop is a WHERE_INDEXED level that uses at least one IN(...)
** operator. Return true if level pLoop is guaranteed to visit only one
** row for each key generated for the index.
*/
static int whereLoopIsOneRow(WhereLoop *pLoop){
  if( pLoop->u.btree.pIndex->onError
   && pLoop->nSkip==0
   && pLoop->u.btree.nEq==pLoop->u.btree.pIndex->nKeyCol
  ){
    int ii;
    for(ii=0; ii<pLoop->u.btree.nEq; ii++){
      if( pLoop->aLTerm[ii]->eOperator & (WO_IS|WO_ISNULL) ){
        return 0;
      }
    }
    return 1;
  }
  return 0;
}

/*
** Generate code for the start of the iLevel-th loop in the WHERE clause
** implementation described by pWInfo.
*/
SQLITE_PRIVATE Bitmask sqlite3WhereCodeOneLoopStart(
  Parse *pParse,       /* Parsing context */
  Vdbe *v,             /* Prepared statement under construction */
(trunk lines 159470-159486 / release lines 159336-159350)
      /* The following assert() is not a requirement, merely an observation:
      ** The OR-optimization doesn't work for the right hand table of
      ** a LEFT JOIN: */
      assert( (pWInfo->wctrlFlags & (WHERE_OR_SUBCLAUSE|WHERE_RIGHT_JOIN))==0 );
    }

    /* Record the instruction used to terminate the loop. */
    if( (pLoop->wsFlags & WHERE_ONEROW)
    if( pLoop->wsFlags & WHERE_ONEROW ){
     || (pLevel->u.in.nIn && regBignull==0 && whereLoopIsOneRow(pLoop))
    ){
      pLevel->op = OP_Noop;
    }else if( bRev ){
      pLevel->op = OP_Prev;
    }else{
      pLevel->op = OP_Next;
    }
    pLevel->p1 = iIdxCur;
(trunk lines 160127-160145 / release lines 159991-160010)
    pRight = &pWInfo->pTabList->a[pWInfo->a[k].iFrom];
    mAll |= pWInfo->a[k].pWLoop->maskSelf;
    if( pRight->fg.viaCoroutine ){
      sqlite3VdbeAddOp3(
          v, OP_Null, 0, pRight->regResult,
          pRight->regResult + pRight->pSelect->pEList->nExpr-1
      );
    }
    sqlite3VdbeAddOp1(v, OP_NullRow, pWInfo->a[k].iTabCur);
    iIdxCur = pWInfo->a[k].iIdxCur;
    if( iIdxCur ){
      sqlite3VdbeAddOp1(v, OP_NullRow, iIdxCur);
    }else{
      sqlite3VdbeAddOp1(v, OP_NullRow, pWInfo->a[k].iTabCur);
      iIdxCur = pWInfo->a[k].iIdxCur;
      if( iIdxCur ){
        sqlite3VdbeAddOp1(v, OP_NullRow, iIdxCur);
      }
    }
  }
  if( (pTabItem->fg.jointype & JT_LTORJ)==0 ){
    mAll |= pLoop->maskSelf;
    for(k=0; k<pWC->nTerm; k++){
      WhereTerm *pTerm = &pWC->a[k];
      if( (pTerm->wtFlags & (TERM_VIRTUAL|TERM_SLICE))!=0
(trunk lines 161833-161869 / release lines 161698-161731)
        /* If this term has child terms, then they are also part of the
        ** pWC->a[] array. So this term can be ignored, as a LIMIT clause
        ** will only be added if each of the child terms passes the
        ** (leftCursor==iCsr) test below.  */
        continue;
      }
      if( pWC->a[ii].leftCursor!=iCsr ) return;
      if( pWC->a[ii].prereqRight!=0 ) return;
    }

    /* Check condition (5). Return early if it is not met. */
    if( pOrderBy ){
      for(ii=0; ii<pOrderBy->nExpr; ii++){
        Expr *pExpr = pOrderBy->a[ii].pExpr;
        if( pExpr->op!=TK_COLUMN ) return;
        if( pExpr->iTable!=iCsr ) return;
        if( pOrderBy->a[ii].fg.sortFlags & KEYINFO_ORDER_BIGNULL ) return;
      }
    }

    /* All conditions are met. Add the terms to the where-clause object. */
    assert( p->pLimit->op==TK_LIMIT );
    whereAddLimitExpr(pWC, p->iLimit, p->pLimit->pLeft,
                      iCsr, SQLITE_INDEX_CONSTRAINT_LIMIT);
    if( p->iOffset!=0 && (p->selFlags & SF_Compound)==0 ){
    if( p->iOffset>0 ){
      whereAddLimitExpr(pWC, p->iOffset, p->pLimit->pRight,
                        iCsr, SQLITE_INDEX_CONSTRAINT_OFFSET);
    }
    if( p->iOffset==0 || (p->selFlags & SF_Compound)==0 ){
      whereAddLimitExpr(pWC, p->iLimit, p->pLimit->pLeft,
                        iCsr, SQLITE_INDEX_CONSTRAINT_LIMIT);
    }
  }
}

/*
** Initialize a preallocated WhereClause structure.
*/
SQLITE_PRIVATE void sqlite3WhereClauseInit(
(trunk lines 162373-162422 / release lines 162235-162248)
  p = sqlite3ExprSkipCollateAndLikely(p->pRight);
  if( ALWAYS(p!=0) && p->op==TK_COLUMN && !ExprHasProperty(p, EP_FixedCol) ){
    return p;
  }
  return 0;
}

/*
** Term pTerm is guaranteed to be a WO_IN term. It may be a component term
** of a vector IN expression of the form "(x, y, ...) IN (SELECT ...)".
** This function checks to see if the term is compatible with an index
** column with affinity idxaff (one of the SQLITE_AFF_XYZ values). If so,
** it returns a pointer to the name of the collation sequence (e.g. "BINARY"
** or "NOCASE") used by the comparison in pTerm. If it is not compatible
** with affinity idxaff, NULL is returned.
*/
static SQLITE_NOINLINE const char *indexInAffinityOk(
  Parse *pParse,
  WhereTerm *pTerm,
  u8 idxaff
){
  Expr *pX = pTerm->pExpr;
  Expr inexpr;

  assert( pTerm->eOperator & WO_IN );

  if( sqlite3ExprIsVector(pX->pLeft) ){
    int iField = pTerm->u.x.iField - 1;
    inexpr.flags = 0;
    inexpr.op = TK_EQ;
    inexpr.pLeft = pX->pLeft->x.pList->a[iField].pExpr;
    assert( ExprUseXSelect(pX) );
    inexpr.pRight = pX->x.pSelect->pEList->a[iField].pExpr;
    pX = &inexpr;
  }

  if( sqlite3IndexAffinityOk(pX, idxaff) ){
    CollSeq *pRet = sqlite3ExprCompareCollSeq(pParse, pX);
    return pRet ? pRet->zName : sqlite3StrBINARY;
  }
  return 0;
}
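/* Editorial illustration, not part of this check-in: for a vector IN term
** such as
**
**     (a, b) IN (SELECT x, y FROM t2)
**
** the component term for column "b" carries pTerm->u.x.iField==2, so the
** code above builds a temporary TK_EQ node that compares "b" against "y"
** and checks that pair against the index column's affinity, returning the
** comparison's collation sequence name when compatible.  The table and
** column names are hypothetical. */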

/*
** Advance to the next WhereTerm that matches according to the criteria
** established when the pScan object was initialized by whereScanInit().
** Return NULL if there are no more matching WhereTerms.
*/
static WhereTerm *whereScanNext(WhereScan *pScan){
  int iCur;            /* The cursor on the LHS of the term */

              pScan->aiColumn[j] = pX->iColumn;
              pScan->nEquiv++;
            }
          }
          if( (pTerm->eOperator & pScan->opMask)!=0 ){
            /* Verify the affinity and collating sequence match */
            if( pScan->zCollName && (pTerm->eOperator & WO_ISNULL)==0 ){
              const char *zCollName;
              CollSeq *pColl;
              Parse *pParse = pWC->pWInfo->pParse;
              pX = pTerm->pExpr;

              if( (pTerm->eOperator & WO_IN) ){
                zCollName = indexInAffinityOk(pParse, pTerm, pScan->idxaff);
                if( !zCollName ) continue;
              }else{
                CollSeq *pColl;
                if( !sqlite3IndexAffinityOk(pX, pScan->idxaff) ){
                  continue;
                }
                assert(pX->pLeft);
                pColl = sqlite3ExprCompareCollSeq(pParse, pX);
              if( !sqlite3IndexAffinityOk(pX, pScan->idxaff) ){
                continue;
              }
              assert(pX->pLeft);
              pColl = sqlite3ExprCompareCollSeq(pParse, pX);
                zCollName = pColl ? pColl->zName : sqlite3StrBINARY;
              }

              if( sqlite3StrICmp(zCollName, pScan->zCollName) ){
              if( pColl==0 ) pColl = pParse->db->pDfltColl;
              if( sqlite3StrICmp(pColl->zName, pScan->zCollName) ){
                continue;
              }
            }
            if( (pTerm->eOperator & (WO_EQ|WO_IS))!=0
             && (pX = pTerm->pExpr->pRight, ALWAYS(pX!=0))
             && pX->op==TK_COLUMN
             && pX->iTable==pScan->aiCur[0]

*/
static int isLimitTerm(WhereTerm *pTerm){
  assert( pTerm->eOperator==WO_AUX || pTerm->eMatchOp==0 );
  return pTerm->eMatchOp>=SQLITE_INDEX_CONSTRAINT_LIMIT
      && pTerm->eMatchOp<=SQLITE_INDEX_CONSTRAINT_OFFSET;
}

/*
** Return true if the first nCons constraints in the pUsage array are
** marked as in-use (have argvIndex>0). False otherwise.
*/
static int allConstraintsUsed(
  struct sqlite3_index_constraint_usage *aUsage,
  int nCons
){
  int ii;
  for(ii=0; ii<nCons; ii++){
    if( aUsage[ii].argvIndex<=0 ) return 0;
  }
  return 1;
}

/*
** Argument pIdxInfo is already populated with all constraints that may
** be used by the virtual table identified by pBuilder->pNew->iTab. This
** function marks a subset of those constraints usable, invokes the
** xBestIndex method and adds the returned plan to pBuilder.
**
** A constraint is marked usable if:

        ** (2) Multiple outputs from a single IN value will not merge
        ** together.  */
        pIdxInfo->orderByConsumed = 0;
        pIdxInfo->idxFlags &= ~SQLITE_INDEX_SCAN_UNIQUE;
        *pbIn = 1; assert( (mExclude & WO_IN)==0 );
      }

      /* Unless pbRetryLimit is non-NULL, there should be no LIMIT/OFFSET
      ** terms. And if there are any, they should follow all other terms. */
      assert( pbRetryLimit || !isLimitTerm(pTerm) );
      assert( !isLimitTerm(pTerm) || i>=nConstraint-2 );
      assert( !isLimitTerm(pTerm) || i==nConstraint-1 || isLimitTerm(pTerm+1) );

      if( isLimitTerm(pTerm) && (*pbIn || !allConstraintsUsed(pUsage, i)) ){
      if( isLimitTerm(pTerm) && *pbIn ){
        /* If there is an IN(...) term handled as an == (separate call to
        ** xFilter for each value on the RHS of the IN) and a LIMIT or
        ** OFFSET term handled as well, the plan is unusable. Similarly,
        ** OFFSET term handled as well, the plan is unusable. Set output
        ** if there is a LIMIT/OFFSET and there are other unused terms,
        ** the plan cannot be used. In these cases set variable *pbRetryLimit
        ** to true to tell the caller to retry with LIMIT and OFFSET
        ** disabled. */
        ** variable *pbRetryLimit to true to tell the caller to retry with
        ** LIMIT and OFFSET disabled. */
        if( pIdxInfo->needToFreeIdxStr ){
          sqlite3_free(pIdxInfo->idxStr);
          pIdxInfo->idxStr = 0;
          pIdxInfo->needToFreeIdxStr = 0;
        }
        *pbRetryLimit = 1;
        return SQLITE_OK;

           (double)sqlite3LogEstToInt(pTab->nRowLogEst)));
      }
    }
    nSearch += pLoop->nOut;
  }
}

/*
** Expression Node callback for sqlite3ExprCanReturnSubtype().
**
** Only a function call is able to return a subtype.  So if the node
** is not a function call, return WRC_Prune immediately.
**
** A function call is able to return a subtype if it has the
** SQLITE_RESULT_SUBTYPE property.
**
** Assume that every function is able to pass-through a subtype from
** one of its argument (using sqlite3_result_value()).  Most functions
** are not this way, but we don't have a mechanism to distinguish those
** that are from those that are not, so assume they all work this way.
** That means that if one of its arguments is another function and that
** other function is able to return a subtype, then this function is
** able to return a subtype.
*/
static int exprNodeCanReturnSubtype(Walker *pWalker, Expr *pExpr){
  int n;
  FuncDef *pDef;
  sqlite3 *db;
  if( pExpr->op!=TK_FUNCTION ){
    return WRC_Prune;
  }
  assert( ExprUseXList(pExpr) );
  db = pWalker->pParse->db;
  n = pExpr->x.pList ? pExpr->x.pList->nExpr : 0;
  pDef = sqlite3FindFunction(db, pExpr->u.zToken, n, ENC(db), 0);
  if( pDef==0 || (pDef->funcFlags & SQLITE_RESULT_SUBTYPE)!=0 ){
    pWalker->eCode = 1;
    return WRC_Prune;
  }
  return WRC_Continue;
}

/*
** Return TRUE if expression pExpr is able to return a subtype.
**
** A TRUE return does not guarantee that a subtype will be returned.
** It only indicates that a subtype return is possible.  False positives
** are acceptable as they only disable an optimization.  False negatives,
** on the other hand, can lead to incorrect answers.
*/
static int sqlite3ExprCanReturnSubtype(Parse *pParse, Expr *pExpr){
  Walker w;
  memset(&w, 0, sizeof(w));
  w.pParse = pParse;
  w.xExprCallback = exprNodeCanReturnSubtype;
  sqlite3WalkExpr(&w, pExpr);
  return w.eCode;
}
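/* Editorial illustration, not part of this check-in: SQLite's JSON
** functions can attach a subtype to their results, so given a hypothetical
** index
**
**     CREATE INDEX i1 ON t1( json_extract(x,'$.a') );
**
** an expression that feeds json_extract(x,'$.a') into another JSON function
** should not be replaced by the stored index value, because the index does
** not preserve the subtype. */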

/*
** The index pIdx is used by a query and contains one or more expressions.
** In other words pIdx is an index on an expression.  iIdxCur is the cursor
** number for the index and iDataCur is the cursor number for the corresponding
** table.
**
** This routine adds IndexedExpr entries to the Parse->pIdxEpr field for

      pExpr = pIdx->aColExpr->a[i].pExpr;
    }else if( j>=0 && (pTab->aCol[j].colFlags & COLFLAG_VIRTUAL)!=0 ){
      pExpr = sqlite3ColumnExpr(pTab, &pTab->aCol[j]);
    }else{
      continue;
    }
    if( sqlite3ExprIsConstant(0,pExpr) ) continue;
    if( pExpr->op==TK_FUNCTION && sqlite3ExprCanReturnSubtype(pParse,pExpr) ){
    if( pExpr->op==TK_FUNCTION ){
      /* Functions that might set a subtype should not be replaced by the
      ** value taken from an expression index since the index omits the
      ** subtype.  https://sqlite.org/forum/forumpost/68d284c86b082c3e */
      int n;
      FuncDef *pDef;
      sqlite3 *db = pParse->db;
      assert( ExprUseXList(pExpr) );
      n = pExpr->x.pList ? pExpr->x.pList->nExpr : 0;
      pDef = sqlite3FindFunction(db, pExpr->u.zToken, n, ENC(db), 0);
      if( pDef==0 || (pDef->funcFlags & SQLITE_RESULT_SUBTYPE)!=0 ){
      continue;
        continue;
      }
    }
    p = sqlite3DbMallocRaw(pParse->db,  sizeof(IndexedExpr));
    if( p==0 ) break;
    p->pIENext = pParse->pIdxEpr;
#ifdef WHERETRACE_ENABLED
    if( sqlite3WhereTrace & 0x200 ){
      sqlite3DebugPrintf("New pParse->pIdxEpr term {%d,%d}\n", iIdxCur, i);

        assert( pLevel->iTabCur==pSrc->iCursor );
        if( pSrc->fg.viaCoroutine ){
          int m, n;
          n = pSrc->regResult;
          assert( pSrc->pTab!=0 );
          m = pSrc->pTab->nCol;
          sqlite3VdbeAddOp3(v, OP_Null, 0, n, n+m-1);
        }
        sqlite3VdbeAddOp1(v, OP_NullRow, pLevel->iTabCur);
        }else{
          sqlite3VdbeAddOp1(v, OP_NullRow, pLevel->iTabCur);
        }
      }
      if( (ws & WHERE_INDEXED)
       || ((ws & WHERE_MULTI_OR) && pLevel->u.pCoveringIdx)
      ){
        if( ws & WHERE_MULTI_OR ){
          Index *pIx = pLevel->u.pCoveringIdx;
          int iDb = sqlite3SchemaToIndex(db, pIx->pSchema);

#define TK_FILTER                         166
#define TK_COLUMN                         167
#define TK_AGG_FUNCTION                   168
#define TK_AGG_COLUMN                     169
#define TK_TRUEFALSE                      170
#define TK_ISNOT                          171
#define TK_FUNCTION                       172
#define TK_UPLUS                          173
#define TK_UMINUS                         174
#define TK_UMINUS                         173
#define TK_UPLUS                          174
#define TK_TRUTH                          175
#define TK_REGISTER                       176
#define TK_VECTOR                         177
#define TK_SELECT_COLUMN                  178
#define TK_IF_NULL_ROW                    179
#define TK_ASTERISK                       180
#define TK_SPAN                           181

    0,  /*     FILTER => nothing */
    0,  /*     COLUMN => nothing */
    0,  /* AGG_FUNCTION => nothing */
    0,  /* AGG_COLUMN => nothing */
    0,  /*  TRUEFALSE => nothing */
    0,  /*      ISNOT => nothing */
    0,  /*   FUNCTION => nothing */
    0,  /*      UPLUS => nothing */
    0,  /*     UMINUS => nothing */
    0,  /*      UPLUS => nothing */
    0,  /*      TRUTH => nothing */
    0,  /*   REGISTER => nothing */
    0,  /*     VECTOR => nothing */
    0,  /* SELECT_COLUMN => nothing */
    0,  /* IF_NULL_ROW => nothing */
    0,  /*   ASTERISK => nothing */
    0,  /*       SPAN => nothing */

  /*  166 */ "FILTER",
  /*  167 */ "COLUMN",
  /*  168 */ "AGG_FUNCTION",
  /*  169 */ "AGG_COLUMN",
  /*  170 */ "TRUEFALSE",
  /*  171 */ "ISNOT",
  /*  172 */ "FUNCTION",
  /*  173 */ "UPLUS",
  /*  174 */ "UMINUS",
  /*  173 */ "UMINUS",
  /*  174 */ "UPLUS",
  /*  175 */ "TRUTH",
  /*  176 */ "REGISTER",
  /*  177 */ "VECTOR",
  /*  178 */ "SELECT_COLUMN",
  /*  179 */ "IF_NULL_ROW",
  /*  180 */ "ASTERISK",
  /*  181 */ "SPAN",

        break;
      case 214: /* expr ::= NOT expr */
      case 215: /* expr ::= BITNOT expr */ yytestcase(yyruleno==215);
{yymsp[-1].minor.yy454 = sqlite3PExpr(pParse, yymsp[-1].major, yymsp[0].minor.yy454, 0);/*A-overwrites-B*/}
        break;
      case 216: /* expr ::= PLUS|MINUS expr */
{
  Expr *p = yymsp[0].minor.yy454;
  u8 op = yymsp[-1].major + (TK_UPLUS-TK_PLUS);
  assert( TK_UPLUS>TK_PLUS );
  assert( TK_UMINUS == TK_MINUS + (TK_UPLUS - TK_PLUS) );
  if( p && p->op==TK_UPLUS ){
    p->op = op;
    yymsp[-1].minor.yy454 = p;
  }else{
    yymsp[-1].minor.yy454 = sqlite3PExpr(pParse, op, p, 0);
    /*A-overwrites-B*/
  yymsp[-1].minor.yy454 = sqlite3PExpr(pParse, yymsp[-1].major==TK_PLUS ? TK_UPLUS : TK_UMINUS, yymsp[0].minor.yy454, 0);
  /*A-overwrites-B*/
  }
}
        break;
      case 217: /* expr ::= expr PTR expr */
{
  ExprList *pList = sqlite3ExprListAppend(pParse, 0, yymsp[-2].minor.yy454);
  pList = sqlite3ExprListAppend(pParse, pList, yymsp[0].minor.yy454);
  yylhsminor.yy454 = sqlite3ExprFunction(pParse, pList, &yymsp[-1].minor.yy0, 0);

  /* Make sure the buffer contains at least 10 bytes of input data, or all
  ** remaining data if there are less than 10 bytes available. This is
  ** sufficient either for the 'T' or 'P' byte and the varint that follows
  ** it, or for the two single byte values otherwise. */
  p->rc = sessionInputBuffer(&p->in, 2);
  if( p->rc!=SQLITE_OK ) return p->rc;

  sessionDiscardData(&p->in);
  p->in.iCurrent = p->in.iNext;

  /* If the iterator is already at the end of the changeset, return DONE. */
  if( p->in.iNext>=p->in.nData ){
    return SQLITE_DONE;
  }

  sessionDiscardData(&p->in);
  p->in.iCurrent = p->in.iNext;

  op = p->in.aData[p->in.iNext++];
  while( op=='T' || op=='P' ){
    if( pbNew ) *pbNew = 1;
    p->bPatchset = (op=='P');
    if( sessionChangesetReadTblhdr(p) ) return p->rc;
    if( (p->rc = sessionInputBuffer(&p->in, 2)) ) return p->rc;
    p->in.iCurrent = p->in.iNext;

/*
** sqlite3_changegroup handle.
*/
struct sqlite3_changegroup {
  int rc;                         /* Error code */
  int bPatch;                     /* True to accumulate patchsets */
  SessionTable *pList;            /* List of tables in current patch */
  SessionBuffer rec;

  sqlite3 *db;                    /* Configured by changegroup_schema() */
  char *zDb;                      /* Configured by changegroup_schema() */
};

/*
** This function is called to merge two changes to the same row together as

    sessionAppendBlob(pOut, aRec, nRec, &rc);
  }

  return rc;
}

/*
** Locate or create a SessionTable object that may be used to add the
** change currently pointed to by iterator pIter to changegroup pGrp.
** Add all changes in the changeset traversed by the iterator passed as
** the first argument to the changegroup hash tables.
** If successful, set output variable (*ppTab) to point to the table
** object and return SQLITE_OK. Otherwise, if some error occurs, return
** an SQLite error code and leave (*ppTab) set to NULL.
*/
static int sessionChangesetFindTable(
static int sessionChangesetToHash(
  sqlite3_changegroup *pGrp,
  const char *zTab,
  sqlite3_changeset_iter *pIter,
  sqlite3_changeset_iter *pIter,   /* Iterator to read from */
  SessionTable **ppTab
){
  int rc = SQLITE_OK;
  SessionTable *pTab = 0;
  int nTab = (int)strlen(zTab);
  u8 *abPK = 0;
  int nCol = 0;

  sqlite3_changegroup *pGrp,       /* Changegroup object to add changeset to */
  *ppTab = 0;
  sqlite3changeset_pk(pIter, &abPK, &nCol);

  int bRebase                      /* True if hash table is for rebasing */
  /* Search the list for an existing table */
  for(pTab = pGrp->pList; pTab; pTab=pTab->pNext){
    if( 0==sqlite3_strnicmp(pTab->zName, zTab, nTab+1) ) break;
  }

  /* If one was not found above, create a new table now */
  if( !pTab ){
    SessionTable **ppNew;

    pTab = sqlite3_malloc64(sizeof(SessionTable) + nCol + nTab+1);
    if( !pTab ){
      return SQLITE_NOMEM;
    }
    memset(pTab, 0, sizeof(SessionTable));
    pTab->nCol = nCol;
    pTab->abPK = (u8*)&pTab[1];
    memcpy(pTab->abPK, abPK, nCol);
    pTab->zName = (char*)&pTab->abPK[nCol];
    memcpy(pTab->zName, zTab, nTab+1);

    if( pGrp->db ){
      pTab->nCol = 0;
      rc = sessionInitTable(0, pTab, pGrp->db, pGrp->zDb);
      if( rc ){
){
        assert( pTab->azCol==0 );
        sqlite3_free(pTab);
        return rc;
      }
    }

  u8 *aRec;
    /* The new object must be linked on to the end of the list, not
    ** simply added to the start of it. This is to ensure that the
    ** tables within the output of sqlite3changegroup_output() are in
    ** the right order.  */
    for(ppNew=&pGrp->pList; *ppNew; ppNew=&(*ppNew)->pNext);
    *ppNew = pTab;
  }

  int nRec;
  /* Check that the table is compatible. */
  if( !sessionChangesetCheckCompat(pTab, nCol, abPK) ){
    rc = SQLITE_SCHEMA;
  int rc = SQLITE_OK;
  }

  *ppTab = pTab;
  return rc;
}

  SessionTable *pTab = 0;
  SessionBuffer rec = {0, 0, 0};

  while( SQLITE_ROW==sessionChangesetNext(pIter, &aRec, &nRec, 0) ){
/*
** Add the change currently indicated by iterator pIter to the hash table
** belonging to changegroup pGrp.
    const char *zNew;
*/
static int sessionOneChangeToHash(
  sqlite3_changegroup *pGrp,
  sqlite3_changeset_iter *pIter,
  int bRebase
){
  int rc = SQLITE_OK;
  int nCol = 0;
  int op = 0;
  int iHash = 0;
  int bIndirect = 0;
  SessionChange *pChange = 0;
  SessionChange *pExist = 0;
  SessionChange **pp = 0;
    int nCol;
    int op;
    int iHash;
    int bIndirect;
    SessionChange *pChange;
    SessionChange *pExist = 0;
    SessionChange **pp;
  SessionTable *pTab = 0;
  u8 *aRec = &pIter->in.aData[pIter->in.iCurrent + 2];
  int nRec = (pIter->in.iNext - pIter->in.iCurrent) - 2;

  /* Ensure that only changesets, or only patchsets, but not a mixture
  ** of both, are being combined. It is an error to try to combine a
  ** changeset and a patchset.  */
  if( pGrp->pList==0 ){
    pGrp->bPatch = pIter->bPatchset;
  }else if( pIter->bPatchset!=pGrp->bPatch ){
    rc = SQLITE_ERROR;
  }
    /* Ensure that only changesets, or only patchsets, but not a mixture
    ** of both, are being combined. It is an error to try to combine a
    ** changeset and a patchset.  */
    if( pGrp->pList==0 ){
      pGrp->bPatch = pIter->bPatchset;
    }else if( pIter->bPatchset!=pGrp->bPatch ){
      rc = SQLITE_ERROR;
      break;
    }

    sqlite3changeset_op(pIter, &zNew, &nCol, &op, &bIndirect);
  if( rc==SQLITE_OK ){
    const char *zTab = 0;
    sqlite3changeset_op(pIter, &zTab, &nCol, &op, &bIndirect);
    rc = sessionChangesetFindTable(pGrp, zTab, pIter, &pTab);
  }

  if( rc==SQLITE_OK && nCol<pTab->nCol ){
    SessionBuffer *pBuf = &pGrp->rec;
    rc = sessionChangesetExtendRecord(pGrp, pTab, nCol, op, aRec, nRec, pBuf);
    aRec = pBuf->aBuf;
    nRec = pBuf->nBuf;
    assert( pGrp->db );
  }

  if( rc==SQLITE_OK && sessionGrowHash(0, pIter->bPatchset, pTab) ){
    rc = SQLITE_NOMEM;
  }

  if( rc==SQLITE_OK ){
    /* Search for existing entry. If found, remove it from the hash table.
    if( !pTab || sqlite3_stricmp(zNew, pTab->zName) ){
      /* Search the list for a matching table */
      int nNew = (int)strlen(zNew);
      u8 *abPK;

      sqlite3changeset_pk(pIter, &abPK, 0);
      for(pTab = pGrp->pList; pTab; pTab=pTab->pNext){
        if( 0==sqlite3_strnicmp(pTab->zName, zNew, nNew+1) ) break;
      }
      if( !pTab ){
        SessionTable **ppTab;

        pTab = sqlite3_malloc64(sizeof(SessionTable) + nCol + nNew+1);
        if( !pTab ){
          rc = SQLITE_NOMEM;
          break;
        }
        memset(pTab, 0, sizeof(SessionTable));
        pTab->nCol = nCol;
        pTab->abPK = (u8*)&pTab[1];
        memcpy(pTab->abPK, abPK, nCol);
        pTab->zName = (char*)&pTab->abPK[nCol];
        memcpy(pTab->zName, zNew, nNew+1);

        if( pGrp->db ){
          pTab->nCol = 0;
          rc = sessionInitTable(0, pTab, pGrp->db, pGrp->zDb);
          if( rc ){
            assert( pTab->azCol==0 );
            sqlite3_free(pTab);
            break;
          }
        }

        /* The new object must be linked on to the end of the list, not
        ** simply added to the start of it. This is to ensure that the
        ** tables within the output of sqlite3changegroup_output() are in
        ** the right order.  */
        for(ppTab=&pGrp->pList; *ppTab; ppTab=&(*ppTab)->pNext);
        *ppTab = pTab;
      }

      if( !sessionChangesetCheckCompat(pTab, nCol, abPK) ){
        rc = SQLITE_SCHEMA;
        break;
      }
    }

    if( nCol<pTab->nCol ){
      assert( pGrp->db );
      rc = sessionChangesetExtendRecord(pGrp, pTab, nCol, op, aRec, nRec, &rec);
      if( rc ) break;
      aRec = rec.aBuf;
      nRec = rec.nBuf;
    }

    ** Code below may link it back in.  */
    if( sessionGrowHash(0, pIter->bPatchset, pTab) ){
      rc = SQLITE_NOMEM;
      break;
    }
    iHash = sessionChangeHash(
        pTab, (pIter->bPatchset && op==SQLITE_DELETE), aRec, pTab->nChange
    );

    /* Search for existing entry. If found, remove it from the hash table.
    ** Code below may link it back in.
    */
    for(pp=&pTab->apChange[iHash]; *pp; pp=&(*pp)->pNext){
      int bPkOnly1 = 0;
      int bPkOnly2 = 0;
      if( pIter->bPatchset ){
        bPkOnly1 = (*pp)->op==SQLITE_DELETE;
        bPkOnly2 = op==SQLITE_DELETE;
      }
      if( sessionChangeEqual(pTab, bPkOnly1, (*pp)->aRecord, bPkOnly2, aRec) ){
        pExist = *pp;
        *pp = (*pp)->pNext;
        pTab->nEntry--;
        break;
      }
    }
  }


  if( rc==SQLITE_OK ){
    rc = sessionChangeMerge(pTab, bRebase,
        pIter->bPatchset, pExist, op, bIndirect, aRec, nRec, &pChange
    );
  }
  if( rc==SQLITE_OK && pChange ){
    pChange->pNext = pTab->apChange[iHash];
    pTab->apChange[iHash] = pChange;
    pTab->nEntry++;
  }

    if( rc ) break;
    if( pChange ){
      pChange->pNext = pTab->apChange[iHash];
      pTab->apChange[iHash] = pChange;
      pTab->nEntry++;
    }
  }
  if( rc==SQLITE_OK ) rc = pIter->rc;
  return rc;
}


  sqlite3_free(rec.aBuf);
/*
** Add all changes in the changeset traversed by the iterator passed as
** the first argument to the changegroup hash tables.
*/
static int sessionChangesetToHash(
  sqlite3_changeset_iter *pIter,   /* Iterator to read from */
  sqlite3_changegroup *pGrp,       /* Changegroup object to add changeset to */
  int bRebase                      /* True if hash table is for rebasing */
){
  u8 *aRec;
  int nRec;
  int rc = SQLITE_OK;

  while( SQLITE_ROW==(sessionChangesetNext(pIter, &aRec, &nRec, 0)) ){
    rc = sessionOneChangeToHash(pGrp, pIter, bRebase);
    if( rc!=SQLITE_OK ) break;
  }

  if( rc==SQLITE_OK ) rc = pIter->rc;
  return rc;
}

/*
** Serialize a changeset (or patchset) based on all changesets (or patchsets)
** added to the changegroup object passed as the first argument.

  if( rc==SQLITE_OK ){
    rc = sessionChangesetToHash(pIter, pGrp, 0);
  }
  sqlite3changeset_finalize(pIter);
  return rc;
}

/*
** Add a single change to a changeset-group.
*/
SQLITE_API int sqlite3changegroup_add_change(
  sqlite3_changegroup *pGrp,
  sqlite3_changeset_iter *pIter
){
  if( pIter->in.iCurrent==pIter->in.iNext
   || pIter->rc!=SQLITE_OK
   || pIter->bInvert
  ){
    /* Iterator does not point to any valid entry or is an INVERT iterator. */
    return SQLITE_ERROR;
  }
  return sessionOneChangeToHash(pGrp, pIter, 0);
}

/*
** Obtain a buffer containing a changeset representing the concatenation
** of all changesets added to the group so far.
*/
SQLITE_API int sqlite3changegroup_output(
    sqlite3_changegroup *pGrp,
    int *pnData,

/*
** Delete a changegroup object.
*/
SQLITE_API void sqlite3changegroup_delete(sqlite3_changegroup *pGrp){
  if( pGrp ){
    sqlite3_free(pGrp->zDb);
    sessionDeleteTable(0, pGrp->pList);
    sqlite3_free(pGrp->rec.aBuf);
    sqlite3_free(pGrp);
  }
}
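/* Editorial sketch, not part of this check-in: a minimal use of the public
** changegroup lifecycle that the destructor above tears down.  The two
** input changesets are assumed to have been obtained elsewhere (for
** example from sqlite3session_changeset()); on success the caller must
** release *paOut with sqlite3_free(). */
static int combine_changesets(
  void *aCs1, int nCs1,        /* First changeset blob */
  void *aCs2, int nCs2,        /* Second changeset blob */
  void **paOut, int *pnOut     /* OUT: combined changeset */
){
  sqlite3_changegroup *pGrp = 0;
  int rc = sqlite3changegroup_new(&pGrp);
  if( rc==SQLITE_OK ) rc = sqlite3changegroup_add(pGrp, nCs1, aCs1);
  if( rc==SQLITE_OK ) rc = sqlite3changegroup_add(pGrp, nCs2, aCs2);
  if( rc==SQLITE_OK ) rc = sqlite3changegroup_output(pGrp, pnOut, paOut);
  sqlite3changegroup_delete(pGrp);  /* Frees the table list and rec buffer */
  return rc;
}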

/*
** Combine two changesets together.
*/

/*
** Destroy a rebaser object
*/
SQLITE_API void sqlite3rebaser_delete(sqlite3_rebaser *p){
  if( p ){
    sessionDeleteTable(0, p->grp.pList);
    sqlite3_free(p->grp.rec.aBuf);
    sqlite3_free(p);
  }
}

/*
** Global configuration
*/

static void fts5SourceIdFunc(
  sqlite3_context *pCtx,          /* Function call context */
  int nArg,                       /* Number of args */
  sqlite3_value **apUnused        /* Function arguments */
){
  assert( nArg==0 );
  UNUSED_PARAM2(nArg, apUnused);
  sqlite3_result_text(pCtx, "fts5: 2024-05-08 11:51:56 42d67c6fed3a5f21d7b71515aca471ba61d387e620022735a2e7929fa3a237cf", -1, SQLITE_TRANSIENT);
  sqlite3_result_text(pCtx, "fts5: 2024-04-18 16:11:01 8c0f69e0e4ae0a446838cc193bfd4395fd251f3c7659b35ac388e5a0a7650a66", -1, SQLITE_TRANSIENT);
}

/*
** Return true if zName is the extension on one of the shadow tables used
** by this module.
*/
static int fts5ShadowName(const char *zName){

Changes to extsrc/sqlite3.h.


**
** See also: [sqlite3_libversion()],
** [sqlite3_libversion_number()], [sqlite3_sourceid()],
** [sqlite_version()] and [sqlite_source_id()].
*/
#define SQLITE_VERSION        "3.46.0"
#define SQLITE_VERSION_NUMBER 3046000
#define SQLITE_SOURCE_ID      "2024-05-08 11:51:56 42d67c6fed3a5f21d7b71515aca471ba61d387e620022735a2e7929fa3a237cf"
#define SQLITE_SOURCE_ID      "2024-04-18 16:11:01 8c0f69e0e4ae0a446838cc193bfd4395fd251f3c7659b35ac388e5a0a7650a66"

/*
** CAPI3REF: Run-Time Library Version Numbers
** KEYWORDS: sqlite3_version sqlite3_sourceid
**
** These interfaces provide the same information as the [SQLITE_VERSION],
** [SQLITE_VERSION_NUMBER], and [SQLITE_SOURCE_ID] C preprocessor macros

** detected, SQLITE_CORRUPT is returned. Or, if an out-of-memory condition
** occurs during processing, this function returns SQLITE_NOMEM.
**
** In all cases, if an error occurs the state of the final contents of the
** changegroup is undefined. If no error occurs, SQLITE_OK is returned.
*/
SQLITE_API int sqlite3changegroup_add(sqlite3_changegroup*, int nData, void *pData);

/*
** CAPI3REF: Add A Single Change To A Changegroup
** METHOD: sqlite3_changegroup
**
** This function adds the single change currently indicated by the iterator
** passed as the second argument to the changegroup object. The rules for
** adding the change are just as described for [sqlite3changegroup_add()].
**
** If the change is successfully added to the changegroup, SQLITE_OK is
** returned. Otherwise, an SQLite error code is returned.
**
** The iterator must point to a valid entry when this function is called.
** If it does not, SQLITE_ERROR is returned and no change is added to the
** changegroup. Additionally, the iterator must not have been opened with
** the SQLITE_CHANGESETAPPLY_INVERT flag. In this case SQLITE_ERROR is also
** returned.
*/
SQLITE_API int sqlite3changegroup_add_change(
  sqlite3_changegroup*,
  sqlite3_changeset_iter*
);
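/* Editorial sketch, not part of this check-in: a minimal use of the
** interface documented above.  It copies only the changes that touch the
** hypothetical table "t1" out of an existing changeset, one change at a
** time.  Error handling is abbreviated. */
static int add_t1_changes_only(
  sqlite3_changegroup *pGrp,          /* Changegroup to populate */
  int nChangeset, void *pChangeset    /* Changeset blob obtained elsewhere */
){
  sqlite3_changeset_iter *pIter = 0;
  int rc = sqlite3changeset_start(&pIter, nChangeset, pChangeset);
  if( rc!=SQLITE_OK ) return rc;
  while( sqlite3changeset_next(pIter)==SQLITE_ROW ){
    const char *zTab = 0;
    int nCol = 0, op = 0, bIndirect = 0;
    sqlite3changeset_op(pIter, &zTab, &nCol, &op, &bIndirect);
    if( sqlite3_stricmp(zTab, "t1")==0 ){
      rc = sqlite3changegroup_add_change(pGrp, pIter);
      if( rc!=SQLITE_OK ) break;
    }
  }
  {
    int rc2 = sqlite3changeset_finalize(pIter);
    if( rc==SQLITE_OK ) rc = rc2;
  }
  return rc;
}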



/*
** CAPI3REF: Obtain A Composite Changeset From A Changegroup
** METHOD: sqlite3_changegroup
**
** Obtain a buffer containing a changeset (or patchset) representing the
** current contents of the changegroup. If the inputs to the changegroup

Changes to src/add.c.


  vid = db_lget_int("checkout", 0);
  if( vid==0 ){
    fossil_fatal("no check-out in which to rename files");
  }
  if( g.argc<4 ){
    usage("OLDNAME NEWNAME");
  }
  zDest = file_case_preferred_name(".",g.argv[g.argc-1]);
  zDest = g.argv[g.argc-1];
  db_begin_transaction();
  if( g.argv[1][0]=='r' ){ /* i.e. "rename" */
    moveFiles = 0;
  }else if( softFlag ){
    moveFiles = 0;
  }else if( hardFlag ){
    moveFiles = 1;

    mv_one_file(vid, zFrom, zTo, dryRunFlag);
    if( moveFiles ) add_file_to_move(zFrom, zTo);
  }
  db_finalize(&q);
  undo_reset();
  db_end_transaction(0);
  if( moveFiles ) process_files_to_move(dryRunFlag);
  fossil_free(zDest);
}

/*
** Function for stash_apply to be able to restore a file and indicate
** newly ADDED state.
*/
int stash_add_files_in_sfile(int vid){
  return add_files_in_sfile(vid);
}

Changes to src/checkin.c.


  /* The status command ends with warnings about ambiguous leaves (forks). */
  if( command==STATUS ){
    leaf_ambiguity_warning(vid, vid);
  }
}

/* zIn is a string that is guaranteed to be followed by \n.  Return
** a pointer to the next line after the \n.  The returned value might
** point to the \000 string terminator.
*/
static const char *next_line(const char *zIn){
  const char *z = strchr(zIn, '\n');
  assert( z!=0 );
  return z+1;
}

/* zIn is a non-empty list of filenames in sorted order and separated
** by \n.  There might be a cluster of lines that have the same n-character
** prefix.  Return a pointer to the start of the last line of that
** cluster.  The return value might be zIn if the first line of zIn is
** unique in its first n characters.
*/
static const char *last_line(const char *zIn, int n){
  const char *zLast = zIn;
  const char *z;
  while( 1 ){
    z = next_line(zLast);
    if( z[0]==0 || (n>0 && strncmp(zIn, z, n)!=0) ) break;
    zLast = z;
  }
  return zLast;
}

/*
** Print a section of a filelist hierarchy graph.  This is a helper
** routine for print_filelist_as_tree() below.
*/
static const char *print_filelist_section(
  const char *zIn,           /* List of filenames, separated by \n */
  const char *zLast,         /* Last filename in the list to print */
  const char *zPrefix,       /* Prefix so put before each output line */
  int nDir                   /* Ignore this many characters of directory name */
){
  /* Unicode box-drawing characters: U+251C, U+2514, U+2502 */
  const char *zENTRY = "\342\224\234\342\224\200\342\224\200 ";
  const char *zLASTE = "\342\224\224\342\224\200\342\224\200 ";   
  const char *zCONTU = "\342\224\202   ";
  const char *zBLANK = "    ";

  while( zIn<=zLast ){
    int i;
    for(i=nDir; zIn[i]!='\n' && zIn[i]!='/'; i++){}
    if( zIn[i]=='/' ){
      char *zSubPrefix;
      const char *zSubLast = last_line(zIn, i+1);
      zSubPrefix = mprintf("%s%s", zPrefix, zSubLast==zLast ? zBLANK : zCONTU);
      fossil_print("%s%s%.*s\n", zPrefix, zSubLast==zLast ? zLASTE : zENTRY,
                   i-nDir, &zIn[nDir]);
      zIn = print_filelist_section(zIn, zSubLast, zSubPrefix, i+1);
      fossil_free(zSubPrefix);
    }else{
      fossil_print("%s%s%.*s\n", zPrefix, zIn==zLast ? zLASTE : zENTRY,
                   i-nDir, &zIn[nDir]);
      zIn = next_line(zIn);
    }
  }
  return zIn;
}

/*
** Input blob pList is a list of filenames, one filename per line,
** in sorted order and with / directory separators.  Output this list
** as a tree in a manner similar to the "tree" command on Linux.
*/
static void print_filelist_as_tree(Blob *pList){
  char *zAll;
  const char *zLast;
  fossil_print("%s\n", g.zLocalRoot);
  zAll = blob_str(pList);
  if( zAll[0] ){
    zLast = last_line(zAll, 0);
    print_filelist_section(zAll, zLast, "", 0);
  }
}
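/* Editorial illustration, not part of this check-in: given the hypothetical
** sorted input list
**
**     src/add.c
**     src/checkin.c
**     www/index.md
**
** print_filelist_as_tree() produces output along the lines of
**
**     /home/user/project/
**     ├── src
**     │   ├── add.c
**     │   └── checkin.c
**     └── www
**         └── index.md
**
** where the check-out root comes from g.zLocalRoot. */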

/*
** Take care of -r version of ls command
*/
static void ls_cmd_rev(
  const char *zRev,  /* Revision string given */
  int verboseFlag,   /* Verbose flag given */
  int showAge,       /* Age flag given */
  int timeOrder,     /* Order by time flag given */
  int timeOrder      /* Order by time flag given */
  int treeFmt        /* Show output in the tree format */
){
  Stmt q;
  char *zOrderBy = "pathname COLLATE nocase";
  char *zName;
  Blob where;
  int rid;
  int i;
  Blob out;

  /* Handle given file names */
  blob_zero(&where);
  for(i=2; i<g.argc; i++){
    Blob fname;
    file_tree_name(g.argv[i], &fname, 0, 1);
    zName = blob_str(&fname);

    "SELECT datetime(fileage.mtime, toLocal()), fileage.pathname,\n"
    "       blob.size\n"
    "  FROM fileage, blob\n"
    " WHERE blob.rid=fileage.fid %s\n"
    " ORDER BY %s;", blob_sql_text(&where), zOrderBy /*safe-for-%s*/
  );
  blob_reset(&where);
  if( treeFmt ) blob_init(&out, 0, 0);

  while( db_step(&q)==SQLITE_ROW ){
    const char *zTime = db_column_text(&q,0);
    const char *zFile = db_column_text(&q,1);
    int size = db_column_int(&q,2);
    if( treeFmt ){
      blob_appendf(&out, "%s\n", zFile);
    }else if( verboseFlag ){
    if( verboseFlag ){
      fossil_print("%s  %7d  %s\n", zTime, size, zFile);
    }else if( showAge ){
      fossil_print("%s  %s\n", zTime, zFile);
    }else{
      fossil_print("%s\n", zFile);
    }
  }
  db_finalize(&q);
  if( treeFmt ){
    print_filelist_as_tree(&out);
    blob_reset(&out);
  }
}

/*
** COMMAND: ls
**
** Usage: %fossil ls ?OPTIONS? ?PATHS ...?
**

**
** The -t option changes the sort order.  Without -t, files are sorted by
** path and name (case insensitive sort if -r).  If neither --age nor -r
** are used, -t sorts by modification time, otherwise by commit time.
**
** Options:
**   --age                 Show when each file was committed
**   --hash                With -v, verify file status using hashing
**                         rather than relying on file sizes and mtimes
**   -v|--verbose          Provide extra information about each file
**   -t                    Sort output in time order
**   -r VERSION            The specific check-in to list
**   -R|--repository REPO  Extract info from repository REPO
**   -t                    Sort output in time order
**   --tree                Tree format
**   --hash                With -v, verify file status using hashing
**                         rather than relying on file sizes and mtimes
**   -v|--verbose          Provide extra information about each file
**
** See also: [[changes]], [[extras]], [[status]], [[tree]]
** See also: [[changes]], [[extras]], [[status]]
*/
void ls_cmd(void){
  int vid;
  Stmt q;
  int verboseFlag;
  int showAge;
  int treeFmt;
  int timeOrder;
  char *zOrderBy = "pathname";
  Blob where;
  int i;
  int useHash = 0;
  const char *zName;
  const char *zRev;

  verboseFlag = find_option("verbose","v", 0)!=0;
  if( !verboseFlag ){
    verboseFlag = find_option("l","l", 0)!=0; /* deprecated */
  }
  showAge = find_option("age",0,0)!=0;
  zRev = find_option("r","r",1);
  timeOrder = find_option("t","t",0)!=0;
  if( verboseFlag ){
    useHash = find_option("hash",0,0)!=0;
  }
  treeFmt = find_option("tree",0,0)!=0;
  if( treeFmt ){
    if( zRev==0 ) zRev = "current";
  }

  if( zRev!=0 ){
    db_find_and_open_repository(0, 0);
    verify_all_options();
    ls_cmd_rev(zRev,verboseFlag,showAge,timeOrder,treeFmt);
    ls_cmd_rev(zRev,verboseFlag,showAge,timeOrder);
    return;
  }else if( find_option("R",0,1)!=0 ){
    fossil_fatal("the -r is required in addition to -R");
  }

  db_must_be_within_tree();
  vid = db_lget_int("checkout", 0);

      fossil_print("%s%s\n", type, zPathname);
    }
    free(zFullName);
  }
  db_finalize(&q);
}

/*
** COMMAND: tree
**
** Usage: %fossil tree ?OPTIONS? ?PATHS ...?
**
** List all files in the current check-out after the fashion of the
** "tree" command.  If PATHS is included, only the named files
** (or their children if directories) are shown.
**
** Options:
**   -r VERSION            The specific check-in to list
**   -R|--repository REPO  Extract info from repository REPO
**
** See also: [[ls]]
*/
void tree_cmd(void){
  const char *zRev;

  zRev = find_option("r","r",1);
  if( zRev==0 ) zRev = "current";
  db_find_and_open_repository(0, 0);
  verify_all_options();
  ls_cmd_rev(zRev,0,0,0,1);
}

/*
** COMMAND: extras
**
** Usage: %fossil extras ?OPTIONS? ?PATH1 ...?
**
** Print a list of all files in the source tree that are not part of the
** current check-out. See also the "clean" command. If paths are specified,

**    --abs-paths             Display absolute pathnames
**    --case-sensitive BOOL   Override case-sensitive setting
**    --dotfiles              Include files beginning with a dot (".")
**    --header                Identify the repository if there are extras
**    --ignore CSG            Ignore files matching patterns from the argument
**    --rel-paths             Display pathnames relative to the current working
**                            directory
**    --tree                  Show output in the tree format
**
** See also: [[changes]], [[clean]], [[status]]
*/
void extras_cmd(void){
  Blob report = BLOB_INITIALIZER;
  const char *zIgnoreFlag = find_option("ignore",0,1);
  unsigned scanFlags = find_option("dotfiles",0,0)!=0 ? SCAN_ALL : 0;
  unsigned flags = C_EXTRA;
  int showHdr = find_option("header",0,0)!=0;
  int treeFmt = find_option("tree",0,0)!=0;
  Glob *pIgnore;

  if( find_option("temp",0,0)!=0 ) scanFlags |= SCAN_TEMP;
  db_must_be_within_tree();

  if( determine_cwd_relative_option() ){
    flags |= C_RELPATH;
  }

  if( db_get_boolean("dotfiles", 0) ) scanFlags |= SCAN_ALL;

  if( treeFmt ){
    flags &= ~C_RELPATH;
  }

  /* We should be done with options.. */
  verify_all_options();

  if( zIgnoreFlag==0 ){
    zIgnoreFlag = db_get("ignore-glob", 0);
  }
  pIgnore = glob_create(zIgnoreFlag);
  locate_unmanaged_files(g.argc-2, g.argv+2, scanFlags, pIgnore);
  glob_free(pIgnore);

  blob_zero(&report);
  status_report(&report, flags);
  if( blob_size(&report) ){
    if( showHdr ){
      fossil_print("Extras for %s at %s:\n", db_get("project-name","<unnamed>"),
                   g.zLocalRoot);
    }
    if( treeFmt ){
      print_filelist_as_tree(&report);
    }else{
      blob_write_to_file(&report, "-");
    blob_write_to_file(&report, "-");
    }
  }
  blob_reset(&report);
}

/*
** COMMAND: clean
**

Changes to src/file.c.


  return z;
}

/*
** Count the number of objects (files and subdirectories) in a given
** directory.  Return the count.  Return -1 if the object is not a
** directory.
**
** This routine never counts the two "." and ".." special directory
** entries, even if the provided glob would match them.
*/
int file_directory_size(const char *zDir, const char *zGlob, int omitDotFiles){
  void *zNative;
  DIR *d;
  int n = -1;
  zNative = fossil_utf8_to_path(zDir,1);
  d = opendir(zNative);
  if( d ){
    struct dirent *pEntry;
    n = 0;
    while( (pEntry=readdir(d))!=0 ){
      if( pEntry->d_name[0]==0 ) continue;
      if( pEntry->d_name[0]=='.' &&
      if( omitDotFiles && pEntry->d_name[0]=='.' ) continue;
          (omitDotFiles
           /* Skip the special "." and ".." entries. */
           || pEntry->d_name[1]==0
           || (pEntry->d_name[1]=='.' && pEntry->d_name[2]==0))){
        continue;
      }
      if( zGlob ){
        char *zUtf8 = fossil_path_to_utf8(pEntry->d_name);
        int rc = sqlite3_strglob(zGlob, zUtf8);
        fossil_path_free(zUtf8);
        if( rc ) continue;
      }
      n++;

Changes to src/graph.c.


  aMap = p->aiRailMap;
  for(i=0; i<=p->mxRail; i++) aMap[i] = i; /* Set up a default mapping */
  if( nTimewarp==0 ){
    /* Priority bits:
    **
    **    0x04      The preferred branch
    **
    **    0x02      A merge rail - a rail that contains merge lines into
    **    0x02      A merge rail - a rail that contains merge lines
    **              the preferred branch.  Only applies if a preferred branch
    **              is defined.  This improves the display of r=BRANCH
    **              options to /timeline.
    **
    **    0x01      A rail that merges with the preferred branch
    */
    u8 aPriority[GR_MAX_RAIL];
    memset(aPriority, 0, p->mxRail+1);
    if( zLeftBranch ){
      char *zLeft = persistBranchName(p, zLeftBranch);
      for(pRow=p->pFirst; pRow; pRow=pRow->pNext){
        if( pRow->zBranch==zLeft ){
          aPriority[pRow->iRail] |= 4;
          for(i=0; i<=p->mxRail; i++){
            if( pRow->mergeIn[i] ) aPriority[i] |= 1;
          }
          if( pRow->mergeOut>=0 ) aPriority[pRow->mergeOut] |= 1;
        }
      }
      for(i=0; i<=p->mxRail; i++){
        if( p->mergeRail & BIT(i) ){
          aPriority[i] |= 2;
        }
      }
    }else{
      j = 1;
      aPriority[0] = 4;
      for(pRow=p->pFirst; pRow; pRow=pRow->pNext){
        if( pRow->iRail==0 ){
          for(i=0; i<=p->mxRail; i++){
            if( pRow->mergeIn[i] ) aPriority[i] |= 1;
          }
          if( pRow->mergeOut>=0 ) aPriority[pRow->mergeOut] |= 1;
        }
      }
    }
    for(i=0; i<=p->mxRail; i++){
      if( p->mergeRail & BIT(i) ){
        aPriority[i] |= 2;
      }
    }

#if 0
    fprintf(stderr,"mergeRail: 0x%llx\n", p->mergeRail);
    fprintf(stderr,"Priority:");
    for(i=0; i<=p->mxRail; i++) fprintf(stderr," %d", aPriority[i]);
    fprintf(stderr,"\n");

Changes to src/path.c.


  for(p=path.pStart; p; p=p->u.pTo){
    int fnid, pfnid;
    if( !p->fromIsParent && (p->u.pTo==0 || p->u.pTo->fromIsParent) ){
      /* Skip nodes where the parent is not on the path */
      continue;
    }
    db_bind_int(&q1, ":mid", p->rid);
    if( zDebug ){
      fossil_print("%s check-in %.16z %z rid %d\n",
         zDebug,
         db_text(0, "SELECT uuid FROM blob WHERE rid=%d", p->rid),
         db_text(0, "SELECT date(mtime) FROM event WHERE objid=%d", p->rid),
         p->rid
      );
    }
    while( db_step(&q1)==SQLITE_ROW ){
      fnid = db_column_int(&q1, 1);
      pfnid = db_column_int(&q1, 0);
      if( pfnid==0 ){
        pfnid = fnid;
        fnid = 0;
      }
      if( !p->fromIsParent ){
        int t = fnid;
        fnid = pfnid;
        pfnid = t;
      }
      if( zDebug ){
        fossil_print("%s %d[%z] -> %d[%z]\n",
           zDebug,
        fossil_print("%s at %d%s %.10z: %d[%z] -> %d[%z]\n",
           zDebug, p->rid, p->fromIsParent ? ">" : "<",
           db_text(0, "SELECT uuid FROM blob WHERE rid=%d", p->rid),
           pfnid,
           db_text(0, "SELECT name FROM filename WHERE fnid=%d", pfnid),
           fnid,
           db_text(0, "SELECT name FROM filename WHERE fnid=%d", fnid));
      }
      for(pChng=pAll; pChng; pChng=pChng->pNext){
        if( pChng->curName==pfnid ){

Changes to src/stash.c.

579
580
581
582
583
584
585
586

587
588
589
590
591
592
593
594
595
596
597
598
599
600
601

602
603
604
605
606
607
608
579
580
581
582
583
584
585

586
587
588




589
590
591
592
593
594
595
596
597
598
599
600
601
602
603
604
605







-
+


-
-
-
-









+







      fossil_fatal("nothing to stash");
    }
    stashid = stash_create();
    undo_disable();
    if( g.argc>=2 ){
      int nFile = db_int(0, "SELECT count(*) FROM stashfile WHERE stashid=%d",
                         stashid);
      char **newArgv;
      char **newArgv = fossil_malloc( sizeof(char*)*(nFile+2) );
      int i = 2;
      Stmt q;
      if( nFile==0 ){
        fossil_fatal("No modified files match the provided pattern.");
      }
      newArgv = fossil_malloc( sizeof(char*)*(nFile+2) );
      db_prepare(&q,"SELECT origname FROM stashfile WHERE stashid=%d", stashid);
      while( db_step(&q)==SQLITE_ROW ){
        newArgv[i++] = mprintf("%s%s", g.zLocalRoot, db_column_text(&q, 0));
      }
      db_finalize(&q);
      newArgv[0] = g.argv[0];
      newArgv[1] = 0;
      g.argv = newArgv;
      g.argc = nFile+2;
      if( nFile==0 ) return;
    }
    /* Make sure the stash has committed before running the revert, so that
    ** we have a copy of the changes before deleting them. */
    db_commit_transaction();
    g.argv[1] = "revert";
    revert_cmd();
    fossil_print("stash %d saved\n", stashid);

Changes to src/timeline.c.


**                    match a check-in that is already in the timeline.
**    sel2=TIMEORTAG  Like sel1= but use the secondary highlight.
**    n=COUNT         Maximum number of events. "all" for no limit
**    n1=COUNT        Same as "n" but doesn't set the display-preference cookie
**                       Use "n1=COUNT" for a one-time display change
**    p=CHECKIN       Parents and ancestors of CHECKIN
**                       bt=PRIOR   ... going back to PRIOR
**                       p2=CKIN2   ... use CKIN2 if CHECKIN is not found
**    d=CHECKIN       Children and descendants of CHECKIN
**                       d2=CKIN2        ... Use CKIN2 if CHECKIN is not found
**                       ft=DESCENDANT   ... going forward to DESCENDANT
**    dp=CHECKIN      Same as 'd=CHECKIN&p=CHECKIN'
**    dp2=CKIN2       Same as 'd2=CKIN2&p2=CKIN2'
**    df=CHECKIN      Same as 'd=CHECKIN&n1=all&nd'.  Mnemonic: "Derived From"
**    bt=CHECKIN      "Back To".  Show ancestors going back to CHECKIN
**                       p=CX       ... from CX back to time of CHECKIN
**                       from=CX    ... shortest path from CX back to CHECKIN
**    ft=CHECKIN      "Forward To":  Show decendents forward to CHECKIN
**                       d=CX       ... from CX up to the time of CHECKIN
**                       from=CX    ... shortest path from CX up to CHECKIN
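To illustrate the fallback parameters documented above, two hypothetical /timeline URLs (the tag names are only examples): the first shows ancestors of version-2.25, or of trunk if that tag does not exist; the second does the same for the combined dp= form.

    /timeline?p=version-2.25&p2=trunk&y=ci
    /timeline?dp=version-2.24&dp2=trunk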
(hunk: lines 1845-1859 → 1842-1855)
  int from_rid = name_to_typed_rid(P("from"),"ci"); /* from= for paths */
  const char *zTo2 = 0;
  int to_rid = name_choice("to","to2",&zTo2);    /* to= for path timelines */
  int noMerge = P("shortest")==0;           /* Follow merge links if shorter */
  int me_rid = name_to_typed_rid(P("me"),"ci");  /* me= for common ancestory */
  int you_rid = name_to_typed_rid(P("you"),"ci");/* you= for common ancst */
  int pd_rid;
  const char *zDPName;                /* Value of p=, d=, or dp= params */
  double rBefore, rAfter, rCirca;     /* Boundary times */
  const char *z;
  char *zOlderButton = 0;             /* URL for Older button at the bottom */
  char *zOlderButtonLabel = 0;        /* Label for the Older Button */
  char *zNewerButton = 0;             /* URL for Newer button at the top */
  char *zNewerButtonLabel = 0;        /* Label for the Newer button */
  int selectedRid = 0;                /* Show a highlight on this RID */
(hunk: lines 1904-1919 → 1900-1917)
      }
    }
  }else{
    nEntry = 50;
  }

  /* Query parameters d=, p=, and f= and variants */
  z = P("p");
  p_rid = name_choice("p","p2", &zDPName);
  d_rid = name_choice("d","d2", &zDPName);
  p_rid = z ? name_to_typed_rid(z,"ci") : 0;
  z = P("d");
  d_rid = z ? name_to_typed_rid(z,"ci") : 0;
  z = P("f");
  f_rid = z ? name_to_typed_rid(z,"ci") : 0;
  z = P("df");
  if( z && (d_rid = name_to_typed_rid(z,"ci"))!=0 ){
    nEntry = 0;
    useDividers = 0;
    cgi_replace_query_parameter("d",fossil_strdup(z));
(hunk: lines 1936-1950 → 1934-1948)
  ** present or if this repository lacks a "cherrypick" table. */
  if( PB("ncp") || !db_table_exists("repository","cherrypick") ){
    showCherrypicks = 0;
  }

  /* To view the timeline, must have permission to read project data.
  */
  pd_rid = name_choice("dp","dp2",&zDPName);
  pd_rid = name_to_typed_rid(P("dp"),"ci");
  if( pd_rid ){
    p_rid = d_rid = pd_rid;
  }
  login_check_credentials();
  if( (!g.perm.Read && !g.perm.RdTkt && !g.perm.RdWiki && !g.perm.RdForum)
   || (bisectLocal && !g.perm.Setup)
  ){
(hunk: lines 2329-2343 → 2327-2341)
      if( !haveParameterN ) nEntry = 10;
    }
    db_multi_exec(
       "CREATE TEMP TABLE IF NOT EXISTS ok(rid INTEGER PRIMARY KEY)"
    );
    zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d",
                         p_rid ? p_rid : d_rid);
    zCiName = zDPName;
    zCiName = pd_rid ? P("pd") : p_rid ? P("p") : P("d");
    if( zCiName==0 ) zCiName = zUuid;
    blob_append_sql(&sql, " AND event.objid IN ok");
    nd = 0;
    if( d_rid ){
      Stmt s;
      double rStopTime = 9e99;
      zFwdTo = P("ft");
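The d= and ft= handling shown here drives URLs of roughly this shape (check-in names are placeholders, following the query-parameter documentation earlier in this diff):

    /timeline?d=version-2.23&ft=version-2.24&y=ci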

Changes to test/amend.test.

(hunk: lines 294-316 → 294-317)
  {000000 lower Upper alpha 0alpha} {000000 0alpha Upper alpha lower}
}
set tc 0
foreach {tagt result} $tagtests {
  incr tc
  set tags {}
  set cancels {}
  set t1exp [join $result ", "]
  set t1exp ""
  set t2exp "*"
  set t3exp "*"
  set t5exp "*"
  foreach tag $tagt {
    lappend tags -tag $tag
    lappend cancels -cancel $tag
  }
  foreach res $result {
    append t1exp ", $res"
    append t3exp "Add*tag*\"$res\".*"
    append t5exp "Cancel*tag*\"$res\".*"
  }
  foreach res [lsort -nocase $result] {
    append t2exp "sym-$res*"
  }
  eval fossil amend $HASH $tags
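The test above builds the -tag/-cancel argument lists programmatically; done by hand, the equivalent invocation looks something like this (the hash and tag names are placeholders):

    fossil amend abcd1234 -tag alpha -cancel beta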

Changes to test/merge5.test.

(hunk: lines 21-35 → 21-35)
if {! $::QUIET} {
  puts "Skipping Merge5 tests"
}
protOut {
fossil sqlite3 --no-repository reacts badly to SQL dumped from
repositories created from fossil older than version 2.0.
}
#test merge5-sqlite3-issue false knownBug
test merge5-sqlite3-issue false knownBug
test_cleanup_then_return

# Verify the results of a check-out
#
proc checkout-test {testid expected_content} {
  set flist {}
  foreach {status filename} [exec $::fossilexe ls -l] {

Changes to test/stash.test.

(hunk: lines 166-183 → 166-183)
@@ -0,0 +1,1 @@
+f0}

########
# fossil stash show|cat ?STASHID? ?DIFF-OPTIONS?
# fossil stash [g]diff ?STASHID? ?DIFF-OPTIONS?

#fossil stash show
#test stash-1-show {[normalize_result] eq $diff_stash_1}
#fossil stash diff
#test stash-1-diff {[normalize_result] eq $diff_stash_1} knownBug
fossil stash show
test stash-1-show {[normalize_result] eq $diff_stash_1}
fossil stash diff
test stash-1-diff {[normalize_result] eq $diff_stash_1} knownBug

########
# fossil stash pop

stash-test 2 pop {
  DELETE f1
  UPDATE f2
(hunk: lines 204-223 → 204-223)
#   64 bit intel, 8-byte pointer, 4-byte integer
# Stashed renamed file said:
# fossil: ./src/delta.c:231: checksum: Assertion '...' failed.
# Should be triggered by this stash-WY-1 test.
fossil checkout --force c1
fossil clean
fossil mv --soft f1 f1new
#stash-test WY-1 {-expectError save -m "Reported 2016-02-09"} {
#  REVERT   f1
#  DELETE   f1new
#} -changes {
#} -addremove {
#} -exists {f1 f2 f3} -notexists {f1new} -knownbugs {-code -result}
stash-test WY-1 {-expectError save -m "Reported 2016-02-09"} {
  REVERT   f1
  DELETE   f1new
} -changes {
} -addremove {
} -exists {f1 f2 f3} -notexists {f1new} -knownbugs {-code -result}
# TODO: add tests that verify the saved stash is sensible. Possibly
# by applying it and checking results. But until the SQLITE_CONSTRAINT
# error is fixed, there is nothing stashed to test.



# Test stashing the combination of a renamed file and an added file that
(hunk: lines 294-314 → 294-313)
  RENAME f2 f2n
  MOVED_FILE} [file normalize f2] {
}] -changes {
  RENAMED f2  ->  f2n
} -addremove {
} -exists {f1 f2n} -notexists {f2}

fossil stash save -m f2n
#stash-test 3-2 {save -m f2n} {
#  REVERT f2
#  DELETE f2n
#} -exists {f1 f2} -notexists {f2n} -knownbugs {-result}
stash-test 3-2 {save -m f2n} {
  REVERT f2
  DELETE f2n
} -exists {f1 f2} -notexists {f2n} -knownbugs {-result}
fossil stash show
#test stash-3-2-show-1 {![regexp {\sf1} $RESULT]} knownBug
test stash-3-2-show-1 {![regexp {\sf1} $RESULT]} knownBug
test stash-3-2-show-2 {[regexp {\sf2n} $RESULT]}
stash-test 3-2-pop {pop} {
  UPDATE f1
  UPDATE f2n
} -changes {
  RENAMED    f2  ->  f2n
} -addremove {

Changes to test/tester.tcl.

(hunk: lines 306-320 → 306-319)
      clean-glob \
      clearsign \
      comment-format \
      crlf-glob \
      crnl-glob \
      default-csp \
      default-perms \
      default-skin \
      diff-binary \
      diff-command \
      dont-commit \
      dont-push \
      dotfiles \
      editor \
      email-admin \

Changes to test/utf.test.

(hunk: lines 33-47 → 33-47)
proc utf-check {testname args} {
  global tempPath
  set i 1
  foreach {fileName result} $args {
    set fileName [file join $tempPath $fileName]
    fossil test-looks-like-utf $fileName
    set result [string map [list %TEMP% $tempPath \r\n \n] $result]
    # if {$::RESULT ne $result} {puts stdout $::RESULT; exit}
    # if {$::RESULT ne $result} {puts stdout $::RESULT}
    test utf-check-$testname.$i {$::RESULT eq $result}
    incr i
  }
}

unset -nocomplain enc
array set enc [list     \
(hunk: lines 17607-17635 → 17607-17635)
Has flag LOOK_LONE_LF: no
Has flag LOOK_CRLF: no
Has flag LOOK_LONG: no
Has flag LOOK_INVALID: no
Has flag LOOK_ODD: no
Has flag LOOK_SHORT: no}

#utf-check 1179 utf-check-1179-2-129-1.jnk \
#{File "%TEMP%/utf-check-1179-2-129-1.jnk" has 7 bytes.
#Starts with UTF-8 BOM: no
#Starts with UTF-16 BOM: yes
#Looks like UTF-8: yes
#Has flag LOOK_NUL: no
#Has flag LOOK_CR: no
#Has flag LOOK_LONE_CR: no
#Has flag LOOK_LF: no
#Has flag LOOK_LONE_LF: no
#Has flag LOOK_CRLF: no
#Has flag LOOK_LONG: no
#Has flag LOOK_INVALID: yes
#Has flag LOOK_ODD: no
#Has flag LOOK_SHORT: no}
utf-check 1179 utf-check-1179-2-129-1.jnk \
{File "%TEMP%/utf-check-1179-2-129-1.jnk" has 7 bytes.
Starts with UTF-8 BOM: no
Starts with UTF-16 BOM: yes
Looks like UTF-8: no
Has flag LOOK_NUL: yes
Has flag LOOK_CR: no
Has flag LOOK_LONE_CR: no
Has flag LOOK_LF: no
Has flag LOOK_LONE_LF: no
Has flag LOOK_CRLF: no
Has flag LOOK_LONG: no
Has flag LOOK_INVALID: yes
Has flag LOOK_ODD: no
Has flag LOOK_SHORT: no}

utf-check 1180 utf-check-1180-2-130-0.jnk \
{File "%TEMP%/utf-check-1180-2-130-0.jnk" has 4 bytes.
Starts with UTF-8 BOM: no
Starts with UTF-16 BOM: yes
Looks like UTF-16: yes
Has flag LOOK_NUL: no
(hunk: lines 24119-24163 → 24119-24163)
Has flag LOOK_LONE_LF: no
Has flag LOOK_CRLF: no
Has flag LOOK_LONG: no
Has flag LOOK_INVALID: yes
Has flag LOOK_ODD: no
Has flag LOOK_SHORT: no}

#utf-check 1586 utf-check-1586-3-128-0.jnk \
#{File "%TEMP%/utf-check-1586-3-128-0.jnk" has 6 bytes.
#Starts with UTF-8 BOM: no
#Starts with UTF-16 BOM: reversed
#Looks like UTF-16: no
#Has flag LOOK_NUL: yes
#Has flag LOOK_CR: no
#Has flag LOOK_LONE_CR: no
#Has flag LOOK_LF: no
#Has flag LOOK_LONE_LF: no
#Has flag LOOK_CRLF: no
#Has flag LOOK_LONG: no
#Has flag LOOK_INVALID: no
#Has flag LOOK_ODD: no
#Has flag LOOK_SHORT: no}
utf-check 1586 utf-check-1586-3-128-0.jnk \
{File "%TEMP%/utf-check-1586-3-128-0.jnk" has 6 bytes.
Starts with UTF-8 BOM: no
Starts with UTF-16 BOM: no
Looks like UTF-8: no
Has flag LOOK_NUL: yes
Has flag LOOK_CR: no
Has flag LOOK_LONE_CR: no
Has flag LOOK_LF: no
Has flag LOOK_LONE_LF: no
Has flag LOOK_CRLF: no
Has flag LOOK_LONG: no
Has flag LOOK_INVALID: yes
Has flag LOOK_ODD: no
Has flag LOOK_SHORT: no}

#utf-check 1587 utf-check-1587-3-128-1.jnk \
#{File "%TEMP%/utf-check-1587-3-128-1.jnk" has 7 bytes.
#Starts with UTF-8 BOM: no
#Starts with UTF-16 BOM: reversed
#Looks like UTF-8: no
#Has flag LOOK_NUL: yes
#Has flag LOOK_CR: no
#Has flag LOOK_LONE_CR: no
#Has flag LOOK_LF: no
#Has flag LOOK_LONE_LF: no
#Has flag LOOK_CRLF: no
#Has flag LOOK_LONG: no
#Has flag LOOK_INVALID: yes
#Has flag LOOK_ODD: no
#Has flag LOOK_SHORT: no}
utf-check 1587 utf-check-1587-3-128-1.jnk \
{File "%TEMP%/utf-check-1587-3-128-1.jnk" has 7 bytes.
Starts with UTF-8 BOM: no
Starts with UTF-16 BOM: no
Looks like UTF-8: no
Has flag LOOK_NUL: yes
Has flag LOOK_CR: no
Has flag LOOK_LONE_CR: no
Has flag LOOK_LF: no
Has flag LOOK_LONE_LF: no
Has flag LOOK_CRLF: no
Has flag LOOK_LONG: no
Has flag LOOK_INVALID: yes
Has flag LOOK_ODD: no
Has flag LOOK_SHORT: no}

utf-check 1588 utf-check-1588-3-129-0.jnk \
{File "%TEMP%/utf-check-1588-3-129-0.jnk" has 6 bytes.
Starts with UTF-8 BOM: no
Starts with UTF-16 BOM: no
Looks like UTF-8: no
Has flag LOOK_NUL: yes

Changes to www/changes.wiki.

(hunk: lines 1-17 → 1-17)
<title>Change Log</title>

<h2 id='v2_24'>Changes for version 2.24 (2024-04-23)</h2>
<h2 id='v2_24'>Changes for version 2.24 (pending)</h2>

  *  Apache change work-around &rarr; As part of a security fix, the Apache webserver
     mod_cgi module has stopped relaying the Content-Length field of the HTTP
     reply header from the CGI programs back to the client in cases where the
     connection is to be closed and the client is able to read until end-of-file.
     The HTTP and CGI specs allow for this, though it does seem rude.
     Older versions of Fossil were depending on the Content-Length header field
     Older versions of Fossil was depending on the Content-Length header field
     being set.  To work around the change to Apache, Fossil has
     been enhanced to cope with a missing Content-Length in the reply header.  See
     [forum:/forumpost/12ac403fd29cfc89|forum thread 12ac403fd29cfc89].
  *  [./customskin.md|Skin] enhancements:
     <ul>
     <li>  Reworked the default skin to make everything more readable: larger
           fonts, more whitespace, deeper indents to show hierarchy and to
(hunk: lines 39-53 → 39-53)
     <li> Automatically highlight the endpoints for from=,to= queries.
     <li> Add the to2=Z query parameter to augment from=X,to=Y so that the
          path from X to Z is shown if Y cannot be found.
     </ul>
  *  Moved the /museum/repo.fossil file referenced from the Dockerfile from
     the ENTRYPOINT to the CMD part to allow use of --repolist mode.
  *  The [/uvlist] page now shows the hash algorithm used so that
     viewers don't have to guess.  The hash is shown in a fixed-width
     viewers don't have to guess.  And the hash is shows in a fixed-width
     font for a more visually pleasing display.
  *  If the [/help?cmd=autosync|autosync setting] contains keyword "all",
     the automatic sync occurs against all defined remote repositories, not
     just the default.
  *  Markdown formatter: improved handling of indented fenced code blocks
     that contain blank lines.
  *  When doing a "[/help?cmd=add|fossil add]" on a system with case-insensitive
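The autosync bullet a few lines up can be exercised directly. Assuming a second remote has been configured (the name and URL below are placeholders; see "fossil help remote" for the exact add syntax), enabling sync against all remotes is a one-line setting:

    fossil remote add mirror https://example.org/alt-repo
    fossil setting autosync all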

Deleted www/colordiff.md.

(lines 1-104, entire file removed)
# Colorized Diffs

The oldest and most widely compatible method to get colorized diffs in
Fossil is to use its web UI:

    fossil ui --page '/vdiff?from=2024-04-01&to=trunk'

That syntax is admittedly awkward, and it doesn’t work where “from” is
the current checkout.  Fortunately, there are many other methods to get
colorized `diff` output from Fossil.


<a id="ui"></a>
## `fossil diff -b`

This produces a graphical diff in HTML format and sends it to the
user’s default web browser for viewing.



<a id="ui"></a>
## `fossil diff -tk`

You may be surprised to learn that the prior feature doesn’t use any of
the skinning or chrome from Fossil UI. This is because it is meant as a
functional replacement for an older method of getting colorized diffs,
“`fossil diff -tk`”. The feature was added after Apple stopped shipping
Tcl/Tk in macOS, and the third-party replacements often failed to work
correctly. It’s useful on other platforms as well.


<a id="git"></a>
## Delegate to Git

It may be considered sacrilege by some, but the most direct method for
those who want Git-like diff behavior may be to delegate diff behavior
to Git:

    fossil set --global diff-command 'git diff --no-index'

The flag permits it to diff files that aren’t inside a Git repository.


<a id="diffutils"></a>
## GNU Diffutils

If your system is from 2016 or later, it may include [GNU Diffutils][gd]
3.4 or newer, which lets you say:

    fossil set --global diff-command 'diff -dwu --color=always'

You might think you could give `--color=auto`, but that fails with
commands like “`fossil diff | less`” since the pipe turns the output
non-interactive from the perspective of the underlying `diff` instance.

This use of unconditional colorization means you will then have to
remember to add the `-i` option to `fossil diff` commands when producing
`patch(1)` files or piping diff output to another command that doesn’t
understand ANSI escape sequences, such as [`diffstat`][ds].

[ds]: https://invisible-island.net/diffstat/
[gd]: https://www.gnu.org/software/diffutils/


<a id="bat"></a>
## Bat, the Cat with Wings

We can work around the `--color=auto` problem by switching from GNU less
as our pager to [`bat`][bat], as it can detect GNU diff output and
colorize it for you:

    fossil set --global diff-command 'diff -dwu --color=auto'
    fossil diff | bat

In this author’s experience, that works a lot more reliably than GNU
less’s ANSI color escape code handling, even when you set `LESS=-R` in
your environment.

The reason we don’t leave the `diff-command` unset in this case is that
Fossil produces additional lines at the start which confuse the diff
format detection in `bat`. Forcing output through an external diff
command solves that. It also means that if you forget to pipe the output
through `bat`, you still get colorized output from GNU diff.

[bat]: https://github.com/sharkdp/bat


<a id="colordiff"></a>
## Colordiff

A method that works on systems predating GNU diffutils 3.4 or the
widespread availability of `bat` is to install [`colordiff`][cdurl], as
it is included in [many package systems][cdpkg], including ones for
outdated OSes. That then lets you say:

    fossil set --global diff-command 'colordiff -dwu'

The main reason we list this alternative last is that it has the same
limitation of unconditional color as [above](#diffutils).

[cdurl]: https://www.colordiff.org/
[cdpkg]: https://repology.org/project/colordiff/versions

<div style="height:50em" id="this-space-intentionally-left-blank"></div>
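Condensed, the deleted page above comes down to choosing one of a few one-line settings, depending on which tool is installed (all three lines appear verbatim in the removed text):

    fossil set --global diff-command 'git diff --no-index'
    fossil set --global diff-command 'diff -dwu --color=always'
    fossil set --global diff-command 'colordiff -dwu'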

Changes to www/embeddeddoc.wiki.

(hunk: lines 239-253 → 239-253)
</ul>

When the symbolic name is a date and time, fossil shows the version
of the document that was most recently checked in as of the date
and time specified.  So, for example, to see what the fossil website
looked like at the beginning of 2010, enter:

<pre><a href="https://fossil-scm.org/home/doc/2010-01-01/www/index.wiki">https://fossil-scm.org/home/doc/<b>2010-01-01</b>/www/index.wiki
<pre><a href="/doc/2010-01-01/www/index.wiki">https://fossil-scm.org/home/doc/<b>2010-01-01</b>/www/index.wiki
</a></pre>

The file that encodes this document is stored in the fossil source tree under
the name "<b>www/embeddeddoc.wiki</b>" and so that name forms the
last part of the URL for this document.

As I sit writing this documentation file, I am testing my work by
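The same /doc URL scheme accepts any check-in name, not just a date. For instance, while "fossil ui" is running in a check-out, a local preview of this very file would be reachable at something like the following (the port fossil ui reports may differ):

    http://localhost:8080/doc/ckout/www/embeddeddoc.wiki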

Changes to www/gitusers.md.

(hunk: lines 137-153 → 137-153)
[gpull]:   https://git-scm.com/docs/git-pull
[gcokoan]: https://stevelosh.com/blog/2013/04/git-koans/#s2-one-thing-well


#### <a id="close" name="dotfile"></a> Closing a Check-Out

The [`fossil close`][close] command dissociates a check-out directory from the
Fossil repository database, _nondestructively_ inverting [`fossil open`][open].
(Contrast Git’s [closest alternative](#worktree), `git worktree remove`, which *is*
destructive!) This Fossil command does not
Fossil repository database, nondestructively inverting [`fossil open`][open].
(Contrast [its closest inverse](#worktree), `git worktree remove`, which *is*
destructive in Git!) This Fossil command does not
remove the managed files, and unless you give the `--force`
option, it won’t let you close the check-out with uncommitted changes to
those managed files.

The `close` command also refuses to run without `--force` when you have
certain other precious per-checkout data that Fossil stores in the
`.fslckout` file at the root of a check-out directory. This is a SQLite
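A minimal open/close round trip matching the paragraph above, assuming a repository file named repo.fossil one directory up (both names are placeholders):

    fossil open ../repo.fossil
    fossil close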
(hunk: lines 298-316 → 298-316)
constants are equal.

Yet the constants are *not* equal because Fossil
reads from a single disk file rather than visit potentially many
files in sequence as Git must, so the OS’s buffer cache can result in
[still better performance][35pct].

Unlike Git’s log, Fossil’s timeline shows info across all branches by
default, a feature for maintaining better situational awareness.
It is possible to restrict the timeline to a single branch using `fossil timeline -b`.
Similarly, to restrict the timeline using the web UI equivalent,
click the name of a branch on the `/timeline` or `/brlist` page. (Or
Unlike Git’s log, Fossil’s timeline shows info across branches by
default, a feature for maintaining better situational awareness. Although the
`fossil timeline` command has no way to show a single branch’s commits,
you can restrict your view like this using the web UI equivalent by
clicking the name of a branch on the `/timeline` or `/brlist` page. (Or
manually, by adding the `r=` query parameter.) Note that even in this
case, the Fossil timeline still shows other branches where they interact
with the one you’ve referenced in this way; again, better situational
awareness.
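For example, to list only recent trunk activity from the command line, using the -b option named in the paragraph above (the entry count is arbitrary):

    fossil timeline -b trunk -n 20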


#### <a id="emu-log"></a> Emulating `git log`
(hunk: lines 455-551 → 455-523)
character and such, because it doesn’t recognize Fossil commit messages
and apply similar rules as to Git commit messages.

[cskin]: ./customskin.md
[lsl]:   https://chris.beams.io/posts/git-commit/#limit-50


<a id="autocommit"></a>
## Fossil Never Auto-Commits

There are several features in Git besides its `commit` command that
produce a new commit to the repository, and by default, they do it
without prompting or even asking for a commit message. These include
Git’s [`rebase`](#rebase), `merge`, and [`cherrypick`](#cpickrev)
commands, plus the [commit splitting](#comsplit) sub-feature
“`git commit -p`”.

Fossil never does this, on firm philosophical grounds: we wish to be
able to test that each potentially repository-changing command does not
break anything _before_ freezing it immutably into the [Merkle
tree](./blockchain.md). Where Fossil has equivalent commands, they
modify the checkout tree alone, requiring a separate `commit` command
afterward, withheld until the user has satisfied themselves that the
command’s result is correct.

We believe this is the main reason Git lacks an [autosync](#autosync)
feature: making push a manual step gives the user a chance to rewrite
history after triggering one of these autocommits locally, should the
automatic commit fail to work out as expected.  Fossil chooses the
inverse path under the philosophy that commits are *commitments,* not
something you’re allowed to go back and rewrite later.

This is also why there is no automatic commit message writing feature in
Fossil, as in these autocommit-triggering Git commands. The user is
meant to write the commit message by hand after they are sure it’s
correct, in clear-headed retrospective fashion.  Having the tool do it
prospectively before one can test the result is simply backwards.


<a id="staging"></a>
## There Is No Staging Area

Fossil omits the "Git index" or "staging area" concept.  When you
type "`fossil commit`" _all_ changes in your check-out are committed,
by default.  There is no need for the "-a" option as with Git.
automatically.  There is no need for the "-a" option as with Git.

If you only want to commit _some_ of the changes, list the names
of the files or directories you want to commit as arguments, like this:

    fossil commit src/feature.c doc/feature.md examples/feature

Note that the last element is a directory name, meaning “any changed
file under the `examples/feature` directory.”


Although there are currently no
<a id="comsplit"></a>
## Commit Splitting

<a id="csplit"></a>[commit splitting][gcspl] features in Fossil like
[Git’s commit splitting features][gcspl] rely on
other features of Git that Fossil purposefully lacks, as covered in the
prior two sections: [autocommit](#autocommit) and [the staging
area](#staging).

While there is no direct Fossil equivalent for
`git add -p`, `git commit -p`, or `git rebase -i`, you can get the same
effect by converting an uncommitted change set to a patch and then
running it through [Patchouli].

Rather than use `fossil diff -i` to produce such a patch, a safer and
more idiomatic method would be:

    fossil stash save -m 'my big ball-o-hackage'
    fossil stash diff > my-changes.patch

That stores your changes in the stash, then lets you operate on a copy
of that patch. Each time you re-run the second command, it will take the
current state of the working directory into account to produce a
potentially different patch, likely smaller because it leaves out patch
hunks already applied.

In this way, the combination of working tree and stash replaces the need
for Git’s index feature.

This also solves a philosophical problem with `git commit -p`: how can
you test that a split commit doesn’t break anything if you do it as part
of the commit action? Git’s lack of an autosync feature means you can
commit locally and then rewrite history if the commit doesn’t work out,
but we’d rather make changes only to the working directory, test the
changes there, and only commit once we’re sure it’s right.
We believe we know how to do commit splitting in a way compatible with
the Fossil philosophy, without following Git’s ill-considered design
leads. It amounts to automating the above process through an interactive
variant of [`fossil stash apply`][stash], as currently prototyped in the
third-party tool [`fnc`][fnc] and [its interactive `stash`
command][fncsta]. We merely await someone’s [contribution][ctrb] of this

This also explains why we don’t have anything like `git rebase -i`
to split an existing commit: in Fossil, commits are *commitments,* not
something you’re allowed to go back and rewrite later.

If someone does [contribute][ctrb] a commit splitting feature to Fossil,
wed expect it to be an interactive form of
[`fossil stash apply`][stash], rather than follow Git’s ill-considered
design leads.

Until then, there’s the third-party tool [`fnc`][fnc] and
[its interactive `stash` command][fncsta].
feature into Fossil proper.

[ctrb]:      https://fossil-scm.org/fossil/doc/trunk/www/contribute.wiki
[fnc]:       https://fnc.bsdbox.org/
[fncsta]:    https://fnc.bsdbox.org/uv/doc/fnc.1.html#stash
[gcspl]:     https://git-scm.com/docs/git-rebase#_splitting_commits
[Patchouli]: https://pypi.org/project/patchouli/

(hunk: lines 843-873 → 815-875)
draft was written, it’s been revised multiple times to address less
common objections as well. Chances are not good that you are going to
come up with a new objection that we haven’t already considered and
addressed there.

There is only one sub-feature of `git rebase` that is philosophically
compatible with Fossil yet which currently has no functional equivalent.
We [covered this and the workaround for its lack](#comsplit) above.
We [covered this and the workaround for its lack](#csplit) above.

[3]: ./rebaseharm.md


<a id="cdiff"></a>
## <a id="cdiff"></a> Colorized Diffs
## Colorized Diffs

When you run `git diff` on an ANSI X3.64 capable terminal, it uses color
to distinguish insertions, deletions, and replacements, but as of this
writing, `fossil diff` produces traditional uncolored [unified diff
format][udiff] output, suitable for producing a [patch file][pfile].

Nevertheless, there are multiple ways to get colorized diff output from
Fossil:
There are [many methods](./colordiff.md) for solving this.

*   The most direct method is to delegate diff behavior back to Git:

      fossil set --global diff-command 'git diff --no-index'

    The flag permits it to diff files that aren’t inside a Git repository.

*   Another method is to install [`colordiff`][cdiff] — included in
    [many package systems][cdpkg] — then say:

      fossil set --global diff-command 'colordiff -wu'

    Because this is unconditional, unlike `git diff --color=auto`, you
    will then have to remember to add the `-i` option to `fossil diff`
    commands when you want color disabled, such as when producing
    `patch(1)` files or piping diff output to another command that
    doesn’t understand ANSI escape sequences. There’s an example of this
    [below](#dstat).

*   Use the Fossil web UI to diff existing commits.

*   To diff the current working directory contents against some parent
    instead, Fossil’s diff command can produce
    colorized HTML output and open it in the OS’s default web browser.
    For example, `fossil diff -by` will show side-by-side diffs.

*   Use the older `fossil diff --tk` option to do much the same using
    Tcl/Tk instead of a browser.

Viewed this way, Fossil doesn’t lack colorized diffs, it simply has
*one* method where they *aren’t* colorized.

[cdpkg]: https://repology.org/project/colordiff/versions
[pfile]: https://en.wikipedia.org/wiki/Patch_(Unix)
[udiff]: https://en.wikipedia.org/wiki/Diff#Unified_format


## <a id="show"></a> Showing Information About Commits

While there is no direct equivalent to Git’s “`show`” command, similar
(hunk: lines 932-952 → 934-956)
The `--numstat` output is a bit cryptic, so we recommend delegating
this task to [the widely-available `diffstat` tool][dst], which gives
a histogram in its default output mode rather than bare integers:

    fossil diff -i -v --from 2020-04-01 | diffstat

We gave the `-i` flag in both cases to force Fossil to use its internal
diff implementation, bypassing [your local `diff-command` setting][dcset]
since the `--numstat` option has no effect when you have an external diff
command set.
diff implementation, bypassing [your local `diff-command` setting][dcset].
The `--numstat` option has no effect when you have an external diff
command set, and some diff command alternatives like
[`colordiff`][cdiff] (covered [above](#cdiff)) produce output that confuses `diffstat`.

If you leave off the `-v` flag in the second example, the `diffstat`
output won’t include info about any newly-added files.

[cdiff]: https://www.colordiff.org/
[dcset]: https://fossil-scm.org/home/help?cmd=diff-command
[dst]:   https://invisible-island.net/diffstat/diffstat.html


<a id="btnames"></a>
## Branch and Tag Names

Changes to www/hints.wiki.

(hunk: lines 1-13 → 1-11)
<title>Fossil Tips And Usage Hints</title>

A collection of useful hints and tricks in no particular order:

  1.  Click on two nodes of any timeline graph in succession
      to see a diff between the two versions.
  1.  Click on nodes of any timeline graph to see diffs between the two
      selected versions.

  2.  Add the "--tk" option to "[/help?cmd=diff | fossil diff]" commands
      to get a pop-up
      window containing a complete side-by-side diff.  (NB:  The pop-up
      window is run as a separate Tcl/Tk process, so you will need to
      have Tcl/Tk installed on your machine for this to work.  Visit
      [http://www.activestate.com/activetcl] to for a quick download of
(hunk: lines 58-74 → 56-62)
      on line numbers.   This feature only works right with files with
      a mimetype of text/plain, of course.

  10.  When editing documentation to be checked in as managed files, you can
       preview what the documentation will look like by using the special
       "ckout" branch name in the "doc" URL while running "fossil ui".
       See the [./embeddeddoc.wiki | embedded documentation] for details.

  11.  Use the "[/help?cmd=ui|fossil ui /]" command to bring up a menu of
       all of your local Fossil repositories in your web browser.

  12.  If you have a bunch of Fossil repositories living on a remote machine
       that you are able to access using ssh using a command like
       "ssh login@remote", then you can bring up a user interface for all
       those remote repositories using the command:
       "[/help?cmd=ui|fossil ui login@remote:/]".  This works by tunneling
       all HTTP traffic through SSH to the remote machine.

Changes to www/index.wiki.

(hunk: lines 82-101 → 82-101)
      atomic even if interrupted by a power loss or system crash.
      Automatic [./selfcheck.wiki | self-checks] verify that all aspects of
      the repository are consistent prior to each commit.

  8.  <b>Free and Open-Source</b> — [../COPYRIGHT-BSD2.txt|2-clause BSD license].

<hr>
<h3>Latest Release: 2.24 ([/timeline?c=version-2.24|2024-04-23])</h3>
<h3>Latest Release: 2.23 ([/timeline?c=version-2.23|2023-11-01])</h3>

  *  [/uv/download.html|Download]
  *  [./changes.wiki#v2_24|Change Summary]
  *  [/timeline?p=version-2.24&bt=version-2.23&y=ci|Check-ins in version 2.24]
  *  [/timeline?df=version-2.24&y=ci|Check-ins derived from the 2.24 release]
  *  [./changes.wiki#v2_23|Change Summary]
  *  [/timeline?p=version-2.23&bt=version-2.22&y=ci|Check-ins in version 2.23]
  *  [/timeline?df=version-2.23&y=ci|Check-ins derived from the 2.23 release]
  *  [/timeline?t=release|Timeline of all past releases]

<hr>
<h3>Quick Start</h3>

  1.  [/uv/download.html|Download] or install using a package manager or
      [./build.wiki|compile from sources].

Changes to www/mkindex.tcl.

(hunk: lines 33-47 → 33-46)
  chat.md {Fossil Chat}
  checkin_names.wiki {Check-in And Version Names}
  checkin.wiki {Check-in Checklist}
  childprojects.wiki {Child Projects}
  chroot.md {Server Chroot Jail}
  ckout-workflows.md {Check-Out Workflows}
  co-vs-up.md {Checkout vs Update}
  colordiff.md {Colorized Diffs}
  copyright-release.html {Contributor License Agreement}
  concepts.wiki {Fossil Core Concepts}
  contact.md {Developer Contact Information}
  containers.md {OCI Containers}
  contribute.wiki {Contributing Code or Documentation To The Fossil Project}
  css-tricks.md {Fossil CSS Tips and Tricks}
  customgraph.md {Theming: Customizing the Timeline Graph}

Changes to www/permutedindex.html.

(hunk: lines 33-47 → 33-46)
<li><a href="serverext.wiki">CGI Server Extensions</a></li>
<li><a href="checkin_names.wiki">Check-in And Version Names</a></li>
<li><a href="checkin.wiki">Check-in Checklist</a></li>
<li><a href="ckout-workflows.md">Check-Out Workflows</a></li>
<li><a href="foss-cklist.wiki">Checklist For Successful Open-Source Projects</a></li>
<li><a href="co-vs-up.md">Checkout vs Update</a></li>
<li><a href="childprojects.wiki">Child Projects</a></li>
<li><a href="colordiff.md">Colorized Diffs</a></li>
<li><a href="build.wiki">Compiling and Installing Fossil</a></li>
<li><a href="contribute.wiki">Contributing Code or Documentation To The Fossil Project</a></li>
<li><a href="copyright-release.html">Contributor License Agreement</a></li>
<li><a href="private.wiki">Creating, Syncing, and Deleting Private Branches</a></li>
<li><a href="customskin.md">Custom Skins</a></li>
<li><a href="custom_ticket.wiki">Customizing The Ticket System</a></li>
<li><a href="antibot.wiki">Defense against Spiders and Robots</a></li>